
Luminar’s largest customer: Tesla

...The irony is that it's becoming more and more evident (at least to me) that Pure Vision will not succeed in reaching L4 by itself, at least not for another 8-10 years. Adding lidar + radars to the sensor suite could likely accelerate Tesla L4 by several years, by shoring up the inherent deficiencies of pure vision, which are mostly to do with difficult environmental conditions or compromised image quality, exacerbated by the camera lenses being immovable and pressed right up against the glass (unlike human heads).

So the idea that Tesla may add lidar + radar back to HW5 next year (and I expect they very likely will to Robotaxi), ...
More and more people are starting to realize that their current tech stack will not be suitable for full robotaxi autonomy. But it isn't 8-10 years; it will never be suitable, there are just too many problems. Also keep in mind that functioning as a robotaxi will require level 5, not level 4: you have to "solve" self-driving to 100%, not 95%, not 99%, not 99.9%...100%. That "solving" is an exponential curve of difficulty; if Tesla is incredibly lucky they are about halfway there (AND they need to change the hardware to accommodate the shortcomings of the current hardware). So...never with the current tech stack.

The catch-22 they are in is that their current solution cannot work (and even if Elon is in denial, there absolutely are members of the FSD software team who know this), and they cannot admit this because of the liability of having sold FSD. They can't just add an extra $1k+ in hardware for a robotaxi vehicle, because then anyone who bought FSD is going to be owed that hardware.

No matter how you look at it, FSD can only end in a class-action lawsuit against Tesla, costing the company tens or hundreds of millions.
 
Also keep in mind to function as a robotaxi will require level 5 not level 4, you have to "solve" self driving to 100%, not 95%, not 99%, not 99.9%...100%

I've got to disagree there. You don't need level 5 to do robotaxis. We already have robotaxis on public roads, like Waymo, and they are level 4. Remember, you still need to "solve" self-driving to 99.9999% to do level 4; both level 4 and level 5 require 99.9999% reliability. The only difference between level 4 and level 5 is the ODD. Level 4 just means that there is a limit on the ODD, like a geofence. Level 5 is the same as level 4 except it needs to work everywhere, in all conditions humans can drive in.

Achieving 99.9999% reliability in a geofence is a lot easier than achieving 99.9999% everywhere, in all conditions. So it makes sense to do a L4 robotaxi before a L5 robotaxi. Plus, robotaxis usually only need to provide local rides, so geofencing makes perfect sense. You can geofence an area with a lot of customers that need to go to places inside that geofence and you can provide a meaningful ride-hailing service. You don't need a robotaxi to go everywhere. Most people don't summon a robotaxi to go hundreds of miles. So level 5 is not needed for robotaxis.

Lastly, "solving" self-driving to 100% is not realistic. Edge cases are infinite. There will always be unsolved edge cases even if very rare. And you can never guarantee zero failures. Hardware and software will have bugs. You just need to make them as rare as possible. You do need to get very close to 100%. That is why I say you need to achieve 99.9999% reliability.
 
There's no evidence here. I strongly doubt this.

There are Green videos of AEB activating and the vehicle slamming into obstacles that were visible well in advance.
It would be instructive to set up a truly controlled test for this, e.g. to have a mannequin pop out from behind a parked car and measure the time until the car reacts. I've seen 300-400ms figures mentioned in several places for Tesla's reaction time, though there's no substitute for measuring it in an actual scenario, then giving humans the same test and seeing which is more effective.
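For a sense of scale, here is a rough calculation of how far a car travels during the reaction time alone, before any braking happens (the speeds are arbitrary examples; 0.3-0.4 s are the figures quoted above, and 1.0-1.5 s is just a commonly cited ballpark for attentive humans, not a result from such a test):

```python
# Distance covered during the reaction time alone, before any braking.
# 0.3-0.4 s are the figures quoted above for the car; 1.0-1.5 s is a
# commonly cited ballpark for attentive human drivers. All assumptions.

MPH_TO_MPS = 0.44704

for speed_mph in (25, 45, 65):
    v = speed_mph * MPH_TO_MPS  # speed in m/s
    for reaction_s in (0.3, 0.4, 1.0, 1.5):
        print(f"{speed_mph} mph, {reaction_s:.1f} s reaction: "
              f"{v * reaction_s:.1f} m traveled before any response")
```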
Just based on disengaging routinely before the car does anything.
The irony here is that _because_ the car has a faster emergency reaction time, it can safely take more time to react in non-emergency situations. I proactively disengage all the time, but it doesn't mean the car's behavior (what it would have done had I not disengaged) isn't also correct or safe. Sometimes it does make obvious mistakes, admittedly, but that's different from having a slow reaction time.
FSDS is better than earlier iterations, but at best it's only on par with an alert human driver. That's all I've seen, and it's been extremely carefully and rigorously documented elsewhere here.

It’s way slower from noticing the yellow to applying the brakes than a human is, even with identical reaction times. It just does not react quickly; it eases the braking in over time even though it has perceived the light change. Not surprising, of course.
If the car has time to slow down gradually, there's no reason for it to start braking instantly. So this is not evidence of slow reaction time, just of a different driving style than you have.
Humans are super quick because as soon as that accelerator is released massive braking ensues.
Tesla regenerative braking (lifting the accelerator) is at best about 1/5 as strong as fully-pressed brakes. (~0.2g vs ~1g.) So I'm not sure I would call that "massive", especially in emergency situations. It's nice for non-emergency one-pedal driving, though!
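To put rough numbers on that, here is a quick sketch using the constant-deceleration stopping-distance formula d = v²/(2a); the ~0.2 g and ~1 g values are the approximations above, not measurements:

```python
# Stopping distance under constant deceleration: d = v^2 / (2*a).
# ~0.2 g (regen) and ~1 g (full friction braking) are the rough
# figures from the post above, not measured values.

G = 9.81             # m/s^2
MPH_TO_MPS = 0.44704

v = 45 * MPH_TO_MPS  # example speed: 45 mph in m/s

for label, decel_g in (("regen only (~0.2 g)", 0.2), ("full brakes (~1 g)", 1.0)):
    a = decel_g * G
    d = v ** 2 / (2 * a)
    print(f"{label}: stops in about {d:.0f} m from 45 mph")
```

Roughly a 5x difference in stopping distance, which is why lifting off the accelerator alone isn't much of an emergency maneuver.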
 
All I know is, TACC and AP still don't work - I get regular phantom braking. I tried FSD when they gave us the 1-month trial and it was painful. Like "15-year-old the day they get their permit" level of crappiness. Constant ping-ponging in the lane, always reacting late to things that could clearly be seen way ahead of time, changing lanes right in front of someone coming up fast from behind, nudging itself out into an intersection 2 feet at a time because it can't decide if it's safe or not, and on and on.

While you are arguing about reaction time, the real problem is that FSD cannot anticipate anything. It waits until the very last second, so it needs that 300 ms of reaction time, when in reality any reasonably trained driver would have seen the problem 10 seconds earlier.

Does reaction time matter if a person or animal runs out between 2 parked cars? Yes, but those instances are extremely rare and can often be anticipated by an aware human driver - you can see that kids are playing in the front yard, slow down and bias towards the center of the lane. FSD doesn't do any of that. It just plows ahead and then panic-reacts to whatever happens.

As I have posted elsewhere, I do look forward to L4 to help people who can't drive any more (my parents are in their 80s and that time is coming) and even for myself, so I can avoid the hassle of figuring out how to retrieve my car in the morning because I had too many drinks the night before. But FSD will remain a parlor trick for the foreseeable future. It's also painfully slow.

I will add one counterpoint. Those of us who take driving seriously (I have done advanced driving schools, I used to roadrace motorcycles, etc.) observe FSD and know that it is absolute garbage, for all of the reasons I stated above. But compared to an average driver who is texting or watching a movie, it may be better. For now I will keep driving my car. I just wish Tesla would let me have a dumb, hold-a-set-speed-no-matter-what cruise control that I could use on long trips.
 
The irony here is that _because_ the car has a faster emergency reaction time, it can safely take more time to react in non-emergency situations. I proactively disengage all the time, but it doesn't mean the car's behavior (what it would have done had I not disengaged) isn't also correct or safe. Sometimes it does make obvious mistakes, admittedly, but that's different from having a slow reaction time.
No.
If the car has time to slow down gradually, there's no reason for it to start braking instantly
Actually, braking immediately is exactly what gives you a much more gradual slowdown.
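That is just the physics of a = v²/(2d): the deceleration needed to stop in a given distance drops as the available distance grows, so braking sooner means braking softer. A quick illustrative calculation (the speed, distance to the line, and delay are arbitrary assumptions):

```python
# Required constant deceleration to stop within a distance: a = v^2 / (2*d).
# Speed, distance to the stop line, and the delay are arbitrary examples.

MPH_TO_MPS = 0.44704
G = 9.81

v = 40 * MPH_TO_MPS          # approaching the light at 40 mph (assumed)
distance_to_line = 80.0      # meters to the stop line (assumed)
delay_before_braking = 2.0   # seconds spent easing into it (assumed)

for label, d in (
    ("brake immediately", distance_to_line),
    ("wait 2 s first", distance_to_line - v * delay_before_braking),
):
    a = v ** 2 / (2 * d)
    print(f"{label}: need about {a / G:.2f} g sustained over {d:.0f} m")
```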
 
Sure, but it’s not a safety or correctness issue; either driving style is safe and correct. Your preference may be different from FSD’s, but neither is wrong.
This is incorrect. Margin matters: smoothness and margin are important to avoid rear-end collisions.

At least 95% (roughly) of evasive action occurs before anything happens. Just the way it works and why humans are so darn good. Negative reaction time!

It would be interesting to know how many deaths/accidents there are due to lack of human capability. I would guess it is a very small percentage of our accidents and deaths. That’s the challenge for FSD.

The incredible thing about humans is how many accidents are avoided! It is likely some astronomical number. Would be interesting to know that too. Incredibly high bar.
 
Getting back to the topic of lidar and regulations, Mobileye’s CEO recently posted this on X:
[attached image: screenshot of the X post]

Promising for Luminar’s business if true. Luminar stock took a sudden big dip today though, not sure why?
 
I can definitely see lidar lobbyists pushing to make LiDAR mandatory for AVs, and doing it partly to hurt Elon.
Or the objective fact that no one is anywhere close to L3 without it.
I bet Elon is secretly hoping the regulators will do this, so that he can integrate LiDAR while saving face.
Agreed. Elon can blame the govt instead of having to admit he was wrong (I'm still shocked that he caved on the wheel for the S/X) and it avoids thousands of lawsuits from people who bought FSD and will never receive it.
 
I find the quantity that Tesla have bought interesting. Too many for "just trying out in prototypes" and too few for production.

One thing that struck me was Tesla using LIDAR-equipped cars to help train the NNs for FSD (camera only). Right now, Tesla get a lot of data from all the testers, BUT they have to study the results manually. However, if they deploy a test fleet with LIDAR, then they can use the lidar to validate the results of the camera NNs. Something like "cameras say there's a vertical wall over there, 5 meters away," and then see if the lidar concurs. If it doesn't, they get instant feedback about the NNs working or failing, and all without any manual intervention/analysis. This is MUCH more cost effective than manual labeling (or even auto-labeling).
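As a concrete illustration of that kind of automated cross-check, here is a minimal sketch that compares a camera network's predicted depth against lidar returns and flags frames where they disagree. All of the function names, thresholds, and data shapes are invented for illustration; this is not Tesla's actual pipeline.

```python
import numpy as np

def frame_disagreement(camera_depth: np.ndarray,
                       lidar_depth: np.ndarray,
                       lidar_valid: np.ndarray,
                       rel_tolerance: float = 0.10) -> float:
    """Fraction of lidar-covered pixels where camera depth is off by >10%."""
    cam = camera_depth[lidar_valid]
    lid = lidar_depth[lidar_valid]
    rel_error = np.abs(cam - lid) / np.maximum(lid, 1e-3)
    return float(np.mean(rel_error > rel_tolerance))

# Synthetic example: a 4x4 frame where the camera badly mis-estimates one region.
lidar = np.full((4, 4), 5.0)          # lidar says everything is 5 m away
camera = lidar.copy()
camera[0, :] = 12.0                   # camera thinks the top row is 12 m away
valid = np.ones((4, 4), dtype=bool)   # pretend lidar returns cover every pixel

score = frame_disagreement(camera, lidar, valid)
if score > 0.05:                      # assumed review threshold
    print(f"flag frame for mining: {score:.0%} of pixels disagree with lidar")
```

Every flagged frame is a candidate training/validation sample found without a human ever looking at it, which is the whole appeal.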
 