
What does your timeline look like for driverless vehicles?

Yes. The driverless rides are only for employees right now. Hence why I said they are limited.
No - even FSD Beta Early Access is now limited. Limited doesn't mean for employees only.

When it's only employees, there is zero transparency and Cruise has 100% control. So if there are problems, nobody will ever report on them.

Of course, with an NDA there is little transparency too, but it's a little better than employees only.
 
No - even FSD Beta Early Access is now limited. Limited doesn't mean for employees only.

Let's not quibble over the meaning of the word "limited". I edited my post for clarification.

When it's only employees, there is zero transparency and Cruise has 100% control. So if there are problems, nobody will ever report on them.

Ok but that is a different issue. Cruise is doing internal employee-only testing before they do public testing.
 
This is when the hardware will be ready but not the software. Should we estimate 2027 or 2028 for L4 from Mobileye?

We don't know when the software will be ready, but I think Mobileye is hoping that it will be ready by 2025. That timeline gives them 3 years to get the software ready. In a recent interview with Brad Templeton, Shashua says that their camera-only system will be "way above" 1,000 hours of driving between safety failures that lead to an accident by the end of this year. We know Mobileye's target is 10,000 hours of driving between safety failures. So Mobileye basically has 3 years to get their vision-only system from "way above 1,000" to 10,000 hours MTBF. That timeline might be optimistic but not unrealistic IMO. I don't know the MTBF of the radar/lidar system, but I think we can assume it is probably somewhere between 1,000 and 10,000 hours, since Mobileye is planning to deploy L4 with supervision later this year. Basically, Mobileye is giving themselves 3 years to get each system from above 1,000 hours MTBF to 10,000 hours MTBF. Personally, I think 2025-2026 sounds like a good timeline for Mobileye to have L4 without driver supervision ready.
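
To put a rough number on "optimistic but not unrealistic", here's a quick back-of-envelope sketch (my own arithmetic, not anything Mobileye has stated) of the improvement rate that going from ~1,000 to 10,000 hours MTBF in about 3 years would require:

```python
# Back-of-envelope: improvement rate implied by going from ~1,000 to
# 10,000 hours MTBF over roughly 3 years (my arithmetic, not Mobileye's).
start_mtbf_hours = 1_000
target_mtbf_hours = 10_000
years = 3

annual_factor = (target_mtbf_hours / start_mtbf_hours) ** (1 / years)
print(f"~{annual_factor:.2f}x MTBF improvement needed per year")  # ~2.15x per year
```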
 
I don't know the MTBF of the radar/lidar system, but I think we can assume it is probably somewhere between 1,000 and 10,000 hours, since Mobileye is planning to deploy L4 with supervision later this year. Basically, Mobileye is giving themselves 3 years to get each system from above 1,000 hours MTBF to 10,000 hours MTBF.
What am I missing? 10,000 hours is about 300,000 miles, which is way below human performance. I'd think they would be targeting an order of magnitude better than humans, not one magnitude worse, before release.
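
For reference, here's the conversion behind that 300,000-mile figure (assuming an average speed of ~30 mph, which is what that number implies; the human baselines in the comments are rough, commonly cited US figures, not anything from the interview):

```python
# Convert the 10,000-hour MTBF target to miles, assuming ~30 mph average speed.
mtbf_hours = 10_000
avg_speed_mph = 30                       # assumption; mixed city/highway driving
mtbf_miles = mtbf_hours * avg_speed_mph
print(f"{mtbf_miles:,} miles between safety failures")   # 300,000 miles

# Rough US human baselines for comparison (approximate, commonly cited figures):
#   ~1 police-reported crash per ~500,000 miles
#   ~1 fatality            per ~100,000,000 miles
```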

And what is "supervised L4"? Isn't that basically L2? Why can't Tesla call what they have today supervised L4?
 
What am I missing? 10,000 hours is about 300,000 miles, which is way below human performance. I'd think they would be targeting an order of magnitude better than humans, not one magnitude worse, before release.

10,000 hours of driving per safety failure is only the MTBF of each subsystem, not the MTBF of the entire system. Remember, Mobileye's "true redundancy" approach is that each subsystem would be at 10,000 hours per safety failure, but when combined, the total MTBF would be on the order of 10M hours of driving per safety failure. So the L4 system (camera vision + radar/lidar) would have an MTBF of about 10M hours of driving per safety failure, much safer than humans. They believe this because they are doing late sensor fusion, where the two subsystems are almost independent. There will be cases where the camera vision has a false negative but the radar/lidar system does not. So the combined system will have fewer failures than a subsystem by itself.
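
Here is a minimal sketch of that combined-MTBF arithmetic (my own illustration of the reasoning, not Mobileye's actual model; the correlation penalty factor at the end is purely an assumption for illustration):

```python
# Combining two subsystems, each at 10,000 hours MTBF, under "true redundancy".
camera_mtbf_hours = 10_000
radar_lidar_mtbf_hours = 10_000

p_cam = 1 / camera_mtbf_hours           # per-hour failure probability
p_rl = 1 / radar_lidar_mtbf_hours

# Perfectly independent failures: both subsystems must fail in the same hour.
combined_mtbf_independent = 1 / (p_cam * p_rl)
print(f"{combined_mtbf_independent:,.0f} hours")            # 100,000,000 (10^8)

# Mobileye's quoted ~10M hours (10^7) is an order of magnitude below the
# perfect-independence number, implicitly allowing for some correlation /
# common-mode failures between the subsystems.
assumed_correlation_penalty = 10         # illustrative assumption only
print(f"{combined_mtbf_independent / assumed_correlation_penalty:,.0f} hours")  # 10,000,000
```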

And what is "supervised L4"? Isn't that basically L2?

No. This is a common misconception. Per SAE, L4 and L2 are entirely different systems. L2 cannot do the entire OEDR; L4 can. So L2 needs a human driver to handle some of the OEDR tasks. L4 can do the entire OEDR but may have a safety driver if the reliability is still low. So L2 is not simply L4 that still needs driver supervision, because L2 fundamentally cannot do the entire OEDR whereas L4 can. Hope that is clear.

"Supervised L4" is what Waymo and Cruise are doing. They have L4 (system performs entire OEDR) but they have a safety driver in some cases for testing purposes.

Why can't Tesla call what they have today supervised L4?

I guess they could. But if Tesla did that, they would be saying that FSD Beta can perform the entire OEDR and does not require a human driver to operate. And they would need to report disengagements to the CA DMV per regulations. Tesla says that FSD Beta is only L2 and that it cannot do the entire OEDR. Presumably, when FSD Beta can do the entire OEDR, Tesla will say that it is L4.
 
So the L4 system (camera vision + radar/lidar) would have an MTBF of about 10M hours of driving per safety failure, much safer than humans.
This math only works if each system is 100% capable of the task by itself and uses completely independent logic. The current challenge in autonomy is not the hardware reliability but the software/algorithm capability. In reality, combining two "independent" systems never ends up with simple combinational math, given common-mode failures and other similarities. One of the tasks any system that gets to human levels will have to handle is a tire blowout or loss of propulsion, which has very little to do with perception sensor modalities.

There will be cases where the camera vision has a false negative but the radar/lidar system does not. So the combined system will have fewer failures than a subsystem by itself.
You have to do really, really careful analysis to know if this is true. On top of that, by building an "OR" system you have roughly doubled your false positive rate: either system telling you to do something must be obeyed, and this causes really complex issues when one is telling you to steer out of your lane to not hit an object and the other is telling you there's an oncoming car in that lane. Which one do you trust? What do you do when the camera makes you do an emergency stop for a plastic bag in the road? The fact that they are doing sensor "fusion" means there is an algorithm there that is not just an "OR", and the system reliability comes from that algorithm's performance, not just combinations of sensor MTBFs.
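
Here's a toy example of that trade-off (made-up rates, and it assumes the errors are independent, which as noted real systems are not): OR-ing two detectors multiplies the false-negative rates together but roughly adds the false-positive rates.

```python
# OR fusion: act if EITHER subsystem detects an object. Illustrative rates only.
fn_cam, fn_rl = 1e-4, 1e-4      # assumed false-negative rates per detection event
fp_cam, fp_rl = 1e-3, 1e-3      # assumed false-positive rates per detection event

# A missed object requires BOTH subsystems to miss it (if errors were independent):
fn_or = fn_cam * fn_rl                     # 1e-8  -> far fewer misses
# A phantom detection happens if EITHER subsystem fires on nothing:
fp_or = fp_cam + fp_rl - fp_cam * fp_rl    # ~2e-3 -> roughly double the phantoms

print(f"false negatives: {fn_or:.1e}, false positives: {fp_or:.1e}")
```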

Per SAE, L4 and L2 are entirely different systems. L2 cannot do the entire OEDR; L4 can. So L2 needs a human driver to handle some of the OEDR tasks. L4 can do the entire OEDR but may have a safety driver if the reliability is still low.
Yeah, I see the point there. The issue here is that most people would consider "don't kill me very often" as one of the core OEDR tasks. It gets down to the level of what is the threshold for claiming that you can do a task if you do it very poorly/unreliably? Technically, every car is self driving and has some statistical chance of making it to the destination if left untouched ;)

I would think that any system which still requires a human safety monitor is NOT an L4 system. It's a system which is targeted at L4 but is currently under development. That avoids the public thinking there are usable L4 systems out there, just like some people have confused Tesla's "Full Self Driving" as if it's an L4, no-driver-needed system.
 
Elon posts his future timeline:
2022: FSD Beta becomes L4 capable; released to general public early next year; robotaxis coming by the end of 2023
2023: FSD Beta becomes L4 capable; released to general public early next year; robotaxis coming by the end of 2024
2024: FSD Beta becomes L4 capable; released to general public early next year; robotaxis coming by the end of 2025
2025: FSD Beta becomes L4 capable; released to general public early next year; robotaxis coming by the end of 2026
...
 
This math only works if each system is 100% capable of the task by itself and uses completely independent logic. The current challenge in autonomy is not the hardware reliability but the software/algorithm capability. In reality, combining two "independent" systems never ends up with simple combinational math, given common-mode failures and other similarities. One of the tasks any system that gets to human levels will have to handle is a tire blowout or loss of propulsion, which has very little to do with perception sensor modalities.

You have to do really, really careful analysis to know if this is true. On top of that, by building an "OR" system you have roughly doubled your false positive rate: either system telling you to do something must be obeyed, and this causes really complex issues when one is telling you to steer out of your lane to not hit an object and the other is telling you there's an oncoming car in that lane. Which one do you trust? What do you do when the camera makes you do an emergency stop for a plastic bag in the road? The fact that they are doing sensor "fusion" means there is an algorithm there that is not just an "OR", and the system reliability comes from that algorithm's performance, not just combinations of sensor MTBFs.

I would recommend watching the interview with Brad and Amnon. They discuss this issue in some detail.

It starts here:

Yeah, I see the point there. The issue here is that most people would consider "don't kill me very often" as one of the core OEDR tasks. It gets down to the level of what is the threshold for claiming that you can do a task if you do it very poorly/unreliably? Technically, every car is self driving and has some statistical chance of making it to the destination if left untouched ;)

"don't kill me very often" is not really an OEDR task, it's more of a safety issue of how well the car performs the OEDR. OEDR refers to specific tasks like handling traffic lights and stop signs, avoiding obstacles, yielding to pedestrians, etc...

And I know you are being facetious but every car is not self-driving. The car needs to perform the entire DDT to be self-driving.



I would think that any system which still requires a human safety monitor is NOT an L4 system. It's a system which is targeted at L4 but is currently under development. That avoids the public thinking there are usable L4 systems out there, just like some people have confused Tesla's "Full Self Driving" as if it's an L4, no-driver-needed system.

The SAE defines WHAT L4 is. So if the system can perform the entire DDT and OEDR, then it is L4/L5, regardless of how well it performs the DDT. Of course, how well the L4 performs is what the dev team works on before deploying it.

And the reason L4 cars have safety drivers during development is because while they may do the entire OEDR, they may not do it well enough in a given situation. But that does not make it less L4. It is still performing the entire OEDR.
 
I would recommend watching the interview with Brad and Amnon. They discuss this issue in some detail.
They do! And he quite correctly states they are not statistically independent, so there's not even an attempt to claim 10M hours. Nobody actually knows.

He also claims that false positives are just a "comfort" issue not a safety issue, which everyone knows is false. Slamming on the brakes on the highway for an object that is not there absolutely causes a safety concern.

Finally, there is no discussion about how they define "failure" in their MTBF. If you claim the only failure is the failure to detect an object that is there, then you are not defining your false positive rate. It's really easy to get great false negative rates if you don't consider false positives.

And the reason L4 cars have safety drivers during development is because while they may do the entire OEDR, they may not do it well enough in a given situation. But that does not make it less L4. It is still performing the entire OEDR.
Until it fails and needs that backup driver. Then it isn't performing the entire OEDR.

I can't sell a car and claim it can go 150 MPH if 25% of the time I do that, the wheels fall off and it kills the occupants. It seems very odd that the SAE doesn't have any kind of baseline for reliability before you can claim you handle that DDT and that you are "L4". This leaves a lot of room for companies to claim they are L4, when what they mean is that they managed to do it once. 2016 Tesla video anyone?

Aviation requires a specific reliability on a safety critical function before you can say you can do that function. You don't get to say you made a wing unless you can show that wing survives for thousands of hours. You don't get to say you made an artificial horizon unless it has specific reliability. You can't sell an engine that fails every 10 hours. The fact that the SAE leaves it completely up to the seller to define acceptable reliability independent of function really breaks all of these discussions, particularly when we all know we have humans as a baseline and it's pointless to sell autonomy that isn't as good as the average human.

I'm aware none of these are your definitions- and appreciate you explaining it. Just amazed that's where we are as an industry.
 
They do! And he quite correctly states they are not statistically independent, so there's not even an attempt to claim 10M hours. Nobody actually knows.

An MTBF of 10M is not claiming statistical independence. Statistical independence would be 10,000 * 10,000, or 100M, not 10M. So Mobileye is claiming an MTBF 10x less than statistical independence would give. So I see no contradiction there.

Finally, there is no discussion about how they define "failure" in their MTBF. If you claim the only failure is the failure to detect an object that is there, then you are not defining your false positive rate. It's really easy to get great false negative rates if you don't consider false positives.

Actually, Shashua does define "failure" in their MTBF. He says "failure" means leading to an accident.

Until it fails and needs that backup driver. Then it isn't performing the entire OEDR.

Not necessarily. You can perform the OEDR, just not in a way that is safe or best. For example, the autonomous car correctly detects a pedestrian and brakes to avoid the collision but brakes too hard and almost gets rear ended. The car did perform the OEDR task of avoiding the pedestrian but not in the best way. That's the whole reason why L4 test cars have safety drivers: the cars can drive on their own without a human but the driving is not good enough for the company to feel comfortable deploying it without supervision.

This leaves a lot of room for companies to claim they are L4, when what they mean is that they managed to do it once. 2016 Tesla video anyone?

Companies can make claims about a lot of products, but they can get sued if it looks like they are selling products based on false claims. If you sell an L4 car, the driver will not supervise it, and if it gets into an at-fault accident, the manufacturer will be liable and the NHTSA will likely investigate. So the manufacturer will face serious issues if the L4 is not reliable enough.
 
He also claims that false positives are just a "comfort" issue not a safety issue, which everyone knows is false. Slamming on the brakes on the highway for an object that is not there absolutely causes a safety concern.

If the false positive would lead to an accident, like in your example, it would be included in the MTBF. So it would be a false positive that would be reduced as they improve the MTBF. And if you achieve an MTBF of 10M hours of driving, then it would be rare enough.
 
I still believe that an initial rollout will also include a hybrid of sorts. Driverless cars for most of the country won't happen for a long time if you strictly follow the levels. However, owners will want some capability to let the car handle driving but without the need to take over instantly like we have now. That would let the driver text, watch videos, or take Zoom calls, just no sleeping, and the regulators get a person in the driver's seat. I think a lot of people would want to pay $12k for this functionality even if it was restricted initially to highways. I think people just need to think out of the box and not adhere so rigidly to the autonomous driving levels.
 
I still believe that an initial rollout will also include a hybrid of sorts. Driverless cars for most of the country won't happen for a long time if you strictly follow the levels. However, owners will want some capability to let the car handle driving but without the need to take over instantly like we have now. That would let the driver text, watch videos, or take Zoom calls, just no sleeping, and the regulators get a person in the driver's seat. I think a lot of people would want to pay $12k for this functionality even if it was restricted initially to highways. I think people just need to think out of the box and not adhere so rigidly to the autonomous driving levels.
That's Level 3...
 
If the false positive would lead to an accident, like in your example, it would be included in the MTBF. So it would be a false positive that would be reduced as they improve the MTBF. And if you achieve an MTBF of 10M hours of driving, then it would be rare enough.
The literal transcript from YouTube:

What is the price to pay here? The price to pay here is that you can get a higher level of false positives, right? Yeah, there isn't an object there. But if each system is built in a way in which, for 10,000 hours of driving, it doesn't make a mistake, then those false positives at the end of the day would contribute only to some comfort-level nuisance, but not to safety. On the other hand, it will contribute a lot in terms of reducing false negatives, because what is really scary about the perception system is that it will miss an object. You miss an object and you hit that object; this is where you can create an accident on your own fault. Obviously you are more conservative than you need to be; it's a comfort issue, it's not a safety issue, right? Obviously false negatives are the ultimate evil and you do have to get those below a certain level. But you can't tolerate very many false positives; you don't have a product you can sell if it's constantly braking for ghosts. So the idea is not to take two crappy systems and put them together, right? If you take two crappy perception systems, then you'll reduce the false negatives, but you are increasing the false positives, and now you have a system that is so crappy that you cannot drive it. But if you take each system at 10,000 hours of MTBF, and this is what we are aiming at, not 1,000 hours, 10,000 hours of MTBF each system, it means that each system is pretty good. It's not sufficiently good to remove the driver, because we need more than 10,000 hours MTBF, but now each of them is quite good. And now when you do an OR gate, you may not get 10 to the power of eight, but you'll get something much, much better than 10 to the power of four.

He pretty clearly doesn't treat any failure as an accident. Jumping between the imprecise language of MTBF, "false positives", "false negatives" and "mistake" is not helpful. He's assuming a false positive won't be an accident, but that a false negative will always be? Why not stay with the clean language of a "failure"? Not all false negatives or positives will lead to failure. He's arguing that each system is 1:10,000 to failure by itself, and that's all that matters, and after that the comfort with which the system drives people around is what matters. But saying that braking when you shouldn't is just a comfort issue, every single time, means you haven't really done a good analysis of your system in the real world. It also means that if you assume you can OR together two systems with unknown false positive rates, you have no idea whether they might combine into a worse "failure" rate than either system alone, because the false positives may add up enough to swamp the combinational benefit you get on false negatives.

An MTBF of 10M is not claiming statistical independence. Statistical independence would be 10,000 * 10,000, or 100M, not 10M. So Mobileye is claiming an MTBF 10x less than statistical independence would give. So I see no contradiction there.
He ends with "not power of 8, but better than power of 4." That does not read as if he's sure that they will get to 10^6.
 
I think a lot of people would want to pay $12k for this functionality even if it was restricted initially to highways.

That's Level 3...
And all Tesla is selling for $12K is a promise of Level 2... Level 3 to be priced in the future when it exists (or knowing Tesla, 5 years before they can actually ship it).

But yeah, as Daniel said, you literally described Level 3 while arguing that the levels are too prescriptive. But lots of people believe Tesla will never do Level 3, as it's too hard to understand how quickly the driver must switch from their non-driving tasks to driving, and no manufacturer will take on the liability for that. What is even the point of an autonomy system if all accidents are your fault?

Plus, doing this would require Tesla to start working with regulators, and they seem pretty darn focused on avoiding that for now, trying instead to get the much more difficult task of city-streets driving working even if the reliability is awful and everything stays stuck at L2 for a very long time.
 
He pretty clearly doesn't treat any failure as an accident. Jumping between the imprecise language of MTBF, "false positives", "false negatives" and "mistake" is not helpful. He's assuming a false positive won't be an accident, but that a false negative will always be? Why not stay with the clean language of a "failure"? Not all false negatives or positives will lead to failure. He's arguing that each system is 1:10,000 to failure by itself, and that's all that matters, and after that the comfort with which the system drives people around is what matters. But saying that braking when you shouldn't is just a comfort issue, every single time, means you haven't really done a good analysis of your system in the real world. It also means that if you assume you can OR together two systems with unknown false positive rates, you have no idea whether they might combine into a worse "failure" rate than either system alone, because the false positives may add up enough to swamp the combinational benefit you get on false negatives.

Yes, he seems to treat false negatives as safety issues but not false positives. That is because a false negative (not seeing something that is there) can result in a collision but seeing something that is not really there (false positive) will not result in a collision in front of the car since there is nothing there.

But he also clearly defines MTBF as failures that result in an accident. Your example where hard braking results in getting rear ended would be a "safety failure resulting in an accident" so it would be counted in the MTBF. So clearly, that would be a failure that Mobileye would reduce since it would affect the MTBF.
 
Are there videos of Mobileye's vision only and Lidar/Radar only systems both driving around doing L4 tasks? I'd think they would have this if they have any chance of being 1:1000 hours by the end of the year and 1:10000 in a few years.

I'd really like to know how they are reading lane lines, street markings, stop lights, speed limit signs, turn restrictions, etc with the Lidar/Radar only system. Or are they maybe not fully independent....??? It appears the only failure they care about is when they hit something, not when they just become confused, need to stop in the middle of the road, and don't actually complete the task.
 
Are there videos of Mobileye's vision only and Lidar/Radar only systems both driving around doing L4 tasks? I'd think they would have this if they have any chance of being 1:1000 hours by the end of the year and 1:10000 in a few years.

Yes, we have videos of the vision-only system doing L4:




I don't think Mobileye has released any videos of the radar/lidar system doing L4.

I'd really like to know how they are reading stop lights, speed limit signs, turn restrictions, etc with the Lidar/Radar only system. Or are they maybe not fully independent....???

The radar/lidar subsystem uses one camera for traffic lights, but everything else is done by the radar/lidar sensors. No, the two systems are not fully independent. I don't know why this is confusing. The vision-only system can do everything, and the radar/lidar system can do everything except traffic lights. So there is a lot of independence between the two systems, but it is not 100%.
 