Welcome to Tesla Motors Club

Why no FSD 'leaks' from Elon?

SAE definitions are bad for gauging progress.

Once Tesla achieves and releases "feature complete", the cars will still be Level 2.
There really isn't any good way to gauge progress until companies actually deploy L3-L5. Once they do that it will be much easier because we will be able to look at accident rate and severity.
Right now the only metric we have is disengagement rate by safety drivers which has all sorts of problems. Here's a blog post by the CEO of Cruise about it: The Disengagement Myth
 

The disengagement rate can be imprecise because many parameters affect it, from location, driving conditions, and time of day to how conservative the safety driver is. Context matters: the same disengagement rate means something different in rush-hour traffic in downtown LA than on an empty highway through Utah. So it is hard to compare disengagement rates on a level playing field. At the end of the day, no, you can't use disengagement rates as an exact score. If my disengagement rate is 1,000 miles per disengagement and yours is 1,200 miles per disengagement, that does not mean your FSD is 20% better than mine.

But if you measure disengagement rates over a big enough sample and under similar conditions, I think they can provide a general qualitative feel for your FSD. For example, if I do 1M autonomous miles and my disengagement rate is 40 miles per disengagement, it is probably safe to say my FSD is not very reliable. And if we both do 1M miles of autonomous driving in LA, and my disengagement rate is 100 miles per disengagement while yours is 10,000 miles per disengagement, then I think it is safe to assume your FSD is better than mine. How much better is hard to quantify, but qualitatively it is probably better. The rates are far enough apart that differences from varying conditions or different safety drivers probably won't change the final result.
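To put a rough number on the "big enough sample" point: treating the disengagement count as a Poisson variable, a crude confidence interval on miles-per-disengagement tightens as fleet mileage grows. This is purely an illustrative sketch of my own (the function and the mileage numbers are hypothetical, not anyone's published methodology):

```python
import math

def rate_interval(miles, disengagements, z=1.96):
    """Crude ~95% interval for miles-per-disengagement,
    treating the disengagement count as Poisson (normal approximation)."""
    k = disengagements
    spread = z * math.sqrt(k)
    lo_k = max(k - spread, 1e-9)  # avoid divide-by-zero for tiny samples
    hi_k = k + spread
    # More disengagements means fewer miles per disengagement, so bounds swap
    return miles / hi_k, miles / lo_k

# Big sample: 1M miles at ~1000 mi/disengagement gives a tight interval
print(rate_interval(1_000_000, 1000))
# Tiny sample: 4000 miles with 4 disengagements spans orders of magnitude
print(rate_interval(4000, 4))
```

With only a few thousand miles, the interval is so wide that the rate says almost nothing; at a million miles the statistical noise mostly disappears, and the remaining uncertainty is the confounders (location, conditions, safety-driver behavior) that no formula captures.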

More importantly, the causes of disengagements are much more informative than the disengagement rate, IMO. For example, it is quite telling whether you disengaged because of a new construction zone, because an unprotected left turn was taking too long, to avoid hitting a pedestrian, or because your perception failed to detect a stopped truck in the middle of the road. That gives you concrete info on what your FSD is struggling with. So I think knowing the causes of disengagements is valuable information.
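To illustrate why cause labels carry more signal than a single headline rate, here is a toy tally over a made-up disengagement log (the cause strings are hypothetical, not from any real system's reporting):

```python
from collections import Counter

# Hypothetical log: (odometer miles, reported cause) per disengagement
log = [
    (120, "new construction zone"),
    (340, "unprotected left taking too long"),
    (410, "new construction zone"),
    (780, "perception missed stopped truck"),
    (905, "new construction zone"),
]

cause_counts = Counter(cause for _, cause in log)
for cause, n in cause_counts.most_common():
    print(f"{n}x {cause}")
# The headline rate (5 disengagements) hides that most stem from one failure mode.
```

The raw rate treats all five events identically; the tally immediately shows that construction zones, not perception, are the dominant problem.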
 
Something else to consider is the balance between safe, timid driving and more natural, human-like driving. I can imagine a situation where you get fewer disengagements, but the car drives like my grandmother. Conversely, a more natural driving style may require more disengagements, but ultimately would be preferable.
 

Yes. Getting autonomous cars to drive more naturally and less like robots is a key challenge.
 
My wild guess is that maximizing safety and reducing disengagements will be the top priorities for autonomous cars as long as unpredictable humans are allowed on the road.

For a long time into the future, if you are in a hurry, an autonomous ride won't be your best choice.
 
Tomato, tomahto. The press got FSD demo drives. Many of the demo drives had one or more interventions; some had none.

SAE definitions suck. They aren't good at gauging progress.

I wouldn't say they suck so much as that they haven't aged well.

But for the sake of argument, here is why they "suck".

The first reason SAE's definitions suck is that they include L3. The problem with L3 is that it requires a driver at the ready who can re-engage within a fairly short amount of time. It assumes the driver stays ready to retake control even while doing something else, like texting or reading a book. But we know perfectly well that some percentage of drivers will be completely out of it. How is the car supposed to deal with this? It doesn't really have to, as the liability has been transferred. The difficulty of this transfer of liability likely means we'll never see L3 systems beyond slow-speed traffic-assist systems. Sure, traffic-assist systems are useful, but they fall very short of what we want.

The other reason they suck is that they expect a human being to oversee advanced L2 systems. The Europeans realized this was a horrible idea and put a stop to allowing an L2 car to do so much. It remains to be seen whether any manufacturer can do a NoA-like thing in Europe.

Lastly, they suck because of the mythical L5. The only reason humans are L5 drivers is that we accept higher risk. We'll take chances on road conditions or unfamiliarity.

Due to the liability being transferred, I'm not sure we'll ever see an L5 vehicle in our lifetime. There will always be some circumstance or condition that warrants some form of geofencing.

If geofencing is used, then it's L4.

So why even have L5, aside from entertainment? Like Bigfoot or little green aliens: it's fun to talk about, but not entirely useful.

I think it was originally needed because of the whole question regarding steering controls. But my understanding is that steering controls can be removed from an L4 vehicle.

L2 and L4 are all that really matters.
 

I don't think L5 is mythical per se. There is actually a logical reason for including L5 that has nothing to do with steering controls. You are right that L4 can remove all steering and pedals. The reason for L5 is that, in terms of "features", L4 already has it all. L4 is fully autonomous in terms of what it is capable of doing (responding to lights, making turns, avoiding hazards, etc.), but it is geofenced. So you still need one more level. And what's higher than a car that is already fully autonomous but geofenced? Answer: a fully autonomous car that is not geofenced. So L5 is a logical capstone.

And I do think L5 is a useful level to have. Certainly, it is good to know whether your autonomous car is limited in where or when it can be used (L4) or can work on every road at any time (L5).

It is worth noting that the SAE levels only deal with capability (what the ADS is capable of doing), not reliability (how good the ADS is, how safe it is). Questions of safety and reliability are left to the manufacturers of the ADS. And we are seeing companies tackle the question of acceptable safety by coming up with their own internal metrics for when they will consider their robotaxis safe enough for full deployment on public roads. How risk-averse a company is may play a big role in whether it does L4 or L5. It is exponentially harder to make L5 safe enough than L4, and it's not worth the effort when L4 can achieve your business goals at far less cost. So there is no real need to do L5. I think for most companies, especially those doing ride-hailing, L4 will be more than good enough.
 
The reasons SAE's definitions suck is that they include L3. The problem with L3 is that it requires a driver at the ready where they can re-engage within a fairly small amount of time. It assumes the driver is at the ready to re-take over like texting or reading a book. But, we know perfectly well that some percentage of drivers will be completely out of it. How's the car supposed to deal with this? It doesn't really have to as the liability has been transferred. The difficulty of this transfer of liability likely means that we'll never see L3 systems beyond slow speed traffic assist systems. Sure traffic assist systems are useful, but they fall very short of what we want.
Companies are still planning on releasing Level 3 systems, so it makes sense to describe them in the SAE standard (which is the basis for current regulations). I am also a Level 3 skeptic, but because I think it may be only slightly easier than a Level 4 system.
Lastly they suck because of the mythical L5. The only reasons humans are L5 drivers is we accept higher risks. So we'll take chances on road conditions, or unfamiliarity.
Level 5 systems are not required to drive in all conditions that the least prudent driver would.
Tesla is releasing a Level 5 system next year! :p
 
If they declare their intent as L3 then it is L3.
I don't see intent written in the regulations.
https://www.dmv.ca.gov/portal/uploads/2020/06/Adopted-Regulatory-Text-2019-1.pdf
CA Autonomous regulations said:
(1) An autonomous test vehicle does not include vehicles equipped with one or more systems that provide driver assistance and/or enhance safety benefits but are not capable of, singularly or in combination, performing the dynamic driving task on a sustained basis without the constant control or active monitoring of a natural person.
Regulation (1) says that if it is not capable of Level 3, then it is not an autonomous vehicle. Elon says driver monitoring is required, so it is not an autonomous test vehicle per CA regulations.
 
It is.
(2) For the purposes of this article, an “autonomous test vehicle” is equipped with technology that makes it capable of operation that meets the definition of Levels 3, 4, or 5 of the SAE International's Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, standard J3016 (SEP2016), which is hereby incorporated by reference.
From SAE International's Taxonomy and Definitions for Terms Related to Driving Automation Systems:
The level of a driving automation system feature corresponds to the feature’s production design intent. This applies regardless of whether the vehicle on which it is equipped is a production vehicle already deployed in commerce, or a test vehicle that has yet to be deployed. As such, it is incorrect to classify a level 4 design-intended ADS feature equipped on a test vehicle as level 2 simply because on-road testing requires a test driver to supervise the feature while engaged, and to intervene if necessary to maintain safe operation.
 
Clause 1 and clause 2 conflict.
Clause 1: it is not, because it is not capable.
Clause 2: it is, because that is the intention many years down the road.
I know which one will win in court. Clause 2 won't fly, because it is talking about short-term intention, not x years down the road.
 
Uber tried that argument. It didn't work. Maybe they should have taken the DMV to court. haha.
 
I don't think Uber has a leg to stand on, since its system is not intended as a driver-assist system that enhances safety; it is solely intended as an automation system. So I'll amend my previous comment and say it matters whether it is being used as a driver-assist system that enhances safety.
 
This raises the question: how does a cop ticket an autonomous car breaking the HOV rules, or any rules for that matter?

Some type of technology will have to be in place to allow the cop to know who the responsible party is.

If it's in autonomous mode, it's probably best to leave the car be and simply submit an infraction report to the company in charge of it, like Waymo for example.

If it's in manual mode, the cop has the option to pull the car over or simply submit the ticket electronically (if we move to an all-electronic license/insurance/etc. system).
 
So I think for most companies, especially those doing ride-hailing, L4 will be more than good enough.

I bet it’s a pretty good probability that most of us would be completely happy with L3 for our FSD $$$$$ already paid even if L4 much less L5 takes years to come along.