Not all states have the same reporting requirements as California. It's perfectly legal to test autonomous vehicles with no registration and no reporting in many US states.
Actually most states. I think only like 2 have any sort of requirements.

Regardless, Tesla telling the California DMV that it's level 2 is about getting more leeway in a state that has restrictions while they develop their level 4+ solution. Citing level 2 as the end game is disingenuous.
 
  • Like
Reactions: willow_hiller
Regardless, Tesla telling the California DMV that it's level 2 is about getting more leeway in a state that has restrictions while they develop their level 4+ solution. Citing level 2 as the end game is disingenuous.

It's absolutely gaming the system. I agree with you on that. It's part of the reason why the "design intent" aspect of the SAE levels is somewhat problematic. Tesla can claim their design intent is L2 to California, regardless of the system capabilities. Likewise, you have some very poorly performing AV companies claiming their design intent is L4, even though they require interventions every couple of miles.
 
  • Like
Reactions: uscbucsfan
So why are Cruise and anyone else required to have a DMV testing permit if they're an L2 with a safety operator?
See my post above. They are using the safety driver to train the system to be level 4, but it's not level 4 when someone is driving.

Again, you're getting confused by semantics. Tesla could have a level 5 system that's operated as level 1 or level 2. Cruise's vehicle is a level 4 system, but it's being operated as level 2 when they have a safety driver.


The original point is that Tesla wouldn't be able to claim level 4 automation if they always require a driver to take over. Just like Cruise wouldn't ever be level 4 if they couldn't remove that safety driver.
 
There's no need for there to be a secret version. I thought the SAE autonomy level was largely about design intent, once a basic threshold of capabilities is met.

Design intent is only part of the definition. It relates to what level of autonomy the design team is aiming for, but design intent alone does not define the level of autonomy, and it does not mean the manufacturer can just claim whatever level they want. Well, I guess they could try, but they could face lawsuits if found to be deceptive.

The fact is that each SAE level has a specific definition that the system must meet to be that level. For example L4 is defined as "the sustained and ODD-specific performance by an ADS of the entire DDT and DDT fallback." So a system needs to meet this definition to be L4.

Here is a simplified logic flow diagram from the SAE to help determine the level of autonomy of a feature:

[Image: SAE J3016 logic flow diagram for assigning the level of driving automation]
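Roughly, the diagram's logic reads like this (my own simplified sketch, not SAE's wording; the function name and its inputs are just illustrative):

```python
def sae_level(lateral: bool, longitudinal: bool, complete_oedr: bool,
              does_fallback: bool, unlimited_odd: bool) -> int:
    """Approximate the J3016 level-assignment flow for a feature while it is engaged."""
    if not (lateral or longitudinal):
        return 0                       # no sustained motion control -> level 0
    if not (lateral and longitudinal):
        return 1                       # only one of lateral/longitudinal control -> level 1
    if not complete_oedr:
        return 2                       # the human driver still performs the OEDR -> level 2
    if not does_fallback:
        return 3                       # a human is the fallback-ready user -> level 3
    return 5 if unlimited_odd else 4   # system performs its own fallback -> level 4 or 5

# e.g. a hands-free system whose driver must still supervise:
# sae_level(True, True, complete_oedr=False, does_fallback=False, unlimited_odd=False) == 2
```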


We know Tesla has the ability to disable driver monitoring requirements. If they did this for their safety drivers, and gave them instructions not to control the vehicle or disengage except in case of emergency, why would it still be considered level 2? Why couldn't it be considered level 4 with a safety driver?

To be fair, advanced hands-free L2 and L4 with a safety driver can appear similar on the surface. Both might drive from A to B without human intervention. Is FSD beta L2, or is it L4 with a safety driver? I think Tesla's messaging on this has been confusing. On one hand, Tesla told the CA DMV that it is L2, but on the other hand they call it "full self-driving", which implies L4. And Elon talks about how FSD will soon not require driver supervision, which seems to imply that it is L4 with a safety driver.

I am going on the SAE definition of L4: "the sustained and ODD-specific performance by an ADS of the entire DDT and DDT fallback."

Does FSD beta meet this definition? No, it does not. Therefore, it cannot be L4. If FSD beta could perform its own fallback, then it would meet the definition of L4. IMO, based on the logic flow diagram above, since FSD beta requires a human to perform the DDT fallback, it would seem to fit the definition of L3 more than L4.
 
Likewise, you have some very poorly performing AV companies claiming their design intent is L4, even though they require interventions every couple of miles.
Once a system is made available to the public (as a one-time purchase, a SaaS subscription, or pay-as-you-go rideshare), we know what properties/capabilities it has, and it needs to be judged accordingly. If you can sleep in the backseat when using the product, it's an L4 (safety operator or not). Otherwise it is not. The customer is never a "safety driver/operator"; that's just "the driver".
 
If FSD beta could perform its own fallback, then it would meet the definition of L4. IMO, based on the logic flow diagram above, since FSD beta requires a human to perform the DDT fallback, it would seem to fit the definition of L3 more than L4.

I thought coming to a stop and putting on the hazards was a sufficient fallback? FSD is definitely capable of doing that. On the version of FSD Beta with driver monitoring, it only does it in response to no driver feedback (and I've personally had it do it once when it gave me the "Take over immediately" message and I took a few seconds to press the accelerator). I think it's likely it would perform the same behavior when it detects it cannot continue, in a version where driver monitoring is disabled.
 
  • Informative
Reactions: EVNow
I thought coming to a stop and putting on the hazards was a sufficient fallback? FSD is definitely capable of doing that. On the version of FSD Beta with driver monitoring, it only does it in response to no driver feedback (and I've personally had it do it once when it gave me the "Take over immediately" message and I took a few seconds to press the accelerator). I think it's likely it would perform the same behavior when it detects it cannot continue, in a version where driver monitoring is disabled.
Fallback is a process with its own UX and is never "take over immediately". The system needs to hand over the OEDR to the human safely. Until the fallback is completed, the system needs to be capable of doing 100% of the DDT.

L3 is "eyes off" stay in driver's seat (for when the system exits the ODD and performs the fallback). L4 is sleep in the back seat.
 
Fallback is a process with its own UX and is never "take over immediately". The system needs to hand over the OEDR to the human safely. Until the fallback is completed, the system needs to be capable of doing 100% of the DDT.

L3 is "eyes off" stay in driver's seat (for when the system exits the ODD). L4 is sleep in the back seat.

Right. I was describing how the version of FSD Beta with driver monitoring operates right now. It's capable of coming to a stop and putting the hazards on.

For Tesla employees testing FSD without driver monitoring, it's not a stretch that it could choose to stop and put on the hazards without demanding driver input.
 
Right. I was describing how the version of FSD Beta with driver monitoring operates right now. It's capable of coming to a stop and putting the hazards on.

For Tesla employees testing FSD without driver monitoring, it's not a stretch that it could choose to stop and put on the hazards without demanding driver input.
This is the whole "6-12 months" reasoning basically? I mean, not likely. I gave up on that 2-3 years ago.

There is zero evidence, and FSDb has shown no properties, that would make it anything other than an L2. Since it lacks an ODD definition, why would it just stop and ask for driver input when it gladly veers into oncoming traffic at random?

The gap to autonomy is incredibly high (and again, it needs to be built in a completely different manner imho).
 
I thought coming to a stop and putting on the hazards was a sufficient fallback? FSD is definitely capable of doing that. On the version of FSD Beta with driver monitoring, it only does it in response to no driver feedback (and I've personally had it do it once when it gave me the "Take over immediately" message and I took a few seconds to press the accelerator). I think it's likely it would perform the same behavior when it detects it cannot continue, in a version where driver monitoring is disabled.
This is the reason the levels are stupid. Without a set of minimum quality requirements, they're not useful.

A simple electrical switch has quality requirements, but apparently not a robocar.
 
  • Like
Reactions: willow_hiller
I thought coming to a stop and putting on the hazards was a sufficient fallback? FSD is definitely capable of doing that. On the version of FSD Beta with driver monitoring, it only does it in response to no driver feedback (and I've personally had it do it once when it gave me the "Take over immediately" message and I took a few seconds to press the accelerator). I think it's likely it would perform the same behavior when it detects it cannot continue, in a version where driver monitoring is disabled.

No, it is not a sufficient fallback. SAE says L4 must be capable of the "entire DDT fallback". This means the system needs to 1) know when a fallback is needed, 2) determine which fallback is appropriate, either pulling over to the side or stopping in the lane, and 3) perform the fallback without human intervention. So simply stopping and putting on the hazards when the driver is unresponsive is not enough.
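As a rough illustration of those three steps (my own framing, with a hypothetical `system` object, not anything from the SAE document):

```python
from enum import Enum, auto

# My own framing of the three requirements above - the `system` object is hypothetical.
class Fallback(Enum):
    PULL_OVER = auto()     # leave the travel lane and stop on the shoulder
    STOP_IN_LANE = auto()  # stop in the lane with hazards on

def performs_entire_ddt_fallback(system) -> bool:
    """True only if the system does all three steps itself, with no human in the loop."""
    if not system.detects_fallback_needed():             # 1) know *when* a fallback is needed
        return False
    maneuver = system.choose_fallback(Fallback)           # 2) decide *which* fallback is appropriate
    return system.execute(maneuver, human_input=False)    # 3) carry it out without human intervention
```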
 
  • Like
Reactions: spacecoin
There is zero evidence, and FSDb has shown no properties, that would make it anything other than an L2. Since it lacks an ODD definition, why would it just stop and ask for driver input when it gladly veers into oncoming traffic at random?

The gap to autonomy is incredibly high (and again, it needs to be built in a completely different manner imho).

We don't know the internal details of how Tesla is developing and testing FSD. All we know is that they hire professional test drivers, and now this tweet saying they are measuring the unsupervised performance somehow. Everything else we have is just hints.

For example, we can infer that the system is actively measuring its confidence and likelihood of disengagement. The only clue we had about this system before was through the Teslascope account, but you can see one aspect of it in operation in the voice reporting system. When I disengage while the system is driving fine, I will always get the prompt to leave a voice message to explain why I disengaged. But in the latest version, if it does something like suddenly trying to veer into another lane prior to my disengagement, it does not ask for an explanation. FSD is able to recognize scenarios where it has made a mistake, and does not ask for an explanation in those scenarios.
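To be clear, that's an inference from observed behavior, not something Tesla has documented. But it would be consistent with a filter along these lines (entirely made up, just to illustrate the inference):

```python
# Completely hypothetical - illustrating the inference above, not Tesla's actual logic.
CONFIDENCE_FLOOR = 0.2  # made-up threshold

def should_ask_for_voice_note(planner_confidence: float, predicted_disengagement: bool) -> bool:
    """Only prompt for an explanation when the disengagement looks unexplained to the system."""
    if predicted_disengagement or planner_confidence < CONFIDENCE_FLOOR:
        # The car already "knows" it messed up (e.g. it was veering toward another lane),
        # so the driver's reason adds nothing.
        return False
    # The car thought it was driving fine, so the driver's explanation is the useful signal.
    return True
```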
 
We don't know the internal details of how Tesla is developing and testing FSD. All we know is that they hire professional test drivers, and now this tweet saying they are measuring the unsupervised performance somehow. Everything else we have is just hints.

For example, we can infer that the system is actively measuring its confidence and likelihood of disengagement. The only clue we had about this system before was through the Teslascope account, but you can see one aspect of it in operation in the voice reporting system. When I disengage while the system is driving fine, I will always get the prompt to leave a voice message to explain why I disengaged. But in the latest version, if it does something like suddenly trying to veer into another lane prior to my disengagement, it does not ask for an explanation. FSD is able to recognize scenarios where it has made a mistake, and does not ask for an explanation in those scenarios.
Occam's razor? The most likely explanation is that FSDb is an L2 that's sold with questionable marketing.

"If it looks like a duck, and quacks like a duck, we have at least to consider the possibility that we have a small aquatic bird of the family Anatidae on our hands" -- Douglas Adams.
 
Because Cruise is applying for the "drag a pedestrian" L2.45 license.
Cruise has made a complete mockery of not only the levels but also the entire regulatory regime.

The biggest issue with the regulatory regime has always been the regulators cozying up to the regulated. They just have to be wined and dined to get favorable regulations, I guess. Of course, the regulated have spent enough money on propaganda saying regulations are unnecessary and an impediment that a large and influential section of voters believes it.
 
Cruise has made a complete mockery of not only the levels but also the entire regulatory regime.
How is this hard to understand?

If you can sleep in the backseat when using the product, it's an L4 (safety operator or not). Otherwise it is not. The customer is never a "safety driver/operator". She's either a passenger (L4) or just "the driver" of an L2.

Cruise is an L4, they always said they're building an L4, and they are marketing it as a robotaxi service within a limited geo (an ODD). How is that "a mockery of the levels"?
 
and now this tweet saying they are measuring the unsupervised performance somehow

No, the tweet did not say that. Elon just said some stuff.

But in the latest version, if it does something like suddenly trying to veer into another lane prior to my disengagement, it does not ask for an explanation. FSD is able to recognize scenarios where it has made a mistake, and does not ask for an explanation in those scenarios.
This is not correct. I think we can say that sometimes it does not ask for an explanation. But not much more.

Wasn’t there something about not asking for feedback when close to the destination? Vague recollections of release notes or tweet or something.
 
  • Funny
Reactions: willow_hiller
No, the tweet did not say that. Elon just said some stuff.

Included in the "stuff" was the sentence "Unsupervised FSD is trending well." In order for it to be trending at all, it needs to exist in some form, and it needs to have its performance measured.

You're free to argue that Elon is lying, if that's what you mean. But if he's not, I don't see any other interpretation.
 
Included in the "stuff" was the sentence "Unsupervised FSD is trending well." In order for it to be trending at all, it needs to exist in some form, and it needs to have its performance measured.

You're free to argue that Elon is lying, if that's what you mean. But if he's not, I don't see any other interpretation.
That's a given. If he's talking about FSD, he's lying. :)
 
  • Disagree
Reactions: lowtek
Included in the "stuff" was the sentence "Unsupervised FSD is trending well." In order for it to be trending at all, it needs to exist in some form, and it needs to have its performance measured.
It’s not quantified and in any case it’s a content-free statement so I don’t think we can say they are doing anything new.

I assume they are measuring just as they have always done - through simulation (v12 and v11) and fleet data (v11).