Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Actually you said “the Same FSD Beta software we have”. Last I checked nobody here has that software.

Correct. We have V11. Maybe I misspoke. I was trying to say that there is one system called FSD Beta, with two versions, V11 and V12. Tesla is testing both. Both are supervised FSD, but Tesla is hoping to remove driver supervision in the future.

I was really responding to @willow_hiller 's question about testing L4. I disagree with the implication that Tesla is testing a secret version of FSD that is L4 or unsupervised FSD. I believe V12 is currently L2 and supervised FSD.

I think when Elon talks about unsupervised FSD, he is not talking about some secret L4 version; he is simply talking about the safety stats of V11 and V12 if driver supervision were removed. Tesla can look at metrics of how V11 or V12 would perform without supervision to see how close they might be to removing it. So when Elon says that "unsupervised FSD is trending well", I think he just means that the trend of the safety stats looks good for supervision to be removed at some point in the future. For example, maybe he is seeing fewer safety-critical interventions with V12 compared to V11, or maybe simulation shows that, if left unsupervised, V12 would have fewer accidents than V11?
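A minimal sketch of what such a metric comparison might look like, assuming a simple interventions-per-distance rate. The function name and every number here are hypothetical; Tesla's actual metrics and data are not public:

```python
# Hypothetical sketch: comparing safety-critical intervention rates
# between two software versions. All figures are invented for illustration.

def interventions_per_1k_miles(interventions: int, miles: float) -> float:
    """Safety-critical interventions per 1,000 miles driven."""
    return interventions / miles * 1000

# Assumed fleet totals (purely illustrative)
v11_rate = interventions_per_1k_miles(interventions=40, miles=200_000)
v12_rate = interventions_per_1k_miles(interventions=15, miles=150_000)

print(f"V11: {v11_rate:.2f}/1k mi, V12: {v12_rate:.2f}/1k mi")
# A falling rate from one version to the next is one reading of "trending well".
```

A falling per-mile rate is exactly the kind of trend that could be tracked release over release without ever actually removing supervision.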

I just want to know more about v11 safety. It would be hugely important if it is safer as currently implemented and utilized by the public.

But it is so frustrating that Tesla has never provided anything to help show the world how much safer the roads are as a result of v11 use.

Just such a difficult analysis to do - this may be why it has not happened.

Yes, I would love to know more.

In theory, supervised V11 could very well be safer than humans based on how Tesla measures safety:
1) Tesla only measures crashes with airbag deployment or active restraint deployed. So Tesla is not counting many safety issues like minor accidents or near misses.
2) V11 can handle routine driving and lane keeping well. So V11 is unlikely to have the single-vehicle accidents that humans have when they are distracted or impaired. And any serious accidents that V11 could cause are hopefully prevented by the driver if they are attentive enough. Assuming most FSD Beta users are attentive, they should be able to prevent most serious crashes. So we should expect the number of serious crashes to decrease.

We should also consider that FSD Beta users may not be using it in riskier situations. This could be skewing Tesla's stats, making it look safer than it really is. We don't know if V11 is objectively safer than humans. But the keyword is "supervised". If the human driver is attentive, supervised V11 could very well be safer as Tesla defines it. Unsupervised V11 is a whole other matter.

Certainly, if you only measure serious crashes and FSD can avoid many human accidents and the attentive human driver can prevent many FSD accidents then, in theory, we would expect the combination of FSD and an attentive driver to be safer than a human alone. In fact, this is the very argument Mobileye uses for why a well designed "eyes on" driver assist system can be safer than a human driver.
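The combined-safety argument above can be made concrete with a toy probability model. Everything here is assumed: the rates are invented, and the model treats FSD errors and driver misses as independent, which is the weakest link in the real argument:

```python
# Toy model: a serious crash occurs only when FSD makes a critical error
# AND the supervising driver fails to catch it. All rates are invented.

fsd_error_rate = 1 / 10_000    # critical FSD errors per mile (assumed)
driver_miss_rate = 0.05        # share of FSD errors the driver misses (assumed)
human_crash_rate = 1 / 50_000  # baseline human serious-crash rate per mile (assumed)

# Supervised FSD crash rate under the independence assumption
combined_rate = fsd_error_rate * driver_miss_rate

print(combined_rate < human_crash_rate)  # True with these invented numbers
```

With these made-up rates, the two "filters" multiply and the supervised combination comes out ahead of the human baseline; if driver attention degrades as the system improves, the independence assumption breaks and so does the conclusion.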
 
I disagree with the implication that Tesla is testing a secret version of FSD that is L4 or unsupervised FSD.

There's no need for there to be a secret version. I thought SAE autonomy level was largely about design intent, after a basic threshold of capabilities are met.

We know Tesla has the ability to disable driver monitoring requirements. If they did this for their safety drivers, and gave them instructions not to control the vehicle or disengage except in case of emergency, why would it still be considered level 2? Why couldn't it be considered level 4 with a safety driver?
 
There's no need for there to be a secret version. I thought SAE autonomy level was largely about design intent, after a basic threshold of capabilities are met.

We know Tesla has the ability to disable driver monitoring requirements. If they did this for their safety drivers, and gave them instructions not to control the vehicle or disengage except in case of emergency, why would it still be considered level 2? Why couldn't it be considered level 4 with a safety driver?
If they required a driver to be in the car to take over "in emergency situations", it wouldn't be level 4, which by definition "will not require a driver to take over".
 
So you're claiming that as soon as Cruise put safety drivers back in their cars, they were immediately demoted from L4 to L3?
It is about design intent. While developing an L4 you will use a safety driver.
An L3 is a completely different solution, so it is not that. It's an L4 under development because Cruise says so.

According to Tesla communication with authorities "FSD city streets" has a Level 2 design intent.
 
So you're claiming that as soon as Cruise put safety drivers back in their cars, they were immediately demoted from L4 to L3?
It's stated in level 4 that a driver is not required to take over. If a driver is required to sit in a driver's seat and be ready to take over, it absolutely won't be level 4. If they are required to monitor the car at all times like you suggested, it's level 2 by definition.
 
According to Tesla communication with authorities "FSD city streets" has a Level 2 design intent.

That's Tesla's communications with the California DMV. They could be testing higher levels of autonomy outside of California.

It's stated in level 4 that a driver is not required to take over. If a driver is required to sit in a driver's seat and be ready to take over, it absolutely won't be level 4. If they are required to monitor the car at all times like you suggested, it's level 2 by definition.

You're confusing the design intent with the practical operation. It could be a system designed such that no driver is needed, but that system could fail, at which point having a safety driver is a good safety measure. The SAE levels don't require that the system operate flawlessly.
 
That's Tesla's communications with the California DMV. They could be testing higher levels of autonomy outside of California.



You're confusing the design intent with the practical operation. It could be a system designed such that no driver is needed, but that system could fail, at which point having a safety driver is a good safety measure. The SAE levels don't require that the system operate flawlessly.
I disagree. If a company requires a driver to sit in the driver's seat ready to take over and to monitor the road, it's not a level 4 system. Tesla is aiming for level 4, but we have level 2 while it's training and developing.
 
Nothing you said in your post invalidates what I said.
Design intent.
If a safety driver is training level 4, it's not level 4 while they are there to take over.
But it is. The design intent is Level 4. It's just not a product yet. An L4 under development is not an L2. It's still an L4 that isn't yet in production.
 
But it is. The design intent is Level 4. It's just not a product yet. An L4 under development is not an L2.
Completely disagree. Tesla's design intent is level 4/5, but it's level 2. When Waymo and Cruise have drivers sitting behind the wheel making corrections, they don't even call it level 4. They say safety drivers are helping to train the new area... then when the drivers are removed they point to level 4.
 
Completely disagree. Tesla's design intent is level 4/5, but it's level 2. When Waymo and Cruise have drivers sitting behind the wheel making corrections, they don't even call it level 4. They say safety drivers are helping to train the new area... then when the drivers are removed they point to level 4.
Tesla's design intent is not 4/5. That's Tesla's marketing.

Level 5 isn't possible. Level 4 requires an ODD definition. FSDb doesn't have an ODD nor an L4 technical design intent.
It's clearly a Level 2 system, since consumers can't be regarded as "safety drivers" and it has a DMS.

As people will eventually come to understand, you don't transition from an L2 design intent to an L4 design intent. There are different requirements on hardware and software. You design for L4. Ingress/egress/UX is completely different, apart from the reliability requirements.
 
Tesla's design intent is not 4/5. That's Tesla's marketing.

Level 5 isn't possible. Level 4 requires an ODD definition. FSDb doesn't have an ODD nor an L4 technical design intent.
I disagree with all of this. You are pushing your opinion. You continue to cite what Tesla provides to the California DMV without context.

Level 5 isn't possible now, but making an absolute statement like that without the "now" qualifier is ridiculous. Just like when you said level 4 with just cameras is impossible and tried to back it with "show where it's done". It's like you pretend innovation and advancement aren't real.
 
Tesla's design intent is not 4/5. That's Tesla's marketing.

Level 5 isn't possible. Level 4 requires an ODD definition. FSDb doesn't have an ODD nor an L4 technical design intent.
It's clearly a Level 2 system, since consumers can't be regarded as "safety drivers" and it has a DMS.

As people will eventually come to understand, you don't transition from an L2 design intent to an L4 design intent. There are different requirements on hardware and software. You design for L4. Ingress/egress/UX is completely different, apart from the reliability requirements.

What started this conversation was Elon saying they have a measure of how "unsupervised FSD" was progressing. These measurements must be collected through simulation, or by testing the system with minimum human intervention through safety drivers.

If FSD is L2 design intent, why are they measuring the progress of the system without human supervision?
 
I disagree with all of this. You are pushing your opinion. You continue to cite what Tesla provides to the California DMV without context.
I'm not "pushing my opinion".

A driver assistance system is not a robotaxi and a robotaxi is not a driver assistance system. The Robotaxi isn't designed to operate with a driver at all. Design intent matters.

Are you seriously claiming that a Cruise robotaxi with a safety operator is an L2?
 
I'm not "pushing my opinion".

A driver assistance system is not a robotaxi and a robotaxi is not a driver assistance system. The Robotaxi isn't designed to operate with a driver at all. Design intent matters.

Are you seriously claiming that a Cruise robotaxi with a safety operator is an L2?
This is such a myopic leading post.

Tesla is currently level 2, but is developing a level 4 system. That's the intent. The level 2 is a step to get to level 4. Whether they ever get there or not is not relevant to this particular discussion.

When Cruise has a driver and they are taking over, making corrections, being forced to monitor the road, it's absolutely not a level 4 system at the time by any definition of level 4.
 
If there is a L4 design intent, why isn't it subject to the same rules and regulations as Waymo and Cruise wrt employed safety operators and reporting of all accidents and disengagement reports?

Not all states have the same reporting requirements as California. It's perfectly legal to test autonomous vehicles with no registration and no reporting in many US states.
 