Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Included in the "stuff" was the sentence "Unsupervised FSD is trending well." In order for it to be trending at all, it needs to exist in some form, and it needs to have its performance measured.

You're free to argue that Elon is lying, if that's what you mean. But if he's not, I don't see any other interpretation.
As we have seen, there is an "Elon" unsupervised version, but nobody knows who is testing it or how many testers there are.
 
Just removing the nag and the DMS, as "Elon mode" in the code does, doesn't make the system better, nor does it make it unsupervised.
And, like most of your posts, you know this with certainty because you said so? Lol. For someone not in the US who doesn't have FSDb, you are certainly the self-appointed (in your mind) validator of all.
 
Lol, and what exactly does that mean in answer to your original comment? Not sure how much they pay you for FUD, but you certainly work for it.
Would you say that you experience:
a) Fear
b) Uncertainty
c) Doubt
d) All of the above

If you have any sources for your idea that "Elon mode" is anything more than the "turn off the DMS and nags" that Green found in V11, or that V12 would outperform V11 in such a way that it could run unsupervised, feel free to post them.
 
There is a reason the poster you are replying to is on my ignore list... and I think I have just two or three people on that list.

Sorry for warping your fragile little mind.

 
Where are Community Notes when you need them?

See Safety Argument Considerations for Public Road Testing of Autonomous Vehicles:

TL;DR: supervised autonomy doesn't get to "safer than a human" just by having a human supervise, because human supervision becomes unreliable as the system improves from hardly working toward working.

View attachment 993221
Seems like you should at least include the rest of the figure.
 
Seems like you should at least include the rest of the figure.
View attachment 993564

So 10x human safety is another 10-20 years away?

Even with a log scale, I'm not sure the vertical scale makes much sense, or that Supervisability Safety and Autonomy Safety are additive or even smooth functions. More likely they are step-like functions tied to L2, L3, and so on. I would think Supervisability Safety would need to be greater than or equal to a normal human driver's safety as long as Autonomy Safety is below a normal human's, and especially when autonomy has FSD-like safety/performance. But it sure is a pretty chart.
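To make the objection concrete, here is a minimal toy model of what an additive, smooth-curve reading of such a chart would imply. All curve shapes and parameters are made up for illustration; nothing here is taken from the report or the figure.

```python
# Toy model: net safety as the sum of two assumed smooth curves.
# HUMAN_BASELINE, the growth/decay rates, and the time axis are all
# illustrative assumptions, not data from the chart being discussed.
import math

HUMAN_BASELINE = 1.0  # normalized "normal human driver" safety level

def autonomy_safety(years: float) -> float:
    # Assumed smooth exponential improvement of the autonomy stack.
    return 0.01 * math.exp(0.3 * years)

def supervisability_safety(years: float) -> float:
    # Assumed smooth decay as supervising drivers grow complacent.
    return HUMAN_BASELINE * math.exp(-0.2 * years)

def net_safety(years: float) -> float:
    # The chart's (questionable) additive assumption.
    return autonomy_safety(years) + supervisability_safety(years)

for y in (0, 5, 10, 20):
    print(y, round(net_safety(y), 3))
```

Under these assumed curves, net safety dips below the human baseline in the middle of the transition, which is the worry about supervised autonomy; step-like functions tied to L2/L3/L4 would behave differently, which is the point of the objection above.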
 
“Unsupervised FSD is trending well.” Where exactly? In simulations? On a secret test track somewhere? Because AFAIK there is no unsupervised FSD on any road anywhere.

Elon is probably talking about simulations. Tesla can collect data on disengagements and play them out in simulations to see what FSD would have done had the driver not disengaged. From that, they can analyze how FSD would perform if it were unsupervised.
 
Elon is probably talking about simulations. Tesla can collect data on disengagements and play them out in simulations to see what FSD would have done had the driver not disengaged. From that, they can analyze how FSD would perform if it were unsupervised.
Again, he stated they are testing it with real, Tesla-hired drivers in multiple locations worldwide. Not to say that's the only data, but it's also not just simulations, as you keep assuming.
 
Seems like you should at least include the rest of the figure.
You should read the report, and the other studies over the last 60 years showing that humans suck at this type of monitoring and increasingly space out as the system improves, while the system is still nowhere near human levels of performance.

Automation complacency is a fact. Elon’s statement is marketing.
 
Again, he stated they are testing it with real, Tesla-hired drivers in multiple locations worldwide. Not to say that's the only data, but it's also not just simulations, as you keep assuming.

If you look at an earlier post before the one you quoted, I did not say it was only simulations. Tesla does real-world testing with safety drivers to get disengagement data and scenarios, and it can analyze that data to see where FSD needs to improve. But the fact is that you need to test what FSD would have done if the disengagement had not occurred, because you need to know whether the disengagement was really necessary, i.e., whether FSD could have handled the situation on its own. That is how you know if and when unsupervised FSD is safe enough. Simulation from disengagement data is a good way to test the safety of unsupervised FSD, and it is better to have simulated crashes than real-world crashes.
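As a rough sketch of the counterfactual-replay idea described above: replay each disengagement in simulation without driver input, record whether the system would have recovered, and derive the two numbers that matter. The event fields, outcome labels, and all numbers are hypothetical, not Tesla's actual pipeline.

```python
# Hypothetical counterfactual disengagement analysis (illustrative only).
from dataclasses import dataclass

@dataclass
class Disengagement:
    miles_since_last: float  # miles driven before this disengagement
    sim_outcome: str         # replay result without driver input:
                             # "handled" (system recovers) or "critical" (crash/violation)

def analyze(events: list[Disengagement]) -> dict:
    """Estimate how often disengagements were truly necessary, and the
    implied miles between critical failures if the system ran unsupervised."""
    total_miles = sum(e.miles_since_last for e in events)
    critical = [e for e in events if e.sim_outcome == "critical"]
    return {
        "necessary_fraction": len(critical) / len(events),
        "miles_per_critical": total_miles / len(critical) if critical else float("inf"),
    }

# Toy data: 4 disengagements, only 1 of which the system could not have handled.
events = [
    Disengagement(120.0, "handled"),
    Disengagement(80.0, "handled"),
    Disengagement(200.0, "critical"),
    Disengagement(50.0, "handled"),
]
stats = analyze(events)
print(stats)  # {'necessary_fraction': 0.25, 'miles_per_critical': 450.0}
```

The point of the sketch: "miles per critical failure" is the unsupervised-safety estimate you cannot get from raw disengagement counts alone, because many disengagements turn out to be unnecessary when replayed.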
 