Autonomous Car Progress

60 permit holders in California (with a safety driver):
  1. AIMOTIVE INC
  2. AMBARELLA CORPORATION
  3. APEX.AI
  4. APPLE INC
  5. ARGO AI, LLC
  6. ATLAS ROBOTICS, INC
  7. AURORA INNOVATION
  8. AUTOX TECHNOLOGIES INC
  9. BAIDU USA LLC
  10. BMW
  11. BOSCH
  12. BOX BOT INC
  13. CHANGAN AUTOMOBILE
  14. CONTINENTAL
  15. CRUISE LLC
  16. CYNGN INC
  17. DEEPROUTE.AI
  18. DELPHI
  19. DiDi RESEARCH AMERICA, LLC
  20. EASYMILE
  21. FORD
  22. GATIK AI INC
  23. HELM.AI INC
  24. HONDA
  25. IMAGRY INC
  26. INCEPTIO TECHNOLOGY INC
  27. INTEL CORPORATION
  28. KAIZR, INC.
  29. LEONIS TECHNOLOGIES
  30. LYFT, INC
  31. MANDO AMERICA CORP
  32. MERCEDES-BENZ
  33. NAVYA INC
  34. NIO USA INC.
  35. NISSAN
  36. NURO, INC
  37. NVIDIA CORPORATION
  38. PHANTOM AI
  39. PLUSAI, INC
  40. PONY.AI
  41. Qcraft.ai
  42. QUALCOMM TECHNOLOGIES, INC
  43. RENOVO.AUTO
  44. RIDECELL INC
  45. SF MOTORS INC
  46. SUBARU
  47. TELENAV, INC.
  48. TESLA
  49. THORDRIVE, INC.
  50. TOYOTA RESEARCH INSTITUTE
  51. UATC, LLC (UBER)
  52. UDACITY
  53. Udelv, Inc
  54. VALEO NORTH AMERICA, INC.
  55. VOLKSWAGEN
  56. VOYAGE AUTO INC
  57. WAYMO LLC
  58. WeRide Corp DBA WeRide AI
  59. XMOTORS.AI, INC
  60. ZOOX INC
5 permit holders without a safety driver:
  1. AUTOX TECHNOLOGIES INC
  2. Cruise LLC
  3. NURO, INC
  4. WAYMO LLC
  5. ZOOX, INC

Yeah but I would argue that most of those companies are not viable contenders because their FSD is not good enough.

If you look at the disengagement rates below, we see that there are a few companies with excellent-to-good rates and then a lot with terrible ones.

[Chart: 2019 California DMV disengagement report – miles per disengagement by company]


The 5 companies that have permits to deploy without a safety driver would be the viable ones, IMO, since being able to remove the safety driver suggests their FSD is genuinely good.

PS: Tesla is not on the graph because in 2019 they only reported 12 miles and 0 disengagements (the demo on Autonomy Day).

I am hoping with FSD Beta, Tesla will soon report their disengagements and we can put Tesla on this graph to compare with other companies.

PS: Baidu's numbers are suspicious because they reported an insane jump in miles per disengagement: from 40 miles per disengagement in 2017 to 18,000 miles per disengagement in 2019.
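
For anyone who wants to sanity-check the chart, the metric is simple: autonomous miles driven divided by reported disengagements. Here is a minimal Python sketch of that arithmetic; the miles/count splits are illustrative values I picked to match the rates quoted above, not figures copied from the report, and "Company X" is entirely made up.

```python
# Minimal sketch: "miles per disengagement" is just autonomous miles driven
# divided by the number of disengagements reported to the DMV that year.
# The miles/count splits below are illustrative, chosen only to reproduce
# the rates quoted in this thread; "Company X" is entirely invented.

reports_2019 = {
    # company: (autonomous miles, reported disengagements)
    "Baidu USA": (108_000, 6),     # ~18,000 mi per disengagement, as quoted above
    "Tesla":     (12.2, 0),        # the Autonomy Day demo mentioned above
    "Company X": (50_000, 2_500),  # hypothetical poor performer: 20 mi per disengagement
}

for company, (miles, disengagements) in reports_2019.items():
    if disengagements == 0:
        # Zero disengagements over a tiny sample says nothing about maturity.
        print(f"{company:>9}: {miles:>9,.1f} mi driven, 0 disengagements (sample far too small)")
    else:
        print(f"{company:>9}: {miles / disengagements:>9,.0f} mi per disengagement")
```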
 
But how do we know how close Tesla is to L4? How do you propose we measure how close Tesla is to L4? Watching videos from Early Access does not cut it.
Watch and learn.
You really act like you've not been in the ecosystem and seen how traffic light and stop sign recognition improved from "silly" to "impressive" with successive releases.

Same will happen with FSD.
Then they will add an additional data point in their quarterly safety report and show that those who have the release and engaged FSD are even less likely to be in accidents per mile driven.

Something along those lines.
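
If Tesla did add that data point, the arithmetic would presumably be as simple as comparing miles per accident with FSD engaged against miles per accident without it; a quick sketch with numbers invented purely for illustration:

```python
# Hypothetical illustration of that extra data point; every number is invented
# and does not come from any Tesla report.
fsd_miles, fsd_accidents = 5_000_000, 10
non_fsd_miles, non_fsd_accidents = 400_000_000, 2_000

print(f"FSD engaged : one accident per {fsd_miles / fsd_accidents:,.0f} miles")
print(f"Not engaged : one accident per {non_fsd_miles / non_fsd_accidents:,.0f} miles")
```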
 
You really act like you've not been in the ecosystem and seen how traffic light and stop sign recognition improved from "silly" to "impressive" with successive releases.

Same will happen with FSD.

Yup, I think Tesla has put all its eggs in the FSD basket. The current beta is capable of almost every human-like maneuver. It just needs to be refined, which we've already seen happen with the traffic light and stop sign control feature.
 
Watch and learn.
You really act like you've not been in the ecosystem and seen how traffic light and stop sign recognition improved from "silly" to "impressive" with successive releases.

Same will happen with FSD.
Then they will add an additional data point in their quarterly safety report and show that those who have the release and engaged FSD are even less likely to be in accidents per mile driven.

Something along those lines.

I don't think that works. The people with FSD are supervising the system and saving the car from getting into accidents. So of course, it will look safer on paper. It does not tell you if FSD would have avoided the accident if the driver had not intervened. That's the data point you need. Tesla would need to analyze the disengagements and see if the disengagements were safety related or not.
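
To make that concrete, here is a sketch (my illustration, not anything Tesla has announced) of the kind of analysis I mean: tag each disengagement by whether the car would have caused a safety problem had the driver not stepped in, and report miles per safety-critical disengagement rather than a raw supervised-accident comparison.

```python
from dataclasses import dataclass

# Sketch of the analysis suggested above. Every record is invented; this is
# not Tesla data and not a process Tesla has announced.

@dataclass
class Disengagement:
    miles_since_last: float
    safety_related: bool  # would the car have hit something or broken a rule without the takeover?

log = [
    Disengagement(120.0, False),  # driver took over out of caution; no actual hazard
    Disengagement(45.5, True),    # car was about to run a red light
    Disengagement(300.2, False),  # planner hesitated and the driver got impatient
    Disengagement(80.0, True),    # drifted toward a parked car
]

total_miles = sum(d.miles_since_last for d in log)
critical = sum(1 for d in log if d.safety_related)

print(f"Miles per disengagement           : {total_miles / len(log):,.1f}")
print(f"Miles per safety-critical takeover: {total_miles / max(critical, 1):,.1f}")
```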
 
When it goes to wide release! When it becomes available as "beta" for all owners with the FSD license, not the current limited beta.

Please explain. Are you saying that Tesla will have enough data to show when FSD is safe enough when it goes to wide release?

Sure, but driver supervision will still be required when FSD goes to wide release. So like I said, even in wide release, drivers will be saving FSD from accidents. So wide release won't tell you if FSD is safe enough unless you examine the cause of the disengagements.
 
Don't y'all think it's so crazy. Last week, we were thinking Tesla would roll out turns by having the drivers confirm this or that.

Tesla goes and rolls out no-confirm drive everywhere beta. It's just bananas. If y'all not in shock, I don't know what will shock you anymore. One day soon, Tesla will roll out 99% zero intervention rides, and then people will think that's how it always should have been.
 
Don't y'all think it's so crazy. Last week, we were thinking Tesla would roll out turns by having the drivers confirm this or that.

Tesla goes and rolls out no-confirm drive everywhere beta. It's just bananas. If y'all not in shock, I don't know what will shock you anymore. One day soon, Tesla will roll out 99% zero intervention rides, and then people will think that's how it always should have been.

I did not expect Tesla to roll out so much at once. I am particularly impressed that they did so much work on planning and driving policy.
 
Don't y'all think it's so crazy. Last week, we were thinking Tesla would roll out turns by having the drivers confirm this or that.

Tesla goes and rolls out no-confirm drive everywhere beta. It's just bananas. If y'all not in shock, I don't know what will shock you anymore. One day soon, Tesla will roll out 99% zero intervention rides, and then people will think that's how it always should have been.
I was thinking there was no way to do confirmations for turns but this obviously solves that issue.
It will surprise me if they roll out beta FSD to the whole fleet. I firmly believe that 99% zero-intervention rides (which I also believe they will achieve) will be significantly less safe than the current beta. The issue is not software, it's human behavior. Is the average human really going to be alert enough to catch the one failure after it has worked flawlessly 99 rides in a row?

What will shock me is statistically proven better than human performance.
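
To put rough numbers on both worries (complacent supervision and statistical proof), here is a back-of-the-envelope sketch. Every input is an assumption picked for illustration, and the human baseline is only the commonly quoted order of magnitude of roughly one crash per several hundred thousand miles, not an official figure.

```python
# Back-of-the-envelope sketch; every input below is an assumption, not data.

# 1) Supervised FSD at "99% intervention-free rides": the effective accident
#    rate depends on how often a complacent supervisor still catches the failure.
ride_failure_prob = 0.01   # 1 ride in 100 needs a save (the scenario above)
driver_catch_prob = 0.90   # assumed chance a bored supervisor still intervenes in time
avg_ride_miles = 10.0      # assumed average ride length

accidents_per_mile = ride_failure_prob * (1 - driver_catch_prob) / avg_ride_miles
print(f"Supervised FSD: roughly one accident per {1 / accidents_per_mile:,.0f} miles")

# 2) Rough human baseline (order of magnitude only).
human_miles_per_crash = 500_000
print(f"Human baseline: roughly one crash per {human_miles_per_crash:,} miles")

# 3) "Statistically proven better than human": by the rule of three, zero crashes
#    over N miles only bounds the true rate below ~3/N at 95% confidence, so you
#    need roughly 3x the human crash-free interval with no crashes at all.
print(f"Need ~{3 * human_miles_per_crash:,} consecutive crash-free miles just to match that bound")
```

With those made-up inputs, the supervised system comes out around one accident per 10,000 miles, which is exactly the worry: "99% per ride" can still be far from human-level unless the supervisor stays sharp.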
 
I don't know, but the same thing can be asked of current AP. It works 99% of the time on highways, and there are nags.

And we also don't know the statistics there, on the highways. We don't know whether AP makes that safer. There is just zero data published on it.

I think one could make a reasonable argument that AP improves alertness & safety statistics on the freeway (it can be very boring, and a human's attention can drift there in ways that a computer's attention does not), but the fact is we just have no idea. I actually tend to believe AP does make freeway travel safer, but I have no data. Only Tesla has the data needed, and they don't publish it. (It's actually not even clear their data would be enough...very tricky!)

It's also possible to make a reasonable argument that self-driving on surface streets is a much different situation where it may be much harder to improve safety. I tend to believe that FSD would make surface street driving less safe, at the "99 out of 100" level postulated above.
 
I tend to believe that FSD would make surface street driving less safe, at the "99 out of 100" level postulated above.

You may be right, but then again, you might be one of those "perfect is the enemy of the good" people. Even if Tesla FSD becomes 2-4x better than humans, it'll still make mistakes that seem "stupid" to a human. And some (many) people will call for it to be banned.
 
I don't know, but the same thing can be asked of current AP. It works 99% of the time on highways, and there are nags.
I still ask that question about Autopilot. It's far from proven in my opinion.
I think city FSD is fundamentally different, though. There is so much less time to correct errors. Watch how people need to hover their hands over the steering wheel; that requirement doesn't change in your 99% scenario (if you want an accident rate lower than without the system).
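
The "less time to correct errors" point is easy to quantify; the 1.5 s reaction time below is a commonly cited ballpark for perception plus reaction, not a measured figure for beta testers.

```python
# How far the car travels during a supervisor's reaction time (rough sketch).
reaction_time_s = 1.5            # commonly cited ballpark for perception + reaction; real values vary
speed_mph = 35                   # typical city arterial speed (assumption)
speed_ms = speed_mph * 0.44704   # mph -> m/s

distance_m = speed_ms * reaction_time_s
print(f"At {speed_mph} mph the car covers ~{distance_m:.0f} m before you even begin to react")
# ~23 m: on a surface street that can be the whole gap to a crosswalk or cross-traffic.
```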
 
You may be right, but then again, you might be one of those "perfect is the enemy of the good" people. Even if Tesla FSD becomes 2-4x better than humans, it'll still make mistakes that seem "stupid" to a human. And some (many) people will call for it to be banned.
I thought you were talking about 99% intervention-free? That's not even close to human-level performance.

All the laws are going exactly the opposite of what you're saying. More and more states are legalizing autonomous vehicles. If people really wanted them banned, wouldn't there be states banning them? As far as I'm aware, no state has made them illegal.
 
All the laws are going exactly the opposite of what you're saying. More and more states are legalizing autonomous vehicles.
The more I look into these laws, the more I realize it's a cluster $#@! of a situation, where laws were written without understanding of, or consideration for, different approaches to autonomy.

Tesla has been selling the Model S (and later the X, 3 & Y) since 2012, and yet there are still many states that prohibit or restrict its direct sales.
I do not have high hopes for autonomy laws being fixed properly or quickly.