What would happen if Tesla renamed AP & FSD?

I'm with OP on "Haters are going to hate".

These three factors remain no matter the name:
  • People/Luddites are scared of change. Tesla represents massive change.
  • Tesla is stepping on a room full of toes. Has anyone seen how much market share they've taken from the incumbent OEMs this year? There are trillions at stake, and the hidden anti-Tesla campaigns are real. One doesn't need to be a conspiracy theorist to realize that.
  • Media sensationalism/profit hunger. ADAS that kills people is juicy no matter the name.
I think a name change would instead add one more attack vector for the haters and upset existing AP and FSD owners; many would probably sue for false advertising.
 
Full Self Driving beta is a 100% accurate description. You can sit in the back seat when it's out of beta; not hard to understand at all.
Sure, right now it's bad enough that every sane person will monitor it like a hawk. However, once it gets really good, say a year between errors, Tesla is going to have to do a much better job of explaining to people that just because it drove 10,000 miles without an error doesn't mean it won't run over a pedestrian on mile 10,001.
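To put rough numbers on that point, here's a quick back-of-the-envelope sketch in Python (the one-error-per-50,000-miles rate is purely illustrative, not a real figure, and it assumes a constant, independent per-mile error probability):

```python
# Back-of-the-envelope: with a constant per-mile error probability,
# a long clean streak tells you nothing about the next mile.
error_rate = 1 / 50_000          # ASSUMED: one error per 50,000 miles, purely illustrative

# Probability of driving 10,000 miles without a single error
p_clean_10k = (1 - error_rate) ** 10_000
print(f"P(10,000 clean miles)     = {p_clean_10k:.1%}")    # ~81.9%

# Probability of an error on mile 10,001, given the clean streak:
# unchanged, because the model is memoryless.
print(f"P(error on the next mile) = {error_rate:.4%}")     # still 0.0020%
```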
This scenario scares me. This is even more reason to pull FSD after beta and then only allow trained W-2 employees to test it.

The ramp from "it could kill you at any moment" to "it's still 10x more likely to crash than a human" to "equal to or better than a human" is a really dangerous space.

Even at 10x more dangerous than a human, it will still be extremely rare for any individual to experience even a single fault in several years of using it. Human brains don't have the capacity to evaluate the risk of such infrequent events based on individual experience. We need to step back and evaluate the data.

The name it carries during that transition could help people understand the risk, but I'm not sure.

Gut check on my assumptions: I've been driving for 40 years and have been in 3 accidents. One would have been prevented by FSD; in the other 2 I was rear-ended while waiting to turn left, and no way will FSD, as currently imagined, be able to prevent that.

At 10x my rate, 1 accident in 40 years becomes 10, so one accident every 4 years for me.
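Spelling out that arithmetic (a quick sketch using my own numbers above, so purely illustrative, not fleet data):

```python
# The rate math above, spelled out. These are just my own numbers, not fleet statistics.
years_driving = 40
fsd_preventable_accidents = 1                 # accidents FSD could plausibly have avoided

my_rate = fsd_preventable_accidents / years_driving    # accidents per year
fsd_rate = 10 * my_rate                                # the "10x worse than me" assumption

print(f"My rate:  one accident every {1 / my_rate:.0f} years")    # every 40 years
print(f"10x rate: one accident every {1 / fsd_rate:.0f} years")   # every 4 years
```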

I think I'm pretty average. My dad's been driving 66 years. He's been in 1 accident, when he was about 21.

Most people I know have been in 0 to 1 accidents over decades of driving.

Whatever the actual numbers are, there is going to be a gap between "FSDb scares the crap out of you every day" and "FSD is better than 90% of human drivers," where it will be in a state of "drives really well for years but is still some factor more dangerous than most drivers."

How to navigate that gap will be super critical.
 
Agree 100%, this is my concern too. Companies testing much more reliable AVs go to great lengths to ensure their test drivers stay vigilant (I think some have two drivers in the car at all times). The argument here is that somehow it won't be an issue because the driver knows they're responsible, whereas employees know the company will be liable (though the Uber test driver who killed someone was charged with negligent homicide, and it seems that case is still ongoing).
Obviously Tesla is aware of this and has been cranking up the attention monitoring in FSD beta. Elon has said that complacency is the main cause of collisions while using Autopilot.
 
Would a W-2 employee help if the math presented is true? Let's assume it's right and the average person would only see an accident once every 4 years (10x worse than a human). That means an employee safety driver won't see an accident scenario for 4 years? At what point does the company say it's safe enough?
 
It's safe enough when it exceeds human performance. Tesla says that's one severe collision (>12 mph, i.e. airbag deployment) per 2 million miles.
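As a rough sense of scale, here's a napkin calculation (using the standard "rule of three" approximation for zero-event data and the 1-per-2-million-mile figure above; a sketch, not a real safety analysis):

```python
# Napkin math: how many collision-free test miles before you can claim, with ~95%
# confidence, that the severe-collision rate beats the human baseline?
# Rule of three: zero events in n miles -> 95% upper bound on the rate of about 3 / n.
human_rate = 1 / 2_000_000        # one severe collision per 2 million miles (figure above)

miles_needed = 3 / human_rate     # collision-free miles needed for the bound to beat it
print(f"~{miles_needed:,.0f} collision-free miles")    # ~6,000,000

# Any severe collision during testing pushes the requirement higher still.
```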

It's not the status of the employee; it's the fact that these companies are watching the employees like hawks and will fire them if they aren't being vigilant.

Here is Waymo's safety report for some context on the collision rates they saw in Chandler: https://storage.googleapis.com/sdc-...Waymo-Public-Road-Safety-Performance-Data.pdf
And their "fatigue risk management framework" (this is new, I haven't read it): https://storage.googleapis.com/sdc-prod/v1/safety-report/Waymo-Fatigue-Risk-Management.pdf
 
Next question: how many severe collisions have occurred on FSD Beta? We have stats for AP, both from NHTSA and Tesla, but I haven't seen stats for FSD Beta except what Tesla reports, which is virtually zero.

For comparison, Tesla started FSD Beta in Oct 2020, so it's been on the road for just under 2 years. Tesla is WAY behind Waymo:

[attached image]


Is it reasonable to hold Tesla to the same safety standard as today's Waymo, which has a head start of more than 8 years?
 
I think zero in about 35 million miles, which is of course very good. On the other hand, someone could have a fatal collision on FSD beta and that would instantly make it look very bad, since fatal collisions occur only about once per 100 million miles. And of course the population of people using FSD beta is not representative of the average population, so it's really too early to make a judgement.
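Rough math on why it's too early (same rule-of-three approximation as the earlier post, and it treats the ~35 million miles as one pooled sample, which glosses over who's driving and where):

```python
# Zero fatalities in ~35 million miles: what does that actually bound?
# Rule of three: zero events in n miles -> 95% upper bound on the rate of about 3 / n.
fsd_beta_miles = 35_000_000
upper_bound = 3 / fsd_beta_miles             # fatal collisions per mile, 95% upper bound

human_fatal_rate = 1 / 100_000_000           # roughly one fatal collision per 100M miles

print(f"95% upper bound: one per {1 / upper_bound:,.0f} miles")   # ~one per 11.7 million
print(f"Human baseline:  one per {1 / human_fatal_rate:,.0f} miles")
# 35M clean miles only rules out rates worse than ~1 per 11.7M miles,
# far from enough data to compare against 1 per 100M.
```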
Most people agree that with FSD beta in its current form, driver engagement is not an issue. My prediction is that the first severe collision will happen because the driver "wanted to see what would happen."
 

Looking at just the FSD beta is a bit too narrow. If you look at how many fatal accidents involve Tesla's ADAS overall, you get a clearer picture. After all, driving happens on city streets as well as on freeways.
 
And *this* we've seen over and over, with videos of people jumping into the back seat while on AP.
Not exactly what I'm talking about, and extremely rare. I'm talking about more normal people who let it run red lights and stop signs for fun.
Like this:
I don't have any statistics, but I don't believe it is rare. Stupid people do stupid things. If it's a minor accident, it doesn't make the evening news.