Elon has said that getting to safer than a human is pretty easy; what's difficult is getting to 10x or 100x safer than a human

And the next question is what factor of human safety will be required to mitigate enough risk to allow unlocking generalized robotaxis. Maybe robotaxis need to be 1000x safer than a human? Who knows? Elon doesn't know; he's making it up as he goes, because what he does know is that nobody has the answers yet.


In 2021 in the US there were 39,500 fatal crashes that caused almost 43,000 deaths, so a fleet of robotaxis 1000x safer than the average human would theoretically cause about 43 deaths per year, right?
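
As a quick back-of-envelope sketch (big assumptions: the entire US fleet is replaced, and deaths scale linearly with the safety factor):

```python
# Back-of-envelope: US traffic deaths if the whole fleet were robotaxis
# N times safer than the average human driver (2021 baseline).
human_deaths_2021 = 43_000  # approximate US traffic deaths in 2021

for safety_factor in (2, 10, 100, 1_000):
    deaths = human_deaths_2021 / safety_factor
    print(f"{safety_factor:>5}x safer -> ~{deaths:,.0f} deaths/year")
```

So 1000x safer pencils out to roughly 43 deaths per year, not single digits.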
 
Elon has said that getting to safer than a human is pretty easy

He also said self-driving was easy; he said it was basically a solved problem.

In like 2015.

Only many years later did he admit he was wrong, and that it's actually hard.

But it'll be solved Real Soon Now (on repeat for years now).

Who knows? Elon doesn't know; he's making it up as he goes, because what he does know is that nobody has the answers yet.

And the same is true for "how much in-car compute is needed for >L2 driving safely."

But Elon told us HW2 was 'enough' when it came out; this was untrue.
Then they said HW2.5 was 'enough' when it came out; this was untrue.
Then they said HW3 was 'enough' when it came out. Elon appears to have JUST doubled down on that, but I'd bet real $ it's also untrue.
The goalposts will then move to HW4, and it'll get real awkward given the lack of any upgrade path, so they'll put off admitting it as long as they can.



In 2021 in the US there were 39,500 fatal crashes that caused almost 43,000 deaths, so a fleet of robotaxis 1000x safer than the average human would theoretically cause about 43 deaths per year, right?

I think even 2x safer (if you had a LOT of evidence that rate was true) would be just fine to mitigate liability in court, but then I'm not on a jury hearing such a case. Certainly 10x ought to be PLENTY "safe enough" for any sane jury. 1000x seems insanely unrealistic, especially since RTs will still be in crashes caused by OTHER non-RT cars for a good while.

The issue remains that we have no evidence Tesla's system (or ANYONE'S system) is capable of being even just as safe as a human when operated unsupervised over a significant amount of time.
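
For a sense of what "a significant amount of time" means statistically, here's a minimal sketch using the "rule of three" (assuming a human baseline of roughly 1 fatality per 100 million miles):

```python
# "Rule of three": with zero events observed in N trials, the 95% upper
# bound on the true event rate is about 3/N. Here: how many fatality-free
# unsupervised miles before you can even claim parity with humans?
human_rate = 1 / 100_000_000  # fatalities per mile (approximate US average)

miles_for_parity = 3 / human_rate  # need 3/N <= human_rate, so N >= 3/rate
print(f"~{miles_for_parity:,.0f} fatality-free unsupervised miles")  # ~300 million
```

And that's just parity; demonstrating 10x would take an order of magnitude more evidence.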
 
I think there's very little chance a company takes on liability for a generalized robotaxi at just as safe, 2x safer, or likely even 10x safer than the average human driver. Elon has previously thrown this number out, among others, over the years:

[attached screenshot]

But here I'm not even sure he's talking about a generalized robotaxi rather than a Level 2 ADAS; we can have a Level 2 ADAS that greatly enhances safety with a human backup driver but doesn't shift liability to the company.

And here are more numbers from the 4Q21 earnings call, where he talks about the ease of FSD becoming better than a human:

Elon said:
So, it's -- yeah. So actually being better than a human, I think, is relatively straightforward, frankly. How do you be 1,000% better or 10,000% better? Yeah, that's what, you know, gets much harder. But I think anyone who's been in the FSD beta program, I mean, if they were just to plot the progress of the beta interventions per mile, it's obviously trending to, you know, a very small number of interventions per mile, and the pace of improvement is fast.
Source: Tesla (TSLA) Q4 2021 Earnings Call Transcript | The Motley Fool
 
At 10x safer a company will own and operate generalized Robotaxis.

Remember Uber operates taxis now at 1x, as do a lot of other companies.

They will be insured by insurance companies.
I'm pretty sure Uber drivers are individually liable for negligence causing an accident; a company like Uber is only liable for accidents where it was negligent in its selection of drivers, and that negligent selection is proven to be the cause of the accident.

Here's the full context for the question about Level 4 + safety factor in the 4Q21 earnings call, timestamp at 42:06

 
At 10x safer a company will own and operate generalized Robotaxis.

Remember Uber operates taxis now at 1x, as do a lot of other companies.

They will be insured by insurance companies.
This also means that the insurers will be the ones to decide the actual risk with their dollars, and they won't be fooled by bullshit. As with climate/hurricane risk in Florida, they will look at objective science as much as possible for pricing. A non-affiliated insurance company agreeing to take on liability is a good signal of safety.
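
To make that pricing logic concrete, here's a minimal sketch of how an insurer might turn observed data into a premium; every number is an illustrative assumption, not a real actuarial figure:

```python
# Sketch: liability premium priced from observed crash data, not marketing.
def annual_premium(miles_per_year: float,
                   claims_per_million_miles: float,  # observed rate
                   avg_cost_per_claim: float,
                   load_factor: float = 1.3) -> float:
    """Expected loss, times a load for expenses, uncertainty, and profit."""
    expected_claims = (miles_per_year / 1_000_000) * claims_per_million_miles
    return expected_claims * avg_cost_per_claim * load_factor

# Sparse driverless history -> insurer assumes a bad rate and charges for it:
print(f"${annual_premium(50_000, 5.0, 40_000):,.0f}/year")  # $13,000/year
# A demonstrated ~10x-better rate -> far cheaper coverage:
print(f"${annual_premium(50_000, 0.5, 40_000):,.0f}/year")  # $1,300/year
```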

At the moment, I think nobody could get insurance other than Waymo.

There are now leaks from Cruise employees about how their bosses pushed for deployment too fast and overhyped the technology and its maturity. Seems familiar.

It also suggests lower moral standards under GM ownership as opposed to Ford, which cut back/killed its autonomy investment. I think Farley at Ford is honest and understands the problems well, including how difficult everything with EVs and driver assist really is.
 
This also means that the insurers will be the ones to decide the actual risk with their dollars, and they won't be fooled by bullshit. As with climate/hurricane risk in Florida, they will look at objective science as much as possible for pricing. A non-affiliated insurance company agreeing to take on liability is a good signal of safety.
In fact I think that, just as liability insurance is required for human drivers, it should be required for robotaxi companies too.
 
Not sure what's so difficult to understand. AP is and has been better than humans because humans lack full attention and can fall asleep, etc. There are probably countless stories of AP saving someone after they dozed off (my coworker told me hers last week).

It can be safer than a human in some cases (its awareness of moving objects all around is better than a human's today, so AP/FSDb on a divided highway is good), and yet totally fail as a hypothetical L4 robotaxi.

The safety we are seeing today is because humans override it and turn it off where they know it doesn't work well. I've never crashed with FSDb (or in my own driving since owning it). Would it have been unsafe or undesirable driving without human intervention? Yes. Certainly some illegal moves, and it's very annoying to other human drivers and likely to induce others to have accidents.

As for FSD safer than a human: yeah, it's late by 4-5 years so far (3 years if you count from Autonomy Day), which is late even by Musk standards.
 
I'm pretty sure Uber drivers are individually liable for their negligence causing an accident, a company like Uber is only liable for accidents where they were negligent in their selection of drivers and their negligent driver selection is proven to be the cause of an accident.
GPT-4 disagrees with you.

“Generally, Uber and Lyft are not liable for car accident injuries unless the driver who caused the accident was working for them at the time of the crash. This means that the driver must have been either en route to pick up a passenger or transporting a passenger when the accident occurred. If the driver was offline or waiting for a ride request, then Uber and Lyft are not responsible for the driver’s actions.“
 
GPT-4 disagrees with you.

“Generally, Uber and Lyft are not liable for car accident injuries unless the driver who caused the accident was working for them at the time of the crash. This means that the driver must have been either en route to pick up a passenger or transporting a passenger when the accident occurred. If the driver was offline or waiting for a ride request, then Uber and Lyft are not responsible for the driver’s actions.“
There are definitely unknowns here, and that alone is a risk.

I know you and Knightshade aren't looking at this through rose-coloured glasses, and yet I think Elon is more pessimistic about Level 4+ lol, and that he would make it very clear if he were pressed with the right questions and zero ambiguity between FSD as a Level 2 ADAS and owning liability across a fleet of generalized robotaxis. Not two years ago he was talking about the difficulty of achieving 10-100x human safety in the context of Level 4+, in response to a question that was short but at least framed correctly.
 
I think even 2x safer (if you had a LOT of evidence that rate was true) would be just fine to mitigate liability in court, but then I'm not on a jury hearing such a case. Certainly 10x ought to be PLENTY "safe enough" for any sane jury. 1000x seems insanely unrealistic, especially since RTs will still be in crashes caused by OTHER non-RT cars for a good while.
Sorry, I have to disagree with you here (and, ironically, agree with Elon). People don't get statistics, and juries are stupid. (If you need evidence of people's complete ignorance of statistics, look at the comments around the COVID vaccine and vaccines in general.)

No one will see the accidents that are prevented, they’ll only see the accidents that happen and say “a human never would have made that mistake!” Then there will be congressional hearings with someone crying how a self driving car that was recklessly released on the unsuspecting public killed their [insert relationship here] and they’ll never get them back but if they can prevent just one death by testifying it will all be worth it. Then some senile senator whose idea of technology is a coffeemaker that has a timer on it and had a hard time figuring out how to use his electric razor will write a bill to ‘protect the public.’
 
The safety we are seeing today is because humans override it and turn it off where they know it doesn't work well. I've never crashed with FSDb (or in my own driving since owning it). Would it have been unsafe or undesirable driving without human intervention? Yes. Certainly some illegal moves, and it's very annoying to other human drivers and likely to induce others to have accidents.
It's really hard to judge the overall safety, partly because of what you said and partly because safety depends on what kind of driving you're doing: day/night, city/suburban/rural, surface streets/highways, etc.

as a hypothetical, how would/should we deal with a system that was 10x better than humans on highways but only 0.8x as good on city streets?
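
Here's a minimal sketch of why there's no single answer; with made-up baseline rates and a made-up mileage split, the 10x highway gain can be almost entirely eaten by the 0.8x city regression:

```python
# Hypothetical: 10x safer on highways, 0.8x as safe on city streets.
# Whether that's a net win depends on the driving mix. All numbers assumed.
human_rate = {"highway": 0.5, "city": 1.0}      # crashes per million miles
safety_factor = {"highway": 10.0, "city": 0.8}  # system vs. human
mile_share = {"highway": 0.4, "city": 0.6}      # share of miles driven

human = sum(mile_share[r] * human_rate[r] for r in mile_share)
system = sum(mile_share[r] * human_rate[r] / safety_factor[r] for r in mile_share)
print(f"human:  {human:.2f} crashes per million miles")   # 0.80
print(f"system: {system:.2f} crashes per million miles")  # 0.77
print(f"net: {human / system:.2f}x safer overall")        # ~1.04x
```

On that mix the system is barely better than a human overall, despite being 10x better on highways.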
 
as a hypothetical, how would/should we deal with a system that was 10x better than humans on highways but only 0.8x as good on city streets?
In Europe legislators would allow the system to operate on the highway, but not on city streets :)
In the US, Waymo has already been deployed driverless in cities (for 3+ years now), and I don't think they claim to be safer than a human, but they just might be by now. Apparently it's doable if you have the right approach. No need for hypotheticals.
 
Operating a small geofenced robotaxi setup is very different from Tesla flipping a switch and activating a fleet of generalized robotaxis potentially numbering in the millions; these are totally different risk profiles.

Waymo vehicles are also loaded with a plethora of sensors and redundancies, and they're still limited in number and in where they will operate, never mind trying to operate everywhere with vision only.
 
Operating a small geofenced robotaxi setup is very different from Tesla flipping a switch and activating a fleet of generalized robotaxis potentially numbering in the millions; these are totally different risk profiles.

Waymo vehicles are also loaded with a plethora of sensors and redundancies, and they're still limited in number and in where they will operate, never mind trying to operate everywhere with vision only.
How is any of this relevant to the insurance/liability discussion?
 
How is any of this relevant to the insurance/liability discussion?
It's much easier to test a geofenced system, and the potential unknowns for a system that can operate anywhere are orders of magnitude greater.

If you've been to different areas of the country, you can appreciate the significant differences in roads, driving styles, etc. A system geofenced to one city only needs to be 'taught'/programmed to deal with the drivers and road styles of that city vs. the entire country.
 
It's much easier to test a geofenced system, and the potential unknowns for a system that can operate anywhere are orders of magnitude greater.

If you've been to different areas of the country, you can appreciate the significant differences in roads, driving styles, etc. A system geofenced to one city only needs to be 'taught'/programmed to deal with the drivers and road styles of that city vs. the entire country.
That's all true, but even more important, I think, is the provider's testing of the routing and mapping within the geofence; there sure seem to be many mapping errors or uncertainties (going by the Tesla experience) that likely need to be manually annotated after testing, at least for all major complex intersections and routes.
 
No one will see the accidents that are prevented, they’ll only see the accidents that happen and say “a human never would have made that mistake!” Then there will be congressional hearings with someone crying how a self driving car that was recklessly released on the unsuspecting public killed their [insert relationship here] and they’ll never get them back but if they can prevent just one death by testifying it will all be worth it. Then some senile senator whose idea of technology is a coffeemaker that has a timer on it and had a hard time figuring out how to use his electric razor will write a bill to ‘protect the public.’
I've long worried about the emotional reaction that is likely to result from an AI system doing something "stupid" that any human driver would never do. But if you're right, there is no reason to even pursue AI driving, because these systems will have some "stupid" accidents as they continue to learn and improve. It really does depend on statistics to get through this phase of development/learning. You can't put the vehicle on the road until it's statistically superior to human drivers, because that's your only justification that the "stupid" accident, statistically, is worth it. Then it continues to improve and you get to 2x or 3x or 10x or 15x or 100x with continuous training. The "stupid" errors will happen, and if you can't justify them as a necessary stepping stone, statistically acceptable on the road to continued death and injury reductions, then give up now.

I think it's a hard sell and will pull juries' heartstrings for sure....
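
Still, for concreteness, the statistical case looks something like this (illustrative numbers only): even counting the headline-grabbing "stupid" crashes in the total, a 2x-safer fleet is a large net reduction.

```python
# Illustrative only: a fleet driving 10 billion miles/year at 2x human safety.
fleet_miles = 10_000_000_000
human_rate = 1 / 100_000_000  # fatalities per mile (approximate US average)

human_deaths = fleet_miles * human_rate  # ~100 if humans drove those miles
av_deaths = human_deaths / 2             # ~50 at 2x safer, INCLUDING the
                                         # "stupid" crashes no human would have
print(f"net lives saved per year: ~{human_deaths - av_deaths:.0f}")  # ~50
```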
 
I know you and Knightshade aren't looking at this through rose-coloured glasses,
No need to get personal. You may *think* you know a lot about me, but you would be quite wrong ;)

I'm talking about "a company", not specifically Tesla or Waymo. Essentially, if I have a company that can do FSD at 10x the human level, I would have no problem insuring the company and offering liability protection.
 