Why it is very, very dangerous to be precautionary about self-driving cars

This is the Musk/Hotz argument, and I guess one adopted by much of the Silicon Valley community working on deep-learning vision AI. The OP @Trent Eady is a Tesla-positive writer on Seeking Alpha, whose opinion certainly aligns with this vision.

It is not a bad argument, but it needs to be noted there is a lot of hyperbole behind it - and a self-serving purpose. Let me explain.

The self-serving purpose is that this "vision AI" community wants, and believes it can, jump ahead of the more comprehensive autonomous driving projects (think multi-redundant sensor suites, redundant computers and systems, rigorously implemented and tested driving policy, responsibility for the driving taken by the car company, etc.) through cheap sensor suites (at the extreme, think cell-phone-level cameras), fleet learning and especially deep learning for driving (think Tesla FSD or comma.ai, with no responsibility for the driving taken by the maker of the system).

Why is this a self-serving purpose? Because this vision AI community does not have the resources or the time (they are behind on both) to go the comprehensive route. To jump ahead, they must rely on aggressive deep learning and fleet validation, which they see as the disruptive opportunity more traditional players are missing with their redundant sensors, lidars, manual labelling/teaching and whatnot.

Taken to the extreme, their idea goes something like this: strap cheap cameras on cars, hook them up to a barely-powerful-enough generic CPU/GPU running a neural network, drive them a lot (on real roads and perhaps in simulators) so the NN learns to drive, deploy this to a massive fleet with data collection, hand the fleet data to regulators (shadow mode and/or real mode), rinse and repeat enough times, and we're there. That's basically the disruptive idea. It is not a bad idea.

On the regulatory side, the success of this disruptive idea depends on selling regulators the notion that this fleet data will be sufficient proof of the safety of the system. Not controlled "clinical" trials, not a comprehensive approach, but an aggressive machine learning approach relying on commodity hardware deployed as quickly as possible on vast consumer networks.

And that's one reason you have guys like Musk/Hotz/Eady talking about how unethical it would be to deny this route: the success of the concept they are rooting for depends on it. I'm not saying they don't believe it, I'm just saying this additional angle affects the opinion a lot, just as a company with a more rigorous approach might advocate for more rigorous testing prior to approval.

Now, of course all of these players will mix and match. Some "vision AI" guys will have more secondary sensors and redundancy than others, while some traditional players will certainly also employ techniques similar to the vision AI guys'. In the end they might even all end up in the same place, having just taken wildly different routes to get there.

We shall see who succeeds, who gets there first and who is right. I do share the concern about what an early, high-profile crash might do to autonomous efforts. Let's hope no one goes ahead too fast and sets it back for everyone.
 
Yeah, what sounds more promising to me is engineers looking one by one at the flagged examples and determining whether the human driver or the software made the better/safer decision.

That is one way that the AI can become aware that its response was sub-optimal. But it is very labor intensive, and there are other ways.
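
To make that review process a bit more concrete, here is a hypothetical sketch of the flagging step: compare what the software would have done (shadow mode) with what the human driver actually did, and queue large disagreements for an engineer to judge. The names, fields and threshold are my own inventions for illustration, not anyone's actual pipeline.

```python
# Hypothetical sketch: flag frames where shadow-mode output and the human
# driver disagree strongly, so an engineer can decide which was better/safer.
from dataclasses import dataclass

@dataclass
class Frame:
    human_steering_deg: float    # what the driver actually did
    shadow_steering_deg: float   # what the software would have done

def flag_for_review(frames, threshold_deg=15.0):
    """Return indices of frames with a large human/software disagreement."""
    return [i for i, f in enumerate(frames)
            if abs(f.human_steering_deg - f.shadow_steering_deg) > threshold_deg]

# Example: three frames, one large disagreement worth a human look.
log = [Frame(2.0, 3.5), Frame(-1.0, 20.0), Frame(0.0, 0.5)]
print(flag_for_review(log))   # [1] -> an engineer labels who was right
```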

Thank you kindly.
 
Truth be told, I find the premise of this thread to be ridiculous. Arguing that your autonomous system should be deployed simply because it is "N times safer than a human driver" is outright lazy and reckless, to say nothing of how vague and meaningless the statement is to begin with. Let's say you could deploy an autonomous system that eliminates all incidents that would have been caused by human error (negligence, inattention, poor training, etc.). You would then be replacing them with incidents caused by the behaviour of the system itself. In other words, people would be at risk simply for using the system as designed. Good luck getting any competent engineer to sign off on a system that kills its users at a non-negligible rate.

Real engineering needs to be done in the form of system architecture design, algorithm development, risk analysis, etc. - not empty hand-waving from context- and knowledge-free utilitarian calculus.
 
Oh, if that is all you mean, then sure. Tesla is doing that right now.

Thank you kindly.

Truth be told, we have no idea what Tesla is doing to validate their FSD, probably not much yet beyond development-time testing, but even that's just guesswork.

We have at least one source saying that Audi is testing a number of scenarios in a controlled manner in public traffic for their Level 3 system. So they have a checklist of scenarios they are testing in public traffic, and I expect in simulation as well.

For Tesla, all we know is that they too are using simulation for something (based on their hiring). As for whether they intend to prove the merits of their FSD through controlled testing or through fleet validation, with users driving in a non-controlled manner in shadow mode, all we have is Tesla's comment that they intend to do at least the latter.

What controlled tests Tesla intends to do, we don't know. However, I am sure they will mix and match these approaches and do controlled tests as well. How rigorously, and which is their primary approach, is hard to tell. So far they aren't saying much of anything about FSD.
 
What counts is fatality reduction, not other safety measures (like non-fatal accident damage).

Airplanes remain under pilot control even with extremely comprehensive autonomous capability; expect the same for cars.

Q
What's safer than an autonomous car?

A
An autonomous car under human control.
 
On the regulatory side, the success of this disruptive idea depends on selling regulators the notion that this fleet data will be sufficient proof of the safety of the system. Not controlled "clinical" trials, not a comprehensive approach, but an aggressive machine learning approach relying on commodity hardware deployed as quickly as possible on vast consumer networks.

And that's one reason you have guys like Musk/Hotz/Eady talking about how unethical it would be to deny this route: the success of the concept they are rooting for depends on it. I'm not saying they don't believe it, I'm just saying this additional angle affects the opinion a lot, just as a company with a more rigorous approach might advocate for more rigorous testing prior to approval.

There are two quite distinct topics here:

1) How to test the safety of self-driving cars.

2) After testing, what level of safety to require before self-driving cars are deployed.

How to test the safety of self-driving cars

Tesla’s self-driving cars can and should be subject to all the same rigorous testing that Waymo’s self-driving cars are subject to, including controlled experiments. And more.

I am concerned Waymo is not doing enough rigorous testing simply because it doesn’t have access to the amount of data that Tesla does. The RAND Corporation estimates that 11 billion miles of driving data may be needed to get robust statistical evidence that self-driving cars actually improve safety. Even driving non-stop 24/7, Waymo’s fleet of 100 self-driving minivans would take 500 years to get to 11 billion miles. So, how will we know that Waymo’s vehicles are actually safer? What if they’re in fact endangering lives? What hard evidence will we have one way or the other?
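
For the curious, here is a quick back-of-the-envelope check of that 500-year figure in Python. The 25 mph average speed is my own assumption; only the fleet size and the 11-billion-mile target come from the text above.

```python
# Back-of-envelope check of the 500-year figure. Assumptions (mine, not the
# post's): ~25 mph average speed, driving 24 hours a day, 365 days a year.
fleet_size = 100                                   # Waymo minivans
avg_speed_mph = 25                                 # assumed average speed
miles_per_van_per_year = avg_speed_mph * 24 * 365  # ~219,000 miles
fleet_miles_per_year = fleet_size * miles_per_van_per_year
target_miles = 11e9                                # RAND's estimate

years_needed = target_miles / fleet_miles_per_year
print(f"{years_needed:.0f} years")                 # ~502 years
```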

Assuming Model 3 production goes as planned, Tesla’s HW2 cars will drive 11 billion miles by 2020. If shadow mode is actually a workable substitute for real driving, then Tesla will have hard evidence for the safety of its self-driving software by 2020. HW2 cars will do at least another 20 billion miles in 2020, so the software can be continually improved and re-evaluated.
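
As a sketch of where a projection like that can come from, here is the structure of the calculation. The production ramp and per-car mileage below are my own illustrative placeholders, not Tesla figures; they merely land in the same ballpark (roughly 10+ billion cumulative miles entering 2020 and about 20 billion during 2020).

```python
# Rough structure of a cumulative fleet-mile projection. All inputs below are
# my own placeholders; the post does not state the inputs behind its figures.
miles_per_car_per_year = 12_000     # roughly average US annual mileage
hw2_cars_added_per_year = {         # hypothetical new HW2 cars added each year
    2017: 100_000,
    2018: 250_000,
    2019: 350_000,
    2020: 1_000_000,
}

fleet = 0
cumulative_miles = 0
for year in sorted(hw2_cars_added_per_year):
    fleet += hw2_cars_added_per_year[year]
    miles_this_year = fleet * miles_per_car_per_year
    cumulative_miles += miles_this_year
    print(year, f"fleet {fleet:,}",
          f"{miles_this_year/1e9:.1f}B miles this year",
          f"{cumulative_miles/1e9:.1f}B cumulative")
```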

So, it is possible that Waymo will never do rigorous testing of its self-driving vehicles and will deploy them anyway, blindly taking the risk that it could be endangering people’s lives. Tesla, on the other hand, plans to use shadow mode as a form of rigorous testing that yields hard statistical evidence on the safety of its self-driving software.

What level of safety to require

Now, the second topic. Once we’ve rigorously tested self-driving cars and know how safe they are relative to the average human driver, what level of safety should we require before we deploy them to the public?

Let me take an obvious example. Suppose we have self-driving cars that are 1000x safer than the average human driver. What is the more ethical decision: a) hold back these cars from the public until they achieve 10,000x safety or b) deploy them? I think most people would agree (b) is more ethical.

Here is a less obvious example. Suppose we have self-driving cars that are 2x safer than the average human driver. What is the more ethical decision: a) hold back these cars from the public until they achieve 10x safety or b) deploy them? Not everyone agrees on what the more ethical decision is. I argue that (b) is more ethical because even a 4-year delay would result in the deaths of 170,000 people in the United States alone, according to the RAND Corporation’s model. I believe there is a moral obligation to act when inaction would cause people to die. Not everyone agrees on this point.
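
To make the arithmetic behind that claim concrete, here is a deliberately simplified sketch. It is not the RAND model, which also accounts for adoption rates and the technology improving over time; the baseline fatality figure is my approximation of annual US road deaths.

```python
# Simplified sketch of the "cost of delay" arithmetic. NOT the RAND model;
# the inputs below are my own approximations.
baseline_deaths_per_year = 37_000   # approximate annual US road fatalities
safety_factor = 2.0                 # "2x safer than the average human driver"
adoption = 1.0                      # fraction of driving done by AVs (upper bound)
delay_years = 4

avoidable_deaths_per_year = baseline_deaths_per_year * (1 - 1 / safety_factor) * adoption
print(f"~{avoidable_deaths_per_year * delay_years:,.0f} avoidable deaths over the delay")
# ~74,000 with these simplistic inputs; RAND's fuller model, with its own
# adoption and improvement assumptions, produces the larger figure cited above.
```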
 
Let's say you could deploy an autonomous system that eliminates all incidents that would have been caused by human error (negligence, inattention, poor training, etc.). You would then be replacing them with incidents caused by the behaviour of the system itself. In other words, people would be at risk simply for using the system as designed. Good luck getting any competent engineer to sign off on a system that kills its users at a non-negligible rate.

Is the distinction between human error causing death and system error causing death really the overriding concern? Which is better: 1) a self-driving car with a statistically established rate of fatal crashes 10,000x lower than the average for human drivers, which still sometimes kills passengers due to system error or 2) manually driven cars that kill people 10,000x more often, but never due to system error?

If (1) is better simply because 10,000x is such a high number, what’s the minimum number that would make self-driving cars a better option than manually driven cars? 1.01x? 2x? 10x? 50x? 100x? How do we pick this number? Is it arbitrary? Or is there some good reason to pick one number instead of the others?

I think that 2x is probably good enough, but that in principle even if self-driving cars were only 1% safer (1.01x) they would be preferable. The problem is that self-driving cars may need to be significantly above average human safety in order for people to feel comfortable using them. Most people think they are above average drivers, and we tend to suffer from optimism bias, underrating our chances of dying in a car crash. Plus some people really are above average drivers, and would actually be safer in a manually driven car.

I have not been able to find any quantitative information on the skill distribution of drivers. I want to know what the crash rate of the best drivers is compared to the average. It is more convincing and reassuring if self-driving cars are not just safer than average, but safer than the best human drivers.
 
1) How to test the safety of self-driving cars.

Simulation is great preparation, but someone has to set up all the different scenarios, and apparently they forgot to practice one for the Navya Arma autonomous van in Las Vegas on Wednesday, 11/08/2017.

SpaceX does a lot of simulation but its rockets still explode in real life from time to time.

...
2) After testing, what level of safety to require before self-driving cars are deployed.

That depends on whether you are in Google's camp or Tesla's:

Google saw its pilot testers fall asleep at the wheel. It therefore believes that a less-than-perfect automation system is not acceptable, and it purposely withholds its technology from the public until the day it is safe to sleep in a driverless car.

The press also reported that Tesla engineers quit when the company announced Full Self-Driving Capability.

---------------

It is difficult to judge whether Tesla saves more lives or Google does.

It might be great in theory but so far:

There has never been any fatality associated with Google's car automation effort, but there is one documented Tesla Autopilot fatality and numerous reports of accidents while owners rely on Autopilot (including cases where they mistakenly think it's on, such as the lawsuit from South Korean star Ji Chang Son).
 
Is the distinction between human error causing death and system error causing death really the overriding concern? Which is better: 1) a self-driving car with a statistically established rate of fatal crashes 10,000x lower than the average for human drivers, which still sometimes kills passengers due to system error or 2) manually driven cars that kill people 10,000x more often, but never due to system error?

You're not actually quantifying anything here, but I will agree to preferring #1 if the risk of death is negligible or a freak occurrence. But that's not what you were arguing in your initial post. You were making a utilitarian argument that any factor better than a human driver - whatever that even means in the first place - is preferred.

If (1) is better simply because 10,000x is such a high number, what’s the minimum number that would make self-driving cars a better option than manually driven cars? 1.01x? 2x? 10x? 50x? 100x? How do we pick this number? Is it arbitrary? Or is there some good reason to pick one number instead of the others?

Integrity numbers are completely arbitrary and depend on the needs of the deployed system. For example, in weapons-grade navigation, integrity numbers are not that stringent - only enough to make sure the payload reaches its intended target and does not destroy something else. If the weapons vehicle lands in the ocean instead, nobody will care. In safety-of-life applications, designers become very risk-averse. Nobody wants to be part of a system where fatal crashes are a feature.

I think that 2x is probably good enough, but that in principle even if self-driving cars were only 1% safer (1.01x) they would be preferable. The problem is that self-driving cars may need to be significantly above average human safety in order for people to feel comfortable using them. Most people think they are above average drivers, and we tend to suffer from optimism bias, underrating our chances of dying in a car crash. Plus some people really are above average drivers, and would actually be safer in a manually driven car.

Two points.
  1. Throwing the 2x number out there (especially without even defining what any of this means) just looks like wild speculation without any background in system integrity design. Do you have any? What kind of navigation or control systems have you dealt with that guide this estimate?
  2. The rest of what you wrote is just the other side of the coin of what I was talking about - the user and business perspective.
 
My overarching question is: how much safer than human drivers do self-driving cars need to be before we should accept them? How do we pick that number? Is it arbitrary? Just based on tradition? Or gut feeling? Perhaps there is a systematic way to pick a number.

What I think we need is data from insurance companies (and maybe police departments or government agencies) on the crash rates of the best drivers (as opposed to the average crash rate, which we already have). Then we can set that as the benchmark. That way, no individual person is increasing their risk by getting in a self-driving car.
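
To illustrate what such a benchmark could look like, here is a hypothetical sketch with an invented skill distribution; the real per-driver crash rates would have to come from insurers or government data, as noted above.

```python
# Hypothetical sketch of setting the benchmark at the best drivers' crash rate.
# The distribution below is invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
# invented skill distribution: lognormal crash rates, median ~2 per million miles
driver_crash_rates = rng.lognormal(mean=np.log(2.0), sigma=0.6, size=100_000)

average_rate = driver_crash_rates.mean()
best_decile_rate = np.percentile(driver_crash_rates, 10)  # "the best drivers"

print(f"average driver:         {average_rate:.2f} crashes per million miles")
print(f"10th-percentile driver: {best_decile_rate:.2f} crashes per million miles")
print(f"implied AV benchmark:   ~{average_rate / best_decile_rate:.1f}x safer than average")
```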

In principle, I consider it to be morally irrelevant whether human error or machine error is the cause of death. If two people are killed by a human, that's twice as bad as if one person is killed by a machine. Put another way: would you rather accept a 25% chance of being killed by a machine or a 50% chance of being killed by a human? Maybe most people disagree, but I would like to see polling on this question before jumping to conclusions.
 
...how much safer than human drivers do self-driving cars need to be before we should accept them?...

The question is academic, which is good for discussion but not of much practical consequence.

Put another way, it is just like asking whether passengers would accept a pilotless airliner.

We can cite all the statistics and arguments, but at the end of the day, we just need to get a pilotless airliner that can fly mail, luggage, cargo and commercial products with no human aboard at all, and see how safe it is first!

Too many decades of talk, yet in practice we are still at the very infancy stage, with lots of hype that it's already here!

In the US, numerous states allow autonomous vehicle testing, but three states go further and allow autonomous vehicles in general as long as they follow traffic laws:

9/20/2016 Florida
12/10/2016 Michigan
5/30/2017 Georgia

The laws are there, so it's time to do it rather than just talk about it.
 
An interesting point of comparison here is airbags. Airbags cause somewhere in the range of 10-20 deaths per year in the United States. (One study argues that airbags actually cause more deaths than they prevent! :eek:) This is a case where we’ve accepted that a technology will sometimes kill people because we think it will save the lives of even more people.

If autonomous cars were 2,500x safer than human-driven cars and received 100% adoption, then there would be 15 deaths per year caused by autonomous cars in the United States. If it’s okay for airbags to kill that many people, surely it’s also okay for autonomous cars to.
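
The arithmetic behind that 15-deaths-per-year figure, assuming roughly 37,500 annual US road fatalities (the baseline the comparison seems to imply):

```python
# Arithmetic behind the 15-deaths-per-year figure; the baseline fatality
# count is my assumption, roughly matching annual US road deaths.
baseline_deaths_per_year = 37_500
safety_multiple = 2_500            # "2,500x safer than human-driven cars"
adoption = 1.0                     # all driving done autonomously

av_deaths_per_year = baseline_deaths_per_year / safety_multiple * adoption
print(av_deaths_per_year)          # 15.0, in the same range as airbag deaths (10-20/yr)
```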

I’m of the opinion that a net reduction in deaths is good and desirable, but since robots are scary, some people think the public will only accept robot cars if gross deaths are below a certain level, even if the net effect is a massive reduction in deaths.
 
...airbags...

It is wise to be cautious with any technology, because it's just science: if there's a fault, it's likely correctable!

Airbags save more lives than they take. However, that doesn't mean NHTSA should not do an airbag recall.

NASA felt its program was so safe that it sent up a teacher (a civilian), and her space shuttle exploded.

Uber also saw how safe autonomous driving seemed for Waymo, so it initially refused to comply with California DMV Autonomous Vehicle Registration, went to another state instead, and killed a pedestrian there.

It was more acceptable to be a maverick about autonomous vehicles before the pedestrian's death.

When an autonomous vehicle has just killed a pedestrian, it is not the time to say the technology saves more lives than it takes; it is time to be more cautious and to analyze what went wrong and how the problem could be fixed.