Yeah, what sounds more promising to me is engineers looking one by one at the flagged examples and determining whether the human driver or the software made the better/safer decision.
Please define 'rigorous testing' for the job of driving on all US roads.
That's why engineers have to filter out the cases where shadow-mode results differ so much from real-life driving: shadow mode wants to swerve while the human driver keeps steering straight.
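A minimal sketch of what that filtering might look like, assuming the fleet logs pairs of human and shadow-mode steering commands (all names and the threshold are hypothetical, just to illustrate the idea):

```python
# Flag moments where shadow mode disagrees sharply with the human driver,
# so engineers can review them one by one.
# Hypothetical log format: (timestamp, human_steering_deg, shadow_steering_deg).

def flag_disagreements(log, threshold_deg=15.0):
    """Return log entries where the shadow system diverged from the driver."""
    return [(t, h, s) for (t, h, s) in log if abs(h - s) > threshold_deg]

log = [
    (0.0, 0.0, 0.5),     # both roughly straight: agree
    (1.0, 0.0, 25.0),    # shadow wants to swerve, driver holds straight: flag
    (2.0, -10.0, -9.0),  # gentle curve, small difference: agree
]
flagged = flag_disagreements(log)
# flagged == [(1.0, 0.0, 25.0)]
```

The flagged entries are exactly the ones a human reviewer would then judge: was the swerve the safer call, or a false alarm?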
A controlled test, or a series of controlled tests... It can happen on public roads, of course.
As opposed to uploading stuff on a fleet of users and seeing how it performs.
Oh, if that is all you mean, then sure. Tesla is doing that right now.
Thank you kindly.
But on the regulatory side, the success of this disruptive idea depends on selling regulators the notion that fleet data will be sufficient proof of the safety of the system. Not controlled "clinical" trials, not a comprehensive approach, but an aggressive machine-learning approach relying on commodity hardware, deployed as quickly as possible across vast consumer fleets.
And that's one reason guys like Musk/Hotz/Eady talk about how unethical it would be to deny this route: the success of the concept they are rooting for depends on it. I'm not saying they don't believe it, just that this additional angle affects the opinion a lot, just as a company with a more rigorous approach might advocate for more rigorous testing prior to approval.
Let's say you could deploy an autonomous system that eliminates all incidents that would have been caused by human error (e.g. negligence, inattention, poor training, etc...). You would then be replacing them with incidents caused by the behaviour of the system itself. In other words, people would be at risk simply for using the system as designed. Good luck getting any competent engineer to sign off on a system that kills its users at a non-negligible rate.
1) How to test the safety of self-driving cars.
...
2) After testing, what level of safety to require before self-driving cars are deployed.
Is the distinction between human error causing death and system error causing death really the overriding concern? Which is better: 1) a self-driving car with a statistically established rate of fatal crashes 10,000x lower than the average for human drivers, which still sometimes kills passengers due to system error or 2) manually driven cars that kill people 10,000x more often, but never due to system error?
If (1) is better simply because 10,000x is such a high number, what's the minimum number that would make self-driving cars a better option than manually driven cars? 1.01x? 2x? 10x? 50x? 100x? How do we pick this number? Is it arbitrary? Or is there some good reason to pick one number instead of the others?
I think that 2x is probably good enough, but that in principle even if self-driving cars were only 1% safer (1.01x) they would be preferable. The problem is that self-driving cars may need to be significantly above average human safety in order for people to feel comfortable using them. Most people think they are above average drivers, and we tend to suffer from optimism bias, underrating our chances of dying in a car crash. Plus some people really are above average drivers, and would actually be safer in a manually driven car.
...how much safer than human drivers do self-driving cars need to be before we should accept them?...
...airbags...