
Amnon Shashua on human vs. machine mean time between failures (MTBF)

I'm intrigued and a bit puzzled by Mobileye CEO Amnon Shashua's comments in this clip:


Here's the full video. The technology part of Shashua's talk starts around 48:30:


The part I find puzzling is that Shashua says the mean time between fatalities for human drivers is about 1 million hours. I'm feeling sleepy today so I might be getting this math wrong, but in the U.S., at least, a driving fatality occurs on average 1.13 times per 100 million miles, or once per 88.5 million miles by my calculation. If the average time between fatalities is 1 million hours, then by my math that would imply an average driving speed of 88.5 miles per hour, which seems impossible. What gives?

AAA's statistics imply an average U.S. driving speed of 37 miles per hour. If we assume this is correct, that would mean there's a fatality every 2.4 million hours of driving. (Since 88.5 million miles per fatality / 37 mph = 2.4 million hours per fatality.) Is Shashua just rounding this number to the closest power of ten? Is he using non-U.S. statistics?
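
To make this easy to check, here's the arithmetic as a few lines of Python. The only inputs are the figures cited above (1.13 fatalities per 100 million miles and the AAA-implied 37 mph); nothing else is assumed:

```python
fatalities_per_100m_miles = 1.13                        # NHTSA rate, U.S.
miles_per_fatality = 100e6 / fatalities_per_100m_miles  # ~88.5 million miles

# If Shashua's 1 million hours between fatalities were right, the implied
# average driving speed would be:
implied_mph = miles_per_fatality / 1e6                  # ~88.5 mph (implausible)

# Using AAA's implied 37 mph average speed instead:
hours_per_fatality = miles_per_fatality / 37            # ~2.4 million hours

print(f"{miles_per_fatality / 1e6:.1f}M miles per fatality")
print(f"implied speed at 1M hours/fatality: {implied_mph:.1f} mph")
print(f"hours per fatality at 37 mph: {hours_per_fatality / 1e6:.1f}M")
```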

Shashua says the mean time between failures (MTBF) for Mobileye's automatic emergency braking (AEB) system is “once every tens of thousands of hours of driving”. Presumably, this means 20,000+ hours.

Per NHTSA, the average miles between injuries on U.S. roads is 1.2 million miles. At an average speed of 37 miles per hour, a system's mean time between injuries would have to be 33,000+ hours to match or exceed human safety.

The average miles between collisions in the U.S. is about 500,000 miles. At an average speed of 37 miles per hour, a system would need a mean time between crashes of 14,000+ hours to match or beat humans.
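
The same calculation for injuries and collisions, again as a sketch using only the figures above (the 500,000-mile collision figure is the rough number cited, not an exact NHTSA statistic):

```python
avg_mph = 37                    # AAA-implied average U.S. driving speed
miles_per_injury = 1.2e6        # NHTSA figure cited above
miles_per_collision = 500e3     # rough U.S. figure cited above

hours_per_injury = miles_per_injury / avg_mph         # ~32,400 -> "33,000+"
hours_per_collision = miles_per_collision / avg_mph   # ~13,500 -> "14,000+"
print(f"{hours_per_injury:,.0f} hours per injury, "
      f"{hours_per_collision:,.0f} hours per collision")
```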

Another way to do these calculations is just to assume Shashua's figure of 1 million hours per fatality is correct and then use the ratio between fatalities, injuries, and collisions from NHTSA. Injuries are 75x more frequent than fatalities, so a machine needs a mean time between injuries of 14,000+ hours (i.e. 1 million hours / 75). Collisions are 177x more frequent than fatalities, so the machine needs 6,000+ hours between collisions.
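
And the ratio version, which takes only Shashua's 1-million-hour figure as given plus the mileage figures above:

```python
miles_per_fatality = 88.5e6
injury_ratio = miles_per_fatality / 1.2e6      # ~74x, rounded to 75x above
collision_ratio = miles_per_fatality / 500e3   # ~177x

machine_hours_per_fatality = 1e6               # Shashua's figure, taken as given
print(f"~{machine_hours_per_fatality / injury_ratio:,.0f} hours between injuries")
print(f"~{machine_hours_per_fatality / collision_ratio:,.0f} hours between collisions")
# -> ~13,600 and ~5,650 hours, rounded up to 14,000+ and 6,000+ above
```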

From the perspective of injuries and collisions, a system that fails “once every tens of thousands of hours of driving” sounds pretty good. It's possible Shashua is being excessively conservative by using the rate of fatalities rather than injuries or collisions.

Also, it seems arbitrary for Shashua to multiply the 1 million hours figure by 10 in an attempt to exclude fatalities caused by drunk driving and distracted driving. Isn't it precisely one of the benefits of machines that they don't get drunk or distracted? Why exclude this safety advantage from consideration?

Rather than the ~1000x improvement Shashua says is needed, I would argue it's no more than ~100x, since I don't agree with Shashua's decision to 10x the figure in order to exclude drunk and distracted driving.

Based on the injury and collision rates, maybe something more like a ~10x improvement is needed. A mean time between failures of 200,000 hours would be significantly better than humans even if 100% of failures caused collisions and 100% of failures caused injuries, as long as ≤5% of failures caused fatalities.

Even a 2x improvement would be better than the human average in the U.S. if the ratio of machine-caused injuries and collisions to machine-caused fatalities were the same as the ratio of human-caused injuries and collisions to human-caused fatalities.
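
Here's a quick sanity check of both scenarios. Note my assumptions: the human baselines are the rounded figures above, every machine failure is treated (worst case) as a collision and an injury, and "2x improvement" is read as doubling the ~20,000-hour interpretation of "tens of thousands of hours":

```python
human_hours = {"collision": 14_000, "injury": 33_000, "fatality": 2_400_000}

# Scenario 1: MTBF of 200,000 hours; worst case, every failure is a collision
# and an injury, and at most 5% of failures are fatal.
mtbf = 200_000
assert mtbf > human_hours["collision"]        # 200,000 > 14,000
assert mtbf > human_hours["injury"]           # 200,000 > 33,000
assert mtbf / 0.05 > human_hours["fatality"]  # 4,000,000 > 2,400,000

# Scenario 2: a 2x improvement (~40,000 hours between failures) with
# human-like outcome ratios: ~177 collisions and ~75 injuries per fatality.
mtbf2 = 40_000
assert mtbf2 > human_hours["collision"]           # 40,000 > 14,000
assert mtbf2 * 177 / 75 > human_hours["injury"]   # ~94,400 > 33,000
assert mtbf2 * 177 > human_hours["fatality"]      # ~7.1M > 2.4M
print("both scenarios beat the human baselines")
```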

Please let me know if I've made some math error or reasoning error. I could be missing something.
 
I don't think you're missing anything; I just think that an apples-to-apples comparison is extremely hard to do.

The accident rate, or fatality rate, is not the rate of failure of the human driver. It just isn't. There are plenty of times when humans nod off, swerve out of their lane, or do all sorts of things which do not result in an accident. Also, there are accidents and fatal accidents which are not caused by a "failure" of the driver but are actually unavoidable.

By comparison, unlike a human, self-driving systems are set up to record every "failure" whether it results in an accident or not. And because the current self-driving systems released to the public require the driver to take over, those systems are not complete.

You can't really say they are "unsafe", because they are safe. In my opinion (and who knows, I could be wrong), there is a general misuse of the word "unsafe" as applied to a self-driving system when the accurate word is "incomplete."

Just because the system is incomplete, it does not follow at all, either rhetorically or causally, that the system is unsafe.
 
I wonder if Shashua considers an AEB “failure” to be a false positive (i.e. phantom braking) rather than a false negative (i.e. failing to brake for a pedestrian). I’ve heard him cite Mobileye’s false positive rate as being very low before. But AEB tests of vehicles with Mobileye systems show a very high false negative rate.
 
When Shashua says that using two independent systems means the failure probability of each can be the square root of the total allowed failure probability, he is implicitly assuming that failures of the two systems are independent of each other. It is not obvious this is true. As a stupid example, if the road ends at a cliff at the top of a hill, neither camera nor lidar will detect this and the car will drive off the cliff. Dense fog is a less stupid example where both perception systems could fail.
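
For what it's worth, here's the square-root argument in miniature, assuming it refers to both systems failing at once (my reading, not necessarily Shashua's exact formulation; the 1e-6 budget is purely illustrative):

```python
import math

p_total = 1e-6               # allowed combined failure probability (illustrative)
p_each = math.sqrt(p_total)  # each independent subsystem may fail at 1e-3

# With full independence, P(both fail) = p_each * p_each = p_total.
assert math.isclose(p_each * p_each, p_total)

# With a common-mode failure (the cliff, dense fog), the two failure events
# are correlated and P(both fail) can be as large as min(p1, p2), so the
# square-root benefit evaporates.
```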
 