diplomat33
Average guy who loves autonomous vehicles
Missy Cummings in IEEE Spectrum:
What Self-Driving Cars Tell Us About AI Risks
5 conclusions from an automation expert fresh off a stint with the U.S. highway safety agency (spectrum.ieee.org)
A couple of critiques:
1) I do think there is a good amount of AI fearmongering in the article. She makes it sound like self-driving cars are death traps because the AI is unreliable, does not understand what it is doing, and will kill you when it makes a mistake. For example, the line "The difference is that while a language model may give you nonsense, a self-driving car can kill you." is an exaggeration. Certainly, self-driving cars are safety-critical systems, so a mistake can potentially kill you. But the reality is that not every AI mistake causes a fatal crash. Robotaxis like Waymo and Cruise have driven over 2M driverless miles without causing any fatalities, so we have real-world evidence of self-driving cars driving safely and not killing anyone.
2) She seems to describe all self-driving cars as the self-driving equivalent of ChatGPT, constantly drawing comparisons between the two. Her comparisons might be appropriate if the self-driving car used one big ChatGPT-like end-to-end neural network that drives directly from vision input. But we know that the self-driving cars on the road today do not use end-to-end. Robotaxis like Waymo and Cruise certainly use NNs in their stacks, but they use a modular approach with distinct NNs instead of one big NN. This modular approach mitigates a lot of the concerns she raises. The cars are not outputting driving decisions directly from vision the way ChatGPT outputs a response directly from a prompt; there are many steps and layers of redundancy before the driving output is made. There is redundancy in the sensors to improve reliability. There is also redundancy in the software: for example, different NNs performing the same perception task, so that if one NN fails, another can catch it. There are also separate NNs for perception and planning, so mistakes in perception can be mitigated by the driving policy. A lot of her comparisons simply won't apply, IMO.
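To make the modular-redundancy idea concrete, here is a minimal toy sketch (not any real AV stack; all names and logic are hypothetical) of two independent perception models feeding a separate planning stage, so that a miss by one detector can be caught by the other before any driving decision is made:

```python
# Toy illustration of a modular pipeline with redundant perception.
# Nothing here reflects Waymo's or Cruise's actual software; the two
# "models" are stand-ins for independent perception NNs.

from dataclasses import dataclass

@dataclass
class Detection:
    obstacle_ahead: bool
    confidence: float

def perception_model_a(frame: dict) -> Detection:
    # Hypothetical stand-in for a camera-based perception NN.
    return Detection(bool(frame.get("camera_obstacle", False)), 0.90)

def perception_model_b(frame: dict) -> Detection:
    # Hypothetical stand-in for a lidar-based perception NN
    # (redundant sensor modality).
    return Detection(bool(frame.get("lidar_obstacle", False)), 0.95)

def plan(detections: list[Detection]) -> str:
    # Separate planning stage: if ANY detector reports an obstacle,
    # act conservatively. A failure in one model is mitigated here.
    if any(d.obstacle_ahead for d in detections):
        return "brake"
    return "proceed"

def drive_step(frame: dict) -> str:
    # The driving output comes from the planner, not directly from
    # any single perception model's raw output.
    return plan([perception_model_a(frame), perception_model_b(frame)])

# One model misses the obstacle; the redundant one catches it:
print(drive_step({"camera_obstacle": False, "lidar_obstacle": True}))  # brake
print(drive_step({}))  # proceed
```

The point of the sketch is only the structure: because perception and planning are distinct stages and perception itself is duplicated, a single NN mistake does not flow straight into a driving action the way a bad prompt completion flows straight out of a language model.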