Found this thought-provoking video by the YouTube channel Transport Evolved:

I feel the video makes a good point. As long as we have a mix of autonomous cars and cars driven by (bad) human drivers, accidents are going to happen. Expecting autonomous cars to be infallible is not realistic, and if we say we will only accept fully autonomous vehicles once they are infallible, we could be waiting a very long time. So the question might be: if we expect autonomous cars to be much safer than human drivers but not 100% perfect, what level of fallibility would still be acceptable?