Human error is the main cause of traffic accidents, and it includes distracted driving (the #1 factor), carelessness in bad weather, recklessness, speeding, and intoxication. It would be hard to argue that self-driving cars will not help eliminate human error. Even the most skilled drivers make bad decisions, and as anyone who drives routinely knows, there are a lot of unskilled, dare I say clueless, drivers on the highway. Add to this pedestrians, wildlife, and anything that could fall into the roadway (and one's reactions to them), and there is a considerable amount of unpredictability that needs to be included in an autonomous model (a driver ontology). My argument is that the level of randomness needed to model the behavior of a driver will be in some respect proportional to the level of total system autonomy. Although autonomous vehicles will be able to achieve a high degree of autonomy, it will be difficult to achieve full self-driving without external influence (e.g. geofencing) until the models can eliminate ‘humans’ from the ontology.
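To make that proportionality claim a bit more concrete, here is a minimal sketch of the intuition. Every number in it is invented purely for illustration (the event rates and the per-level "autonomy share" are my assumptions, not measured data); the only point is that the more of the drive the system owns, the more of the world's randomness its model must resolve itself rather than hand back to a human.

```python
# Toy sketch: how much "randomness" the driving model itself must resolve
# as autonomy grows. All rates and shares below are invented for
# illustration; they are not measured data.

# Hypothetical stochastic event sources, in events per mile.
EVENT_RATES = {
    "erratic_driver": 0.020,
    "pedestrian":     0.010,
    "wildlife":       0.005,
    "road_debris":    0.002,
}

# Hypothetical share of the drive the system handles without a human
# fallback (level 2 is supervised; level 5 owns everything).
AUTONOMY_SHARE = {2: 0.3, 3: 0.6, 4: 0.9, 5: 1.0}

def events_model_must_own(level: int, miles: float) -> float:
    """Expected random events the model must resolve itself (rather than
    hand back to a human driver) over `miles` miles of driving."""
    return AUTONOMY_SHARE[level] * sum(EVENT_RATES.values()) * miles

for level in sorted(AUTONOMY_SHARE):
    print(f"level {level}: {events_model_must_own(level, 10_000):.0f} "
          f"events per 10k miles")
```

Nothing deep there, but it shows why the ontology problem scales: at level 5 there is no human fallback left, so every erratic driver, pedestrian, and deer belongs to the model.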
A human-centric autonomous vehicle model will require external influence (this is not level 5). The reasoning needed to achieve level 5 must be independent of human operational decision-making. I’m not begging the question here. Although “by definition” level 5 is free of human operational decision-making and control, a level 5 model will still need to make decisions following a strict protocol based on present meteorological and highway conditions. In other words, it’s based on the limits of the vehicle within the system, not the limits of driver reaction and situational awareness. If there ever is a locally interpreted level 5 model, I believe it will require all other vehicles to be at level 3/4 (some form of peer-to-peer situational awareness). We will have to wait until human driver decision-making is relegated to merely picking a destination before a fully robust and resilient autonomous system is realized (consider the OODA loop).
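As a rough illustration of what a strict, conditions-based protocol buys you, here is a sketch using standard stopping-distance kinematics (d = v·t_react + v²/(2µg)). The friction coefficient and reaction time below are common engineering estimates, not any real vendor's protocol; the point is only that dropping the human perception-reaction term lets the permissible speed be set by vehicle and road limits alone.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(v: float, mu: float, t_react: float = 0.0) -> float:
    """Perception-reaction travel plus braking distance v^2/(2*mu*g),
    standard flat-road kinematics."""
    return v * t_react + v**2 / (2 * mu * G)

def protocol_speed(sight_m: float, mu: float) -> float:
    """Highest speed whose braking distance fits the available sight
    distance when there is no human reaction term to budget for."""
    return math.sqrt(2 * mu * G * sight_m)

# Wet pavement (mu ~ 0.4, a common engineering estimate) with 80 m of
# visible clear road ahead.
mu, sight = 0.4, 80.0
v_auto = protocol_speed(sight, mu)

# A human needs roughly 1.5 s of perception-reaction time (a typical
# textbook value), so solve v*t + v^2/(2*mu*g) = sight for v (quadratic).
t = 1.5
a = 1 / (2 * mu * G)
v_human = (-t + math.sqrt(t * t + 4 * a * sight)) / (2 * a)

# Sanity check: both speeds stop exactly within the sight distance.
assert math.isclose(stopping_distance(v_auto, mu), sight)
assert math.isclose(stopping_distance(v_human, mu, t), sight)

print(f"vehicle-limited: {v_auto * 3.6:.0f} km/h, "
      f"human-limited: {v_human * 3.6:.0f} km/h")
```

With those assumed values, the vehicle-limited protocol permits about 90 km/h where the human-limited budget allows roughly 71 km/h; that gap is the "limits of the vehicle, not limits of driver reaction" distinction in miniature.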
One final thought: it is relatively easy to create passable test conditions when one knows the limits of what’s being tested (I’m not implying smoke and mirrors). However, we have some real-world feedback, and at present none of these ‘systems’ are at level 2.
“Artificial intelligence is no match for natural stupidity.” ~ unknown (at least I don’t know)
Edit: replaced level 6 with 5.