MIT Technology Review has some really interesting articles on this subject (though the site is down right now, and I am not sure you can get full articles without a subscription). In recent months they have covered the non-Tesla view (from GM, Google and others), discussing the many challenges not yet solved, as well as some of the moral issues.
When a human kills a pedestrian or another driver, there is the assumption that an error was made resulting in a death. An AI has a much more real chance of having to evaluate: squash one person, or crash into one or more vehicles and risk more people? In reality we probably do make that panicked evaluation in a crash situation - but not consciously. The neural net, on the other hand, will be making a 'conscious' decision.
I recently got my AP1.0 CPO, in large part because, having followed the technology for a while, I am not convinced AP 2.0 will ever be cleared for Level 5 autonomy - so I might as well enjoy a cheaper vehicle now and get AP 3.0 or 4.0 later.
Gates is quoted as saying we over-estimate what is possible in the next 2 years, while under-estimating what is possible in the next 10. Yet he and many of his peers believed - back in the late 80's! - that speech would be a primary input device within a matter of years.
I suspect we will find that to achieve Level 5 'autonomy' in the near term, we will have to adjust expectations and require all vehicles, and some/much of the signalling, to cooperate. Right now we are trying to automate a transit system which nobody in their right mind would build new in its current state. Let's put inexpert operators in charge of increasingly fast vehicles, with no constraints on their direction and speed of travel other than advisory signs and lights - and let's allow them to add new distractions to their environment every year - madness.
If cars 'talked' to each other, they could advise "I'm braking now", or "There is a 3 foot obstruction in the right-hand edge of the right lane, 30 yards ahead of me" - instead of each (fleet of) car(s) trying to separately evaluate the environment and work out why and what the vehicle in front is doing. Yes, there is the "follow me off the cliff - or into the bridge parapet" issue to deal with - but my point is that cooperation will drive safer vehicles, not massive independent effort to build 15 versions of a wheel. When governments wake up to that, we will see the communication and interaction standards being defined and mandated to simplify the puzzle.
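To make the idea concrete, here is a minimal sketch of what such a broadcast message could look like. Everything in it - the class name, field names, and JSON encoding - is purely illustrative and my own invention; real V2V standards (e.g. the SAE J2735 message set) define their own formats, compact binary encodings, and security layers.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical V2V message -- field names are illustrative, not from
# any real standard.
@dataclass
class V2VMessage:
    sender_id: str    # anonymized vehicle identifier
    msg_type: str     # e.g. "BRAKING" or "OBSTACLE"
    latitude: float   # position of the event, decimal degrees
    longitude: float
    detail: str       # human-readable summary, for this sketch only

    def to_wire(self) -> str:
        # Serialize to JSON for broadcast; a real system would use a
        # compact, signed binary encoding.
        return json.dumps(asdict(self))

    @staticmethod
    def from_wire(raw: str) -> "V2VMessage":
        return V2VMessage(**json.loads(raw))

# A car ahead broadcasts that it is braking hard:
alert = V2VMessage(
    sender_id="veh-4821",
    msg_type="BRAKING",
    latitude=37.7749,
    longitude=-122.4194,
    detail="hard braking, deceleration 0.6 g",
)
wire = alert.to_wire()

# A following car decodes the message directly, instead of inferring
# the braking event from camera/radar observation alone:
received = V2VMessage.from_wire(wire)
assert received == alert
```

The point of the sketch is the shift in architecture: the following car consumes an explicit, structured statement of intent rather than reverse-engineering the lead car's behavior from sensor data.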
Of course the question becomes: "what of humans?" The reason the early AP cars are driving this way today is that they have to account for wet-ware piloting their ICE projectiles. Given that, I suspect you will see city block and lane designations which are effectively "Autonomous Only" - where the traffic flows can easily be signaled to reroute, slow down, etc. as required for safety and other reasons.
If we continue to try to make the car replicate the human driving experience, we are pursuing the right end game with the wrong strategy (IMO). If we want safe, driverless transportation, we should design a system to do just that, not try to adapt a very broken system (100 deaths per day in the US, road rage, horrendous commute times, major pollution issues).
Just my 2c.