I thought I had made it pretty clear that I believe we will see autonomous vehicles soon. What I don't believe is that we will see widespread adoption of fully autonomous vehicles that allow no human operator input. You can think of this as a contrast between Google's model and Tesla's (current) model. Google wants to eliminate the human operator from the equation entirely; their next-generation vehicles don't even have a steering wheel, for example.
The edge cases you seem intent on dismissing are actually the most important ones, because they frequently represent life-or-death situations. Like this one, for example:
It was doubtless unsafe to drive in those conditions. However, NOT doing so would have been even more unsafe. And it is this aspect of human judgement which is the major issue with respect to fully autonomous vehicles.
It all boils down to the model we champion. I think Google's model - and the model of all those who pontificate about how humans are dangerous and should not be allowed to operate a vehicle - is a fundamentally naive one that doesn't consider the real impacts and outcomes. The more likely result is "creeping automation": automation that assumes more and more of the driving task, but which may never fully replace human drivers in all contexts.