sandpiper
Active Member
So, in the quarter that ended almost four months ago, Google's cars drove, on average, 5,318 miles between incidents where the autonomous system had to be overridden by the operator.
Please elaborate on how this is "not even close" to perfecting self-driving technology.
Well... if you're talking about eliminating the steering wheel & pedals, I'd say this is still a long way from perfected. What were the potential consequences when the driver had to override? Probably mostly close calls or minor accidents, some major ones, maybe even a fatality?
Right now, in the US, people drive about 3.2 trillion miles per year. At one "incident" per 5,000 miles, that's 640 million incidents per year. What is an acceptable rate? 1 million? 10 thousand? Any engineer or programmer will tell you that the cost & difficulty of eliminating bugs increases exponentially as the number of remaining bugs in the system decreases.
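To put rough numbers on that "acceptable rate" question, here's a quick back-of-the-envelope sketch in Python. The 3.2 trillion miles/year figure is from above; the improved intervention rates are just hypothetical milestones for illustration:

```python
# Back-of-the-envelope: nationwide "incidents" per year if every US mile
# were driven autonomously at a given miles-per-intervention rate.
US_MILES_PER_YEAR = 3.2e12  # approx. total US vehicle-miles traveled annually

# 5,000 mi/incident is roughly today's rate; the rest are hypothetical.
for miles_per_incident in (5_000, 50_000, 500_000, 5_000_000):
    incidents = US_MILES_PER_YEAR / miles_per_incident
    print(f"1 incident per {miles_per_incident:>9,} mi -> {incidents:>13,.0f} incidents/yr")
```

Even a 1,000x improvement over today's rate still leaves 640,000 incidents per year, which gives a sense of how far the system has to go.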
It's easy to say that the system has to be "as good as or better than a person". But public opinion & the courts are probably more sympathetic to a human who causes an accident and kills or injures themselves or others than they would be toward a machine built by a big company. A jury intuitively understands the argument "We're all human and make errors." They won't be so kind to a machine when some 6-year-old kid is killed on the road by an automated car. Would you like to be the defense attorney arguing that "this was a corner case the machine wasn't programmed for" and that "yes, a person would likely not have run the kid over, but statistically 2 other kids were saved when this one died"? It would ring a little hollow while the mother is sitting there in tears.
I'm sure that each individual Google tester is really impressed with the machine as they happily tool around California for 5,000 miles. But the law of large numbers is still waiting on the bench and will have to be dealt with before the end of the game.