Also remember that not all humans can drive in all conditions. We pull over for limited visibility, or slow way down. If conditions are terrible, sometimes we turn back. Furthermore, there are a lot more people who try to drive when they probably should have pulled over or turned back.
This isn't an effective argument. The discussion is this: if Level 5 autonomy ever becomes a thing, then legislation outlawing human drivers will likely follow.
If that is the case, then we will rely on machines to drive themselves in all conditions that exist, just like we rely on (some number of) humans to do so today. Perhaps some people aren't properly trained, but that's not what's being discussed here. If there is any condition on earth that requires driving for any reason, then a true Level 5 system must be able to handle it.
Otherwise, you get an automated ambulance that lets you die in the back because it's too foggy. The whole purpose of blended sensor suites is to give autonomous systems superhuman qualities.
Full autonomy is certainly a difficult problem but the idea that full autonomy is unsolvable is pretty silly IMO.
You're entitled to that opinion. I believe it's silly when people don't realize the complexity of the problem and just hand-wave it away by saying some magical future technology will automatically make everything better.
Heck, a lot of engineering problems seem unsolvable at first but eventually better technology comes around that makes the problem easier.
See, now you're making claims in my area of expertise. Can you name a single computer engineering problem that was considered impossible, and the simple march of time produced the solution?
So never say never. Also remember that just because a problem is extremely difficult that is not the same thing as being unsolvable.
I haven't said never, but I'm about as close to saying never as I can be, given what I see as the present state of the art. We don't need to have a discussion about complexity versus whether a problem is solvable; that's not really the crux of what I'm driving at here. Most of driving requires things like intuition and reasoning. Both are things that computers do not do, and likely never will. And I'm beyond doubtful that simply rubbing some neural networks on the problem is the solution. That still leaves 90% of the problem unsolved, since as of right now NNs are pretty much only being used for the sensor suites. Just slapping an LSTM on a data stream doesn't tell a computer "hey, it's 3:15pm in North America on a school day, so there's a high chance of a kid popping out randomly from the side of the road." Computers will never get that eerie sense humans do, the one that tells us to be on the lookout for something odd.
The only benefit I see computers offering right now, and possibly forever, is that they don't fall asleep or get distracted. And they react faster in most situations. Or at least they can react faster.
Just remember, the first time a robot car runs over a blonde white girl in a rich neighborhood, this stuff is going to get regulated and clamped down on big time. And given what I see from all the players publicly making waves, and from those keeping much quieter, that day is guaranteed to come. I really hope I'm wrong, and that the industry does a better job of policing itself, but we already saw the BS Uber pulled last year when they killed that woman in Arizona. They're not the only ones out there making stupid choices.