It’ll never be possible! Okay, it’s possible, but highly unlikely, though maybe a group of monkeys could randomly stumble on it given enough effort. Get your story straight!

There are a couple of things wrong here. First, I am suggesting that true full Level 5 autonomy may never be possible, especially with neural networks.
Second, Noam Chomsky is an interesting character, but he holds zero sway in this area of study. Simply put, this isn't his area of expertise.
Finally, and by far most importantly, these things aren't learning. Not in the biological sense, not in the classical sense, not in any sense. These networks have no intelligence either. At all, or in any sense. What they are is multi-layered, multi-variable probabilistic calculations. It's not that they don't learn the way a human brain does, it's that there's no learning involved at all. A training process twiddles weights and biases until the network outputs something desirable. That's not learning.
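To make the "twiddling weights and biases" point concrete, here is a minimal sketch of what training actually is. Everything in it (the target function, the learning rate, the loop counts) is invented for illustration; it's one artificial "neuron" fit by gradient descent, which is the same mechanism real networks use at vastly larger scale:

```python
# A toy illustration of what "training" actually is: nudge numeric
# weights until the output error shrinks. No cognition involved.

def predict(w, b, x):
    return w * x + b  # one "neuron": a weighted input plus a bias

# Target relationship the network should approximate: y = 2x + 1
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0   # start with arbitrary parameters
lr = 0.01         # learning rate: how hard to twiddle per step

for _ in range(1000):
    for x, y in data:
        err = predict(w, b, x) - y
        # Gradient descent: adjust each parameter against the error.
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))  # converges near w=2, b=1
```

Nothing in that loop "understands" anything; it's arithmetic that happens to reduce a number. Scale it up to millions of parameters and you get today's networks.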
People really need to stop attributing agency to these machines, because there is none. It's a computer algorithm, and that's it. It's the same as it was in the 1950s.
Accurately? Probably not. But the odds of solving Level 5 driving in 5 years are as close to zero as makes no difference. So maybe someone gets lucky, or maybe someone throws an unbelievable number of programmers at the problem, but those are about the only ways this is happening in the next 5 years. Basically it's the odds of infinite monkeys with typewriters coming up with Shakespeare. An infinitesimally probable event.
A million times, THIS. These things aren't magic. They may appear like magic to the uninformed, but it's just software on a (specialized) processor running matrix math: the part of algebra class most people slept through.
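For anyone who did sleep through that class, here is the entirety of what a network "layer" computes at inference time. The weights, bias, and input below are arbitrary numbers chosen for the sketch:

```python
# A neural-network layer is literally the matrix math from algebra
# class: output = activation(W @ x + b). That's the whole trick.

def matvec(W, x):
    # Plain matrix-vector product: each output is a weighted sum of inputs.
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def relu(v):
    # A common activation: clamp negatives to zero.
    return [max(0.0, a) for a in v]

W = [[0.5, -1.0,  2.0],
     [1.5,  0.0, -0.5]]   # 2 outputs, 3 inputs
b = [0.1, -0.2]
x = [1.0, 2.0, 3.0]

h = relu([s + bi for s, bi in zip(matvec(W, x), b)])
print(h)  # [4.6, 0.0]
```

Stack a few dozen of these and you have a "deep" network. Multiplications and additions all the way down.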
Honestly, I’m having a hard time following your reasoning. Are you suggesting Level 5 requires ‘general intelligence’ and is as such never possible? Or maybe possible, but not likely or soon? Again, it’s that shifting of goal-posts between non-zero absolutes, maybe.
Neural nets aren’t magic, and even if you want to debate whether their training is true ‘learning’, it’s really irrelevant; can a probabilistic algorithm determine drivable space within X bounds, 99.9% of the time? So far that seems to be a very reasonable outcome. Can that process not be extended to the different components/logical rules of driving in our societies? This seems obviously true to me, even if it took 200 different NNs. There’s no reason each one couldn’t end up being as skilled as, say, AlphaGo - why would this be any different?
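The "many specialized nets" idea can be sketched as follows. Every function name, score, and threshold here is hypothetical, standing in for a trained model; the point is only the architecture: probabilistic components glued together by deterministic rules.

```python
# Hypothetical sketch: each "net" is just a function from sensor data
# to a score or decision, and a plain rule layer arbitrates between
# them. All module names, outputs, and thresholds are invented.

def drivable_space_score(frame):   # stand-in for one trained net
    return 0.97

def pedestrian_detected(frame):    # stand-in for another net
    return False

def traffic_light_state(frame):    # stand-in for a third net
    return "green"

def decide(frame):
    # Deterministic glue around probabilistic components: safety
    # checks first, then the drivable-space confidence gate.
    if pedestrian_detected(frame) or traffic_light_state(frame) == "red":
        return "brake"
    if drivable_space_score(frame) > 0.95:
        return "proceed"
    return "slow"

print(decide(None))  # "proceed" with these stubbed outputs
```

Whether 200 such modules plus glue logic actually covers the long tail of real driving is, of course, exactly the point under debate.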
And really, humans aren’t prophetic; we do not intuit how to drive. We learn clearly defined rules through teaching and put that together with an ability to recognize patterns - joining the masses doing the exact same thing, with varying levels of success, through practice and reinforcement. These things are called Neural Nets for a reason.