First, this recent comic is very timely:
Sandra and Woo » [0934] Call Me A Skeptic | The comedy webcomic
The info in this thread is great, but I just wanted to look at the big picture and possibly clear up any misconceptions. So far, the only thing all production NNs do is identify patterns. The Tesla and Mobileye NNs identify lane markers, signposts, cars, etc. What they don't do is a very long list: they don't give the car steering and accelerator inputs (that's done by traditional programming), and they don't "understand" what they are looking at.
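To make that split concrete, here's a toy sketch (every name in it is hypothetical, not from any actual Tesla or Mobileye code): the NN's only job is to emit labeled detections, and plain hand-written code turns those into control outputs. There is no "understanding" anywhere in the loop.

```python
def fake_perception_nn(camera_frame):
    """Stand-in for the NN. In a real system this would be a trained
    model returning detected lane markers, cars, signs, etc."""
    return {"lane_center_offset_m": 0.4, "lead_car_distance_m": 25.0}

def traditional_controller(detections):
    """Ordinary hand-written logic -- no neural net here at all."""
    # Steer proportionally back toward the lane center.
    steer = -0.1 * detections["lane_center_offset_m"]
    # Accelerate if the lead car is far enough away, otherwise brake.
    accel = 1.0 if detections["lead_car_distance_m"] > 20.0 else -1.0
    return {"steering": steer, "accelerator": accel}

commands = traditional_controller(fake_perception_nn(camera_frame=None))
```

The NN part is pattern matching; everything that actually moves the car is the kind of if/else and proportional-control code programmers have written for decades.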
Several years ago, IBM had a typical IBM PR announcement where they said their latest BlueGene computer simulated the brain of a cat. You might think that means they had a working neural net that was as smart as a cat brain. But what they had actually done was program the same number of simulated neurons and synapses as are in a cat's cortex, give them completely random weights and connections, and run it; and I think it still ran much slower than a cat's cortex would have. The point is, they still had ZERO idea of how to actually construct a working cat's brain.
As far as actual artificial intelligence goes, we aren't that much farther along today. For instance, we have no idea how the brain understands and processes language. Reasoning? Ha!
We are at the point in AI research where people are still trying to perfect and understand the basics of how a single processing unit works (neuron or other), while using that very imperfect knowledge to do some pretty simple things.
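For anyone curious, that "single processing unit" is remarkably simple. A minimal illustration (this is the generic textbook artificial neuron, not any particular production model): a weighted sum of inputs plus a bias, squashed through a nonlinearity.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation to squash it into (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Example: three inputs, three weights (arbitrary illustrative values).
out = neuron([1.0, 0.0, 1.0], [0.5, -0.3, 0.2], bias=-0.1)
```

Everything a NN "knows" is encoded in those weights, which is why IBM's random-weight cat cortex computed nothing cat-like.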
There are groups tackling the bigger question of how you assemble these NNs into something that resembles human intelligence. Numenta is one such research group, and their latest paper gives you an idea of where they are:
Numenta.com • A Theory of How Columns in the Neocortex Enable Learning the Structure of the World