Early feedback on Tesla's Summon technology is that it misses narrow objects. This is not an intelligent system; it is what is called an expert system. The key distinction is that although it will do more with more data, it cannot genuinely create new categories on its own. Say that in two years' time crashing drones become a problem on freeways. A learning system would not wait for programmers to push an update; it would respond to input and learn. The first time a car sees a drone fall onto the freeway and crashes into it (pretend the cars have impact sensors), it remembers what happened, classifies it as "Above-01," and knows to avoid it next time. Or, more simply: the first time a learning system sees a ball bounce into a road, it avoids the ball but creams the child chasing after it; the impact sensors teach it to come to a full stop next time instead of simply swerving around the ball.
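That impact-driven learning loop can be sketched in a few lines. To be clear, everything here is hypothetical: the class name, the sensor "signatures," and the "Above-01" labeling scheme are my own invention to illustrate the idea, not anything Tesla actually ships.

```python
# Hypothetical sketch of a learning system that classifies objects
# after an impact and avoids them on later sightings.

class ObstacleMemory:
    """Remembers object signatures that preceded an impact."""

    def __init__(self):
        self._known_hazards = {}  # signature -> label, e.g. "Above-01"
        self._label_count = 0

    def record_impact(self, signature, region):
        """Impact sensors fired: classify and remember this object."""
        if signature not in self._known_hazards:
            self._label_count += 1
            self._known_hazards[signature] = f"{region}-{self._label_count:02d}"
        return self._known_hazards[signature]

    def should_avoid(self, signature):
        """On later sightings, a remembered signature triggers avoidance."""
        return signature in self._known_hazards


memory = ObstacleMemory()

# First encounter: the car hits a falling drone and the sensors fire.
label = memory.record_impact("small-falling-object", "Above")
print(label)  # Above-01

# Second encounter: the same signature is now avoided, not hit again.
print(memory.should_avoid("small-falling-object"))  # True
print(memory.should_avoid("plastic-bag"))           # False
```

The point of the sketch is that no programmer ever defined "drone"; the category exists only because an impact created it.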
There are significant upsides and downsides to this technology. If you want to car-jack someone, just throw a ball into the road in front of the car; it has learned to stop and wait. Easy prey. I know the movies drew plenty of disdain, but the battle droids in Star Wars Episodes 1-3 were realistic: artificial intelligence is good only at what it was designed to handle, and cannot learn outside of that. Jedi are very, very confusing.
I still think the Tesla technology, Mobileye, and the rest are exciting developments, but what I haven't seen to date is good stereoscopic shape recognition, which is essentially what human eyes do. As drivers, we rarely process anything beyond vision and the occasional sound. The DARPA Grand Challenge of 2005 paved the way, but unfortunately today's engineers seem focused on sensor-heavy technology instead of true AI or better machine vision.
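For readers unfamiliar with why two cameras are enough, the core of stereoscopic depth is a one-line geometric relation: two lenses a known distance apart see the same object at slightly shifted image positions, and that shift (disparity) gives distance. A minimal sketch, with made-up camera numbers; the function name and rig parameters are my own assumptions, not any vendor's API:

```python
# Toy stereoscopic depth: classic pinhole relation Z = f * B / d,
# where f is focal length in pixels, B the camera baseline in meters,
# and d the disparity in pixels between the left and right images.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance in meters to a feature seen by both cameras."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# Assumed rig: 700-pixel focal length, cameras 0.5 m apart.
print(depth_from_disparity(700, 0.5, 10))  # 35.0 -> far-off object
print(depth_from_disparity(700, 0.5, 70))  # 5.0  -> close object
```

Note what this buys you that a single camera cannot: a narrow pole produces a tiny image, but its disparity still pins down its distance, which is exactly the "miss narrow objects" failure mode above.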