A common misconception that keeps popping up is that your car itself is going to learn - it doesn't. The learning is done on a big supercomputer running the "learning" algorithms; Nvidia's, for instance, lists at $130,000. Nobody really knows exactly what is going on, but it is likely that our cars are collecting information that is sent back to SkyNet to do the learning. Nvidia's demo, again just as an example, says their software requires a car to be driven through a particular environment 20 times in different conditions (light, weather) to learn how to navigate it.
Even though Tesla Vision is in-house and not Nvidia's system, I'd imagine that it is similar in its approach - observe drivers passing through a particular environment in multiple conditions and "learn" how to do it. Then that behavior is generalized to other locations whenever autosteer is engaged.
So a few things that leave me optimistic that the system is going to get better soon:
- Neural networks' error rates improve dramatically once they start "getting the hang of it" (see "10 Misconceptions about Neural Networks").
- They're getting a lot of data. AP does not need to be engaged to collect data. I imagine they're picking sections of road with multiple pass-throughs to feed into the system. The error can easily be calculated because they know what the driver actually did and what the network is producing as an output.
- Some people are having excellent results. I drive back and forth on a traffic-jammed highway every day and have had to disengage maybe twice a day on average, almost always for exit ramp following.
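The error signal described in that second bullet amounts to supervised learning against the driver's logged behavior: the network's output on a stretch of road can be scored against what the human actually did on the same frames. A minimal sketch of that idea (the function name, the use of steering angle, and the mean-squared-error choice are all my assumptions; Tesla's actual pipeline and loss function are not public):

```python
def steering_error(predicted_angles, driver_angles):
    """Mean squared error between the network's predicted steering angles
    and what the human driver actually did on the same frames.
    (Hypothetical illustration - not Tesla's actual loss function.)"""
    assert len(predicted_angles) == len(driver_angles)
    return sum((p - d) ** 2
               for p, d in zip(predicted_angles, driver_angles)) / len(predicted_angles)

# Made-up steering angles (radians) for three frames of one pass:
predicted = [0.10, -0.02, 0.05]   # what the network would have done
actual    = [0.12,  0.00, 0.05]   # what the driver actually did
print(steering_error(predicted, actual))  # small number = network mimics driver well
```

The point is just that no human labeling is needed: the driver's own actions are the "correct answers," which is why unengaged driving still produces useful training data.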
But, a complication is that learning how to drive isn't all they're trying to do. If you look carefully at the most recent Tesla video, there are no lane lines! At least none that I can see. If the system is really doing DNN learning, there is no instruction to "find lane lines, stay in them." The car just stays in the lane because that's what human drivers do. When I ignore the dancing lane lines on the display and just look at the path my car is following, it is going pretty straight. Look at the lines and yikes! So, what I'm getting at is that Tesla Vision seems to aim for some algorithmic logic that requires identifying things (lane lines, cars, people, street signs), and that is very much separate from what is needed to just drive down the road. An example is setting the speed based on a speed limit sign. I'm not sure a DNN could ever get enough data to connect the presence of a 25 MPH sign to the fact that the driver is going 29 mph or whatever. Same for AEB or side collision avoidance.
That being said, the communication at sales time about the capabilities of the system as it existed then (November 2016, when I ordered) was very poor. I had looked into it, but it took a few conversations to get them to admit that a car coming off the assembly line at that point didn't even have the basic safety features, such as AEB. And even more poking around to find out that it didn't even have auto headlights! I can't imagine that there aren't at least some really upset owners. The data was there in the materials, but the salespeople spoke with too much confidence about an imminent, feature-complete delivery of EAP.