Elon Musk has often said that the hardest problem is perception, and that once you solve it the rest is easy. I think most of us would agree that deciding what to do is actually the hard part, and it seems we've now hit that point even for Autopilot and NOA.

Here's a recent example (which I've also experienced): notice the display shows the lane line, yet the car swerves over it after bouncing off the right lane marking as the merge lane closes. Go to time 13:45.

Navigate on Autopilot also comes to mind. I think a lot of its shortcomings come down to really dumb driving algorithms:

1. The logic for making lane changes leading up to an exit appears to be as simple as: for every mile of distance to the exit, move one lane closer to the exit lane. Each lane change is triggered by this simple distance-based timer, completely ignoring the cars next to you, how heavy the traffic is, and so on. Decisions like slowing down or speeding up to pick a gap seem to be beyond it.

2. NOA relies on map data to a fault. If the map says the road has 3 lanes but the vision system only sees 2 (because of construction), the car still does dumb things because it believes there are 3 lanes.

3. In my mind, reliance on static map data downloaded once a month will never work well for navigation. They need to stream map data from the cloud and constantly update it with in-car vision and telemetry.

Why is Tesla's driving lagging so far behind their vision system now, and what can they do to fix it? Tesla recently implemented a deep NN called deeprain to make the wipers smarter; perhaps we need a deepLaneChange NN to teach the car to minimize bad lane change decisions.
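To make point 1 concrete, here's a minimal sketch of what that "one lane change per mile" timer rule might look like, next to a slightly smarter gap-aware version. This is purely my illustration of the behavior described above, not actual Tesla code; every function name and threshold here is invented.

```python
# Hypothetical sketch of the naive distance-based lane-change rule
# described in point 1 -- NOT actual Tesla code. All names and
# thresholds are invented for illustration.

def lanes_to_exit_lane(current_lane: int, exit_lane: int) -> int:
    """How many lane changes remain to reach the exit lane."""
    return abs(current_lane - exit_lane)

def naive_should_change_lane(miles_to_exit: float,
                             current_lane: int,
                             exit_lane: int) -> bool:
    """The 'one lane per mile' timer rule: if the remaining miles are
    down to the number of lane changes still needed, trigger a change,
    regardless of surrounding traffic."""
    return miles_to_exit <= lanes_to_exit_lane(current_lane, exit_lane)

def gap_aware_should_change_lane(miles_to_exit: float,
                                 current_lane: int,
                                 exit_lane: int,
                                 gap_ahead_m: float,
                                 gap_behind_m: float,
                                 min_gap_m: float = 30.0) -> bool:
    """A slightly smarter rule: only change when the timer fires AND
    the adjacent-lane gap is large enough; otherwise wait. (A real
    planner would also speed up or slow down to create a gap.)"""
    timer_fired = naive_should_change_lane(miles_to_exit,
                                           current_lane, exit_lane)
    gap_ok = gap_ahead_m >= min_gap_m and gap_behind_m >= min_gap_m
    return timer_fired and gap_ok

# 2 miles out, 2 lane changes still needed, but only a 10 m gap behind:
print(naive_should_change_lane(2.0, 3, 1))                   # -> True
print(gap_aware_should_change_lane(2.0, 3, 1, 50.0, 10.0))   # -> False
```

The point of the contrast: the naive rule fires on distance alone, which is exactly the "completely ignoring the cars next to you" behavior the post complains about.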
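And for points 2 and 3, a sketch of the obvious fix: plan against the more conservative of map and vision, and report any disagreement as telemetry so the map could be patched from the fleet instead of waiting for the monthly download. Again, all names here are my own invention, not Tesla's actual design.

```python
# Hypothetical sketch of points 2 and 3: prefer live vision over a
# stale map, and report the discrepancy for fleet map updates.
# All names are invented for illustration; not Tesla's actual design.

from dataclasses import dataclass
from typing import Optional

@dataclass
class RoadSegment:
    segment_id: str
    map_lanes: int      # from the monthly static map download
    vision_lanes: int   # what the camera pipeline currently sees

def usable_lanes(seg: RoadSegment) -> int:
    """Plan against the more conservative source: if vision sees fewer
    lanes than the map (e.g. a construction closure), trust vision."""
    return min(seg.map_lanes, seg.vision_lanes)

def telemetry_report(seg: RoadSegment) -> Optional[dict]:
    """If vision disagrees with the map, emit a report a fleet backend
    could aggregate to patch the map within hours, not a month."""
    if seg.vision_lanes != seg.map_lanes:
        return {"segment": seg.segment_id,
                "map_lanes": seg.map_lanes,
                "observed_lanes": seg.vision_lanes}
    return None

# Map says 3 lanes, but cones have closed one and vision sees only 2:
seg = RoadSegment("example_segment", map_lanes=3, vision_lanes=2)
print(usable_lanes(seg))       # -> 2
print(telemetry_report(seg))   # discrepancy report for the backend
```

The `min()` is deliberately conservative: a lane the map knows about but vision can't see is exactly the case where acting on the map does "dumb things."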