It is worth noting that, in theory, the neural net method can improve vastly. Elon already mentioned the idea of making better decisions around semis. Since they know where the car is, you can build in assumptions like "pass quickly" or "stay to one side." That part is not hard at all. I have a programming background... it's very possible, and I think they will get there. AP seems to be, "Stay alive long enough to see it get amazing."
Hmm. I'm confused. You're describing adding explicit rules to the system, but Tesla keeps talking about using a neural network, which is not rules-based. It's not clear to me what's actually meant when they say "training the neural net." There would seem to be two tasks: space detection and control.
With the cameras, there is clearly a CNN-type algorithm running to perform the surrounding space detection task. You can see from those pictures where the computer is identifying lane lines, other cars and their properties, etc. This is obviously an important task, and you can tell from the CAPTCHAs like "click on the boxes with crosswalks" that lots of people are working on it. But usually you use humans to tag those pictures - it's unclear to me how individual drivers are helping "train the neural network" for this task. (I believe this is what LIDAR really helps with, too - identifying objects in 3-D space.)
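To make the CNN part concrete: at the bottom of any perception network, a layer slides small filters over the image, and filters that match a feature (like a lane line's edge) produce strong activations. Here's a minimal sketch of that mechanism with a single hand-coded vertical-edge kernel - real networks learn thousands of filters from tagged data, and none of these values have anything to do with Tesla's actual stack.

```python
# Toy illustration of the convolution a perception CNN applies.
# One hand-coded vertical-edge filter stands in for the many learned
# filters of a real network; all values here are made up.

def conv2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in most DL libraries)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            s = sum(image[y + dy][x + dx] * kernel[dy][dx]
                    for dy in range(kh) for dx in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 5x5 "camera frame": bright vertical stripe (a lane line) on dark road.
frame = [
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
]

# Sobel-style kernel that responds strongly to vertical edges.
vertical_edge = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

activation = conv2d(frame, vertical_edge)
for row in activation:
    print(row)  # strong +/- responses flank the lane line's edges
```

The "training" step is what turns human-tagged images into kernels like this one automatically, which is why labeled data matters so much for the detection task.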
The next step is the control task. You've identified objects; now you have to make decisions. This is what good drivers are great at - look 12 seconds ahead, anticipate the actions of other cars, always leave an out, etc. I'd assume this is the really hard part. I assume Tesla would want to do this with some sort of NN-based architecture, probably using reinforcement learning. In other words, they'd probably want to avoid programming in explicit rules unless absolutely necessary (in static systems, rules can work OK, but in massive spaces where tons of decisions have to be made using probabilities, rules-based systems are difficult to scale). A really good RL-based system would essentially learn that passing semi trucks is dangerous and should be done quickly, rather than requiring Tesla to program in that explicit rule. But everyone in the industry knows that RL systems are really, really finicky and difficult to make work well even for simple tasks.
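To illustrate what "learning the rule instead of programming it" means, here's a tiny tabular Q-learning sketch. The environment, states, rewards, and hyperparameters are all invented for illustration (nothing here resembles a real driving stack): positions relative to a truck, where lingering alongside it is penalized. The agent is never told "pass quickly" - it discovers that accelerating past the truck's flank maximizes reward.

```python
import random

# Toy Q-learning sketch of the "pass semis quickly" idea.
# States are positions relative to a truck:
#   0 = behind, 1 = alongside (risky), 2 = just ahead, 3 = safely past (terminal).
# Everything here is a made-up illustration, not any real AP behavior.
RISKY = {1}
TERMINAL = 3
ACTIONS = ["cruise", "accelerate"]  # cruise: +1 position, accelerate: +2

def step(state, action):
    nxt = min(state + (2 if action == "accelerate" else 1), TERMINAL)
    reward = -5.0 if nxt in RISKY else 0.0  # penalize time spent alongside
    reward -= 0.1                           # small per-step cost
    return nxt, reward

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {s: {a: 0.0 for a in ACTIONS} for s in range(TERMINAL)}
    for _ in range(episodes):
        s = 0
        while s != TERMINAL:
            # epsilon-greedy: mostly exploit the best known action, sometimes explore
            a = rng.choice(ACTIONS) if rng.random() < eps else max(Q[s], key=Q[s].get)
            nxt, r = step(s, a)
            target = r + (0.0 if nxt == TERMINAL else gamma * max(Q[nxt].values()))
            Q[s][a] += alpha * (target - Q[s][a])
            s = nxt
    return Q

Q = train()
# Approaching the truck (state 0), accelerating past the risky zone
# ends up with a higher learned value than cruising through it.
print(Q[0])
```

The catch, as noted above, is that even this toy only works because the reward function was carefully shaped; in the real, high-dimensional driving problem, getting RL to converge on sane behavior is famously finicky.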
Right now AP looks pretty rules-based. I noticed, for instance, that the middle-of-the-lane seeking in Autosteer seems a lot like simple PID controller behavior (the explicit rule being: stay directly in the middle of the two lines), and likewise the NoA lane-change behavior people describe - something like "wait until there is X distance between the two cars in the adjacent lane before changing."
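For anyone unfamiliar with PID, the "stay in the middle" rule looks roughly like this: steer in proportion to the lateral offset from lane center, with integral and derivative terms to remove steady error and damp oscillation. The gains and the one-line car model below are illustrative guesses, not anything measured from an actual car.

```python
# Minimal PID lane-centering sketch. Gains and the toy "car" dynamics
# are illustrative assumptions only.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.05, kd=0.3)
offset = -0.5  # start half a meter left of lane center
dt = 0.1       # 10 Hz control loop

for _ in range(200):
    steer = pid.update(0.0 - offset, dt)  # error = target (center) minus offset
    offset += steer * dt                  # crude stand-in for real car dynamics

print(round(offset, 3))
```

Run it and the offset decays toward zero - the car hunts back to lane center, which matches the "seeking" feel people describe in Autosteer.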
I guess what I'm saying is, it's not clear to me what the strategy is for AP. They seem to be picking off individual tasks, which can be done with point solutions (maybe rules-based or using something more complex), but I'm not sure how those tasks relate to the overall goal of real self-driving.
(To be clear, I am not a data scientist working on autonomous driving - likely obvious to anyone technical reading this. I manage data scientists, but work in a completely different space. If there are any engineers here working on this task, I would love to know what is actually going on! Help please!)