Is anyone else worried, like me, that the rewrite will take months to bed in and debug? Presumably it can't be a full rewrite or they'd have to junk all the accumulated training by the neural networks. Why do people think the transition will be seamless?
In terms of ML (machine learning) models, training is always thrown away between new models. The way ML training works is that a load of input is fed to the system, it goes through what is effectively a very fancy random number generator, and finally there's an output. The output is then checked against what it should have been: for example, if you put a photo of a '3' into the model, did '3' come out? If it didn't, the numbers in the middle (the weights) are adjusted in a somewhat intelligent, yet also somewhat random, way until the model starts to consistently say '3'.
Once this process is complete, the model is 'trained' and gets deployed out in the field to categorise real things. Inevitably a different input (photo) will be encountered where a '3' is interpreted as a '4', and the model will need to be re-trained. That basically means starting from the beginning again, with some tweaks to both the training data (pre-defined inputs with known desired outputs) and to how the numbers in the middle are calculated and adjusted.
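To make that concrete, here's a toy version of that cycle in Python. Everything in it is illustrative: fake 'photos', a trivial linear model, made-up numbers. Real vision networks are vastly bigger, but the train/check/adjust loop and the throw-it-away-and-start-again retrain are the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(images, labels, lr=0.1, steps=500):
    """The loop described above: guess, check the answer, nudge the numbers."""
    weights = rng.normal(scale=0.01, size=(images.shape[1], 10))  # random start
    for _ in range(steps):
        scores = images @ weights
        probs = np.exp(scores - scores.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        # Did '3' come out when we put a '3' in? Push the weights toward yes.
        grad = probs
        grad[np.arange(len(labels)), labels] -= 1.0
        weights -= lr * (images.T @ grad) / len(labels)
    return weights

# Fake training set: 100 "photos" of 64 pixels each, labelled 0-9.
images = rng.random((100, 64))
labels = rng.integers(0, 10, size=100)
weights = train(images, labels)

# Later, the deployed model gets some photos wrong, so we re-train: the old
# weights are junked, the misclassified photos join the training data with
# their correct answers, the knobs (learning rate, steps) get tweaked, and
# the whole loop runs again from a fresh random start.
new_images = np.concatenate([images, rng.random((20, 64))])
new_labels = np.concatenate([labels, rng.integers(0, 10, size=20)])
weights = train(new_images, new_labels, lr=0.05, steps=1000)
```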
The problem with Tesla right now is that, due to hardware constraints (HW2/2.5), they can't process enough data and models to do it 'properly'. So the system looks at the whole scene in front of the car approximately every 9ms, almost entirely through the front main camera. It then makes all its judgement calls: whether there's a hazard in front, whether the car needs to steer left or right, accelerate, and so on. Then 9ms later it looks again, completely fresh, and decides again what to do.
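In sketch form (my own illustration, not Tesla's actual code, and every function here is a made-up stand-in), that's a stateless loop:

```python
import random
import time

FRAME_PERIOD = 0.009  # roughly 9 ms between looks at the world

def camera_frame():
    """Made-up stand-in for grabbing one image from the front main camera."""
    return {"obstacle_distance_m": random.uniform(5.0, 100.0)}

def decide(frame):
    """A fresh judgement call from a single frame, with no history at all."""
    if frame["obstacle_distance_m"] < 10.0:
        return "emergency brake"
    return "cruise"

for _ in range(5):               # would be an endless loop in the car
    frame = camera_frame()       # look at the whole scene...
    print(decide(frame))         # ...make every judgement call from scratch...
    time.sleep(FRAME_PERIOD)     # ...and 9 ms later, look again completely fresh
```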
Obviously, looking at frames individually means the system is both entirely reactive and not smooth at all. So what Tesla have done is write loads and loads of code on top of the decisions/categorisations to smooth them out: 'emergency brake' becomes 'slow down the car with vigor' and 'hard left' becomes 'turn the wheel to the left more'.
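Conceptually that smoothing layer is just a filter on the raw per-frame commands. A minimal sketch of the idea (mine, not theirs):

```python
def smooth(previous, raw, blend=0.2):
    """Move only part of the way toward the new raw command each frame."""
    return previous + blend * (raw - previous)

steering = 0.0                                      # wheel position, -1..1
raw_decisions = [0.0, 0.0, -1.0, -1.0, -0.9, 0.0]   # a sudden 'hard left'

for raw in raw_decisions:
    steering = smooth(steering, raw)
    print(f"raw: {raw:+.1f}  applied: {steering:+.2f}")
# 'hard left' becomes 'turn the wheel to the left a bit more' each frame.
```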
The next version, on the other hand, does things how they should have been done to start with. First, it looks at all the cameras, so it has a mostly 360-degree view. This means things won't suddenly appear or disappear in the field of view and cause panic, except when a camera's view is blocked by weather. Next, the model no longer looks at each frame fresh; it looks at the progression of things over time. This means if an object suddenly appears, it can see the movement of the object and whether it'll cross the car's path, is moving away, or is just fixed relative to the environment.
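A toy illustration of why the history matters (again mine, with made-up numbers and thresholds): keep the last few looks at an object and you can tell motion, not just presence.

```python
from collections import deque

history = deque(maxlen=8)  # the last 8 looks at the world, ~9 ms apart

def classify(history, dt=0.009):
    """Toy judgement from a track of lateral offsets (metres from our path)."""
    if len(history) < 2:
        return "unknown (just appeared)"
    speed = (history[-1] - history[0]) / (dt * (len(history) - 1))
    if abs(speed) < 0.5:
        return "fixed with the environment"
    return "crossing our path" if history[-1] * speed < 0 else "moving away"

# A pedestrian drifting from 2 m off to the side in towards our lane:
for offset in [2.0, 1.8, 1.6, 1.4, 1.2, 1.0]:
    history.append(offset)
    print(f"offset {offset:.1f} m -> {classify(history)}")
```

A single-frame system gets the first answer forever; the history turns the same detections into a trajectory.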
The other change is that Tesla's internal development is significantly more robust than it was with AP2. AP2 looks to me like it was rushed because of the falling out with Mobileye. AP3 has a proper test suite where recorded inputs (video/radar) are played back and the system is verified to do what it's meant to do.
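That kind of suite is essentially replay-based regression testing. A minimal sketch of the shape of it, with made-up recordings and a stand-in decide():

```python
def decide(frame):
    """Stand-in for the driving stack under test."""
    return "brake" if frame["obstacle_distance_m"] < 10.0 else "cruise"

RECORDED_RUNS = [
    # (recorded sensor frame, decision the system must make)
    ({"obstacle_distance_m": 5.0}, "brake"),
    ({"obstacle_distance_m": 50.0}, "cruise"),
]

def test_replay():
    for frame, expected in RECORDED_RUNS:
        assert decide(frame) == expected, f"regression on {frame}"

if __name__ == "__main__":
    test_replay()
    print("all recorded scenarios still pass")
```

Every fixed edge case becomes another recording in the list, which is why regressions should drop off over time.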
Now, the end result is that yes, there will sadly still be some issues. New stuff is always less predictable than old stuff. But going forward after the release you shouldn't get anywhere near the number of regressions you get now, and it should slowly improve as the edge cases are tested for explicitly. Of course, in my opinion they still won't really get there without stereoscopic cameras and lidar, but I'm not an expert. Also, not providing basic cruise and adaptive cruise is shocking: the software is buggy, so provide a basic fallback.