At the least, the people on the conference call should have asked... "Why are we taking a step back on functionality? What is the impact of removing Mobileye from the mix? How applicable is the 200 million miles of data collected so far to the new platform?"
Both you and Xpert know much more about this system and technology than I do, so please correct or ignore as appropriate.
I think Tesla has to start over from scratch, and the only really useful data from the previous AP archive is the mapping provided by GPS. Based on my limited understanding of Nvidia's DAVE-2 paper, it seems the local computer within the car, the Drive PX 2, has to learn cues from a much greater number of sensors: the cameras, the driver's responses to what the cameras see, the radar, and the ultrasonic sensors. That is much more data than the old system handled before the hardware upgrade. It's a new ball game.
The Drive PX 2 can tie situational experience to the driver's responses. Over a very short period of time it can accumulate enough experience for Tesla to massage, just as Nvidia did before testing the resulting program, and then let it drive a car. In the demo behind the DAVE-2 paper, about 100 hours of driving over varied terrain, including traffic, was enough to demonstrate the concept and successfully try it on new and challenging situations. Plus, Tesla's new cars have much more sophisticated sensors; Nvidia collected information from three cameras facing one direction, not surround vision.
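For anyone who wants to see what that looks like in code, here is roughly my mental model of the DAVE-2 approach: a network trained to map a camera frame straight to the steering angle the human driver actually used. This is only a sketch in Python/PyTorch based on my reading of the paper; the layer sizes follow the paper loosely, but the name SteeringNet and the stand-in data are my own inventions, not anything from Tesla or Nvidia.

```python
# Sketch of DAVE-2-style end-to-end learning (my reading of the paper, not Tesla's code).
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Maps one 3x66x200 camera frame to a single steering angle, DAVE-2 style."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),  # the predicted steering angle
        )

    def forward(self, x):
        return self.head(self.features(x))

# Stand-in data: in the real system these would be logged (frame, driver angle) pairs.
frames = torch.randn(32, 3, 66, 200)
angles = torch.randn(32, 1)

model = SteeringNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(10):  # real training runs over huge driving logs, not 10 steps
    loss = nn.functional.mse_loss(model(frames), angles)  # imitate the human driver
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point, as I understand it, is that nothing in there hand-codes lane markings or road edges; the net is simply made to imitate whatever the human did.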
I haven't the foggiest what the new AP software by Tesla will have to do. Maybe the only thing needed is correlating the sensory learning from each car back to a central computer and applying deep learning techniques to that data. Or they need to review the data collected by the newer-hardware vehicles in corner conditions where the AP screwed up but the driver did not. I think Tesla has confirmed that is what they are working on now: calibrating the system.
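As I understand "shadow mode," that corner-case review could be as simple as having the net predict silently while the human drives and flagging the moments where the two disagree. A toy sketch, again with made-up names and a made-up threshold:

```python
# Toy sketch of shadow-mode corner-case mining; the names and threshold are my guesses.
import torch

DISAGREE_THRESHOLD = 0.15  # steering disagreement (radians) worth flagging

def shadow_step(model, frame, driver_angle, upload_queue):
    """The net predicts silently while the human drives; disagreements get queued."""
    with torch.no_grad():
        predicted = model(frame.unsqueeze(0)).item()
    if abs(predicted - driver_angle) > DISAGREE_THRESHOLD:
        # The driver handled something the net would have gotten wrong --
        # exactly the corner case worth uploading for retraining.
        upload_queue.append((frame, driver_angle, predicted))
```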
Certainly two months of MS/MX cars producing data is much more than Nvidia needed for its demonstration, so the inability to fold in data gathered before is not a huge problem, imo.
There's a lot of speculation here but youse guys and perhaps others can set me straight.
1) Are they really gathering shadow data from all vehicles produced from this point forward? Tesla has said so.
2) Is it true that the only useful data from previous iterations of AP is the GPS input?
3) Can they use the new radar experience, or do they even need to, if the car is doing deep learning by itself as assumed above? Maybe all that is needed is catching the echoes of the radar as it does now; processing their meaning may be learned by the Drive PX 2 (see the sketch after this list).
4) Do they have to control things afterward to deal with speed limits, right-of-way rules, etc.? Or can each car learn this too?
5) What about geographic differences? Is it necessary to intervene in the learning process to ensure that right-hand-drive countries follow different rules?
6) Of course there will also be limits placed by regulating bodies.
7) Et cetera, et cetera, et cetera, as a king of Siam was made to say.
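On question 3, here is how I imagine raw radar echoes could be fed into the same sort of network so that their meaning is learned rather than hand-coded. Pure speculation on my part; the fusion layout and every name in it (FusionNet, radar_encoder) are my assumptions:

```python
# Speculative sketch of radar fusion; FusionNet and every name here are my assumptions.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Concatenate learned image features with raw radar returns, then predict steering."""
    def __init__(self, image_backbone, n_image_feats, n_radar=64):
        super().__init__()
        self.image_backbone = image_backbone  # e.g. SteeringNet.features from the earlier sketch
        self.radar_encoder = nn.Sequential(   # the net itself learns what the echoes mean
            nn.Linear(n_radar, 32), nn.ReLU(),
        )
        self.head = nn.Linear(n_image_feats + 32, 1)

    def forward(self, frame, radar_returns):
        img = self.image_backbone(frame).flatten(1)
        rad = self.radar_encoder(radar_returns)
        return self.head(torch.cat([img, rad], dim=1))
```

If that is anywhere near right, the answer to my own question would be that catching the echoes is enough and the interpretation gets learned.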
Edit: Apologies to Xpert: I didn't see your most recent post. Also, I may be completely wrong about the capabilities of the Drive PX 2, which you can probably correct. If so, then sorry for the noise here.