Over the past weeks, I've learned more and more about the concept of AP 2.0 and how the 'AI' in the car will try to teach itself to drive based on the car's sensor inputs. The NVIDIA system, based on DAVE-2, was able to successfully steer the car after a period of trial and error: How Our Deep Learning Tech Taught a Car to Drive | NVIDIA Blog

The dataset created works for that setup only. If this dataset were transferred to a different setup with a slightly different camera, steering behaviour, etc., the system would have to recalibrate/relearn before it works again. It's like saying: my brain has been optimised to work with my body's characteristics. If my brain were transplanted into a different body with bad eyes or weaker muscles, I would have trouble controlling everything, and it would take a learning process to adapt before desired control of the body is possible.

Now to Tesla and the AP2.0 setup. I assume every setup differs slightly among the cars they are building:

- camera setup and mounting angles
- steering behaviour (wheel size?)
- deterioration after a certain period of use, so the car behaves differently over time (caused by dirty sensors, mechanical wear, ...)

Is it possible that each car is going to develop a unique dataset to successfully autosteer, and that these datasets are difficult to interchange with other cars? Or does the learning system take a different approach?
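To make the "brain transplant" idea concrete, here is a minimal toy sketch of the calibration problem. It is not Tesla's or NVIDIA's actual pipeline; the linear "steering model", the simulated lane positions, and the 0.3-unit camera mounting offset are all made-up illustrations. A model fitted on car A works poorly on car B (whose camera is mounted slightly off-centre) until it is refitted on car B's own data:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, camera_offset):
    # True lane position of the car, and what the camera reports:
    # a camera mounted off-centre shifts every measurement.
    lane = rng.uniform(-1.0, 1.0, n)
    x = lane + camera_offset          # camera reading (calibration-dependent)
    y = -0.5 * lane                   # ideal steering correction (toy rule)
    return x, y

def fit(x, y):
    # Least-squares fit of steering = w*x + b ("learning" for this toy model)
    A = np.stack([x, np.ones_like(x)], axis=1)
    w, b = np.linalg.lstsq(A, y, rcond=None)[0]
    return w, b

def mse(w, b, x, y):
    return float(np.mean((w * x + b - y) ** 2))

# "Car A": camera mounted dead-centre; train the model there.
xa, ya = make_data(1000, camera_offset=0.0)
w, b = fit(xa, ya)

# "Car B": same model, but camera mounted 0.3 units off-centre.
xb, yb = make_data(1000, camera_offset=0.3)
err_transferred = mse(w, b, xb, yb)   # systematic steering error

# "Relearning" on car B's own data removes the error.
w2, b2 = fit(xb, yb)
err_retrained = mse(w2, b2, xb, yb)
```

The transferred model steers with a constant bias (it always thinks the car sits 0.3 units further to one side than it does), which is exactly the kind of per-car difference the question is about: the knowledge transfers, but the calibration does not.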