Some posts indicate that the newly introduced Autopilot "1.0" is using crowd data to improve itself. I'll admit that many of the articles discussing this topic reference the same posts at teslamotorsclub, and repeating the story doesn't make it more probable. I'm sure the owners reporting the seemingly self-learning behaviour feel that their car is "definitely" getting better at driving itself - but can that really be the case? I've been working in the AI field, and I'd like to briefly go through some of the facts in the case of the Autopilot and similar systems.

Fundamentally, learning requires a feedback mechanism to determine whether a given behaviour should be changed. In other words, just driving around doesn't teach the car much. In the case of the Tesla Autopilot, the clearest trigger for corrective action is when the Autopilot demands that the driver take over. In that case, the moments before and after contain crucial data for learning via the sensor suite. Among the questions to be asked are:

1) What happened before the incident? For instance, did the car make a manoeuvre, thanks to the Autopilot, that was not feasible?
2) What were the conditions on the road, the quality of signs etc.?
3) What did the driver do to correct the error? For instance, braking, accelerating, lane shifting, turning etc.?
4) What was the outcome moments after the correction? An accident, continued driving etc.?

In addition to this, camera and sensor data along with speed and GPS data - and not to forget local driving regulations - are essential to understand what really happened and what the expected and wanted behaviour was. The above feedback data must then be sent to Tesla and organised so that similar situations from other drivers can be identified in the data pool. This is no trivial task, as the individual situations must be exactly recognized and categorised. Now the data must be analyzed. Essentially, this requires manpower, but it could in some cases be done automatically.
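To make the idea concrete, here is a minimal, purely illustrative sketch of what such a takeover-event record and the grouping of similar situations could look like. All field names and category keys are my own invention, not anything Tesla has published:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class TakeoverIncident:
    """One Autopilot-disengagement event (hypothetical schema)."""
    speed_kmh: float      # vehicle speed at the moment of takeover
    gps: tuple            # (latitude, longitude)
    road_condition: str   # e.g. "wet", "faded_lane_markings"
    driver_action: str    # e.g. "braking", "lane_shift", "turning"
    outcome: str          # e.g. "continued_driving", "accident"

def categorise(incidents):
    """Group incidents by a coarse situation signature so that similar
    events reported by many drivers end up in the same bucket."""
    buckets = defaultdict(list)
    for inc in incidents:
        key = (inc.road_condition, inc.driver_action, inc.outcome)
        buckets[key].append(inc)
    return buckets

fleet = [
    TakeoverIncident(95.0, (55.7, 12.6), "faded_lane_markings", "lane_shift", "continued_driving"),
    TakeoverIncident(88.0, (55.8, 12.5), "faded_lane_markings", "lane_shift", "continued_driving"),
    TakeoverIncident(60.0, (55.6, 12.4), "wet", "braking", "continued_driving"),
]

grouped = categorise(fleet)
# The first two incidents share one signature; the third stands alone.
```

In practice the "signature" would of course be far richer (raw sensor streams, map context, regulations), which is exactly why the recognition and categorisation step is the hard part.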
The analysis can have several outcomes; let's go through the most obvious:

1) The situation cannot be clearly understood, perhaps because of a lack of sufficient sensor data.
2) The situation is too difficult to handle with the existing sensor/programming ambition.
3) The situation can be handled (better) with updated programming or data input.

As for the latter, new or updated software is necessary to correct the behaviour of the Autopilot - not just "sharing" a lot of data amongst the Autopilot-enabled Teslas around the world. To be useful, the changes must of course be uploaded to the individual cars, which can be done through the regular software updates with the user's consent. And this is important, as the user will then be informed of significant changes to the Autopilot. If Tesla just updated the cars (even with minor parameter adjustments) without informing the owners properly, nobody would be sure how the Autopilot would react to any situation on a day-to-day basis - and that could be a dangerous road to drive, so to speak.

In summary, people using the Autopilot and sharing the data with Tesla give the company an invaluable source for making the Autopilot still better within the limitations of sensors and computing power. However, this can only be done by Tesla after extensive processing of the received driving data - it is not just happening by sharing data among the cars themselves. In addition, any changes to the car's (software) behaviour should, to the best of my knowledge, only be made in connection with the regularly announced software updates, with the driver's consent.
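The three analysis outcomes above amount to a simple triage step. A toy sketch, again with entirely made-up flags standing in for the real (and far more complex) analysis results:

```python
# Hypothetical triage of an analysed incident into the three outcomes above.
INSUFFICIENT_DATA = "1_cannot_be_understood"
OUT_OF_SCOPE = "2_beyond_current_ambition"
FIXABLE = "3_fixable_with_software_update"

def triage(incident):
    """Classify one analysed incident (a dict with invented keys)."""
    if not incident.get("sensor_data_complete", False):
        return INSUFFICIENT_DATA
    if incident.get("requires_new_hardware", False):
        return OUT_OF_SCOPE
    return FIXABLE

cases = [
    {"sensor_data_complete": False},                                  # outcome 1
    {"sensor_data_complete": True, "requires_new_hardware": True},    # outcome 2
    {"sensor_data_complete": True, "requires_new_hardware": False},   # outcome 3
]
labels = [triage(c) for c in cases]
```

Only incidents landing in the third bucket can lead to an actual behaviour change, and even then only via a software update pushed to the cars.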