
Is my Model 3 Autopilot learning?

My Tesla seems to have gotten better, then worse, at a specific section of road near my house.

There is a residential road with clear road markings that stop at a T-junction. At the T-junction the road turns away from the side road.

At first the Tesla would want to drive right toward a light pole (i.e., continue straight) at that junction. Then for a while it would make the turn correctly. Now it's back to aiming at the pole.

Meh.
 
It's possible the car receives (silent) data updates between software updates.


No, it's not.

The entire blob has a hash for security reasons; you can't change "part" of it.

Green covered that fairly recently, too:

Greentheonly said:
they must update entire firmware at least on the autopilot. Can't change a single file - breaks dm-verity. Can't overlay and have it survive a reboot (no dev overlay hooks in prod firmwares)
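The "can't change a single file" point can be illustrated with a toy sketch. Real dm-verity hashes the image block-by-block into a Merkle tree whose root is signed, but the effect is the same as a single digest over the whole blob: flip one byte anywhere and verification fails. The firmware bytes here are obviously made up.

```python
import hashlib

def image_digest(blob: bytes) -> str:
    """Digest over the entire image, as a stand-in for the signed
    root hash that dm-verity checks. (Real dm-verity uses a tree of
    per-block hashes, but any change still invalidates the root.)"""
    return hashlib.sha256(blob).hexdigest()

# Hypothetical firmware image, just repeated filler bytes.
firmware = bytes(range(256)) * 1000

original = image_digest(firmware)

# Flip a single byte anywhere in the image...
tampered = bytearray(firmware)
tampered[12345] ^= 0x01
modified = image_digest(bytes(tampered))

# ...and the digest no longer matches, so the image is rejected.
assert original != modified
```

This is why overlaying one changed file can't survive: the signed hash covers everything, so any partial change breaks verification at boot.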



The only time behavior could change without an explicit firmware update from Tesla (one you know about, approve for install, and that changes the listed version) is when they push a map update, which you can also see in the versioning.

Updated maps can change behavior because of better info in the maps (i.e., corrected speed limits, road types, lane data, etc.), not because your car "learned" anything.
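A toy sketch of why a map update alone changes behavior: the "planning" code below never changes, but feeding it an updated map record produces a different output. The field names and numbers are invented for illustration, not Tesla's actual map schema.

```python
# Hypothetical map records for the same road segment, before and
# after a map update. Field names are made up for illustration.
old_map = {"speed_limit_mph": 45, "road_type": "highway"}
new_map = {"speed_limit_mph": 30, "road_type": "residential"}

def target_speed(map_segment: dict) -> int:
    """Unchanged 'planning' code: pick a cruise speed from map data."""
    limit = map_segment["speed_limit_mph"]
    # Toy rule: leave extra margin on residential roads.
    margin = 5 if map_segment["road_type"] == "residential" else 0
    return limit - margin

print(target_speed(old_map))  # 45
print(target_speed(new_map))  # 25
```

Same software, different data in, different behavior out: the car didn't learn anything.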



That said, there are myriad reasons "how the car handled X" can change WITHOUT ANY CHANGE to the software.

For example the amount/direction of light can change how confident the system is in what it sees.

So can the number of other vehicles.

Or the weather.

Or the angle of approach to something.

Or how dirty a camera might've been.

Or your speed (we saw a GREAT example of this when a guy posted video claiming FSD beta had "learned" how to handle a U-turn, but when you actually look closely you notice the "failures" were at a 15-20 mph higher approach speed than the later "success").

Or a slew of other environmental differences.
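The approach-speed point is just physics. A quick sketch with the standard stopping-distance formula d = v²/(2a); the deceleration figure and the distance to the turn are made-up round numbers, not anything measured from a Tesla.

```python
def braking_distance_m(speed_mph: float, decel_ms2: float = 7.0) -> float:
    """Stopping distance d = v^2 / (2a) under constant deceleration.
    7 m/s^2 is a rough hard-braking figure, used only for illustration."""
    v = speed_mph * 0.44704  # mph -> m/s
    return v * v / (2 * decel_ms2)

# Same car, same software, same corner -- only approach speed differs.
available = 15.0  # hypothetical distance left to slow for the turn, meters

for mph in (20, 35):
    d = braking_distance_m(mph)
    verdict = "makes the turn" if d <= available else "overshoots"
    print(f"{mph} mph: needs {d:.1f} m -> {verdict}")
```

At the lower speed the maneuver succeeds; 15 mph faster and the identical system runs out of room. No learning required to explain the difference.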

Humans will often not notice (or care) about minor differences in conditions and just assume OMG MY CAR LEARNED!

No. It didn't.
 
If it's a mapping-related issue, it's possible for AP to "learn" it or correct itself in the future. I'm fairly confident Tesla will be rolling out its own cloud maps soon. I'm not sure if something like this is currently deployed (unlikely).
 
I assure everyone here that there is 100% no way the networks are being updated dynamically, on-the-fly at each individual car. I work in this field (computer vision, object detection and ML) and there are many reasons for this.

1. The cars simply do not have the hardware to train the networks involved for AP/FSD. Compute requirements for inference are far less than what is required for training the various large networks. Training occurs on large clusters of GPUs in data centers, and you cannot simply swap training over to "edge" compute devices that are built for model inference.
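Back-of-the-envelope arithmetic makes point 1 concrete: inference needs only the weights, while training also needs gradients, optimizer state (e.g., two moment buffers for an Adam-style optimizer), and stored activations for backprop. The parameter count and activation size below are invented round numbers, not Tesla's actual networks.

```python
def memory_gb(params: float, training: bool, batch_activations: float = 0.0) -> float:
    """Rough memory footprint in GB at fp32 (4 bytes per value).

    Inference: weights only.
    Training (Adam-style): weights + gradients + two optimizer
    moment buffers, plus activations stored for backprop.
    """
    weights = params * 4
    if not training:
        return weights / 1e9
    grads = params * 4
    optimizer_state = params * 8  # two fp32 moments per parameter
    return (weights + grads + optimizer_state + batch_activations * 4) / 1e9

p = 100e6  # hypothetical 100M-parameter vision network
print(f"inference: {memory_gb(p, training=False):.1f} GB")
print(f"training:  {memory_gb(p, training=True, batch_activations=2e9):.1f} GB")
```

And this is only memory; training also needs roughly 3x the FLOPs per sample (forward plus backward passes), repeated over many epochs of a huge dataset. Edge inference chips are simply not built for that workload.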

2. Tesla trains their networks with ridiculous amounts of data. Given the way network training works at the moment, you can't just update models with only a few samples of new data without also presenting all that old data to the network. If you try, the network will quickly "forget" what it learned from all the old and varied data and will simply memorize the new data without generalizing well to new situations (this is known as catastrophic forgetting). There is no way cars can even store all that data for training their models, let alone utilize it for training.
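The forgetting effect in point 2 shows up even in the smallest possible "network": fit a one-parameter model on old data, then fine-tune it on conflicting new data only, and performance on the old data collapses. A toy sketch with plain SGD and made-up data:

```python
def sgd_fit(w, data, lr=0.05, epochs=200):
    """Fit y = w * x by stochastic gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            w -= lr * 2 * (pred - y) * x  # gradient of (pred - y)^2 wrt w
    return w

task_a = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]   # "old" data: y = 2x
task_b = [(x, -2.0 * x) for x in (1.0, 2.0, 3.0)]  # "new" data: y = -2x

w = sgd_fit(0.0, task_a)             # learns w close to 2
err_a_before = abs(w * 1.0 - 2.0)    # near-zero error on old task

w = sgd_fit(w, task_b)               # fine-tune on the new data ONLY
err_a_after = abs(w * 1.0 - 2.0)     # old task now badly wrong: w near -2

print(err_a_before, err_a_after)
```

A real network has millions of parameters instead of one, but the failure mode is the same: without replaying the old data during training, the new data simply overwrites it.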

3. There is a huge liability and risk component to trying to let each car update its models by itself in an unsupervised manner, even if they could somehow train models on-the-fly (which they can't). Your entire fleet of Teslas would rapidly develop divergent behavior, and it would be a nightmare to manage.

4. Andrej Karpathy (who I respect immensely) has a few talks where he goes into detail about how they do ML at their scale and how they have huge sets of "unit tests" for their AP/FSD models that they test all newly trained models against to make sure performance improves with no regressions on existing test scenarios. Over time they keep building this out to have a larger and more diverse evaluation set to monitor the evolution of metrics for their models as they are trained and improved.
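The "unit tests for models" idea in point 4 can be sketched as a simple regression gate: score a candidate model on a fixed suite of scenarios and reject it if any scenario gets worse than the production baseline. The scenario names and scores here are invented, not Tesla's actual evaluation sets.

```python
# Per-scenario scores for the production model and a newly trained
# candidate (made-up numbers for illustration).
baseline  = {"highway_merge": 0.97, "t_junction": 0.91, "roundabout": 0.88}
candidate = {"highway_merge": 0.98, "t_junction": 0.89, "roundabout": 0.93}

def regressions(baseline: dict, candidate: dict, tolerance: float = 0.0) -> list:
    """Scenarios where the candidate scores worse than production."""
    return [name for name, score in baseline.items()
            if candidate[name] < score - tolerance]

failed = regressions(baseline, candidate)
print(failed)  # ['t_junction'] -> candidate rejected despite other gains
```

Note this kind of gate only works because training and evaluation happen centrally; a car improvising its own weights would bypass it entirely, which is point 3.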

5. In the same talk I mentioned above, Andrej talks about the extremely impressive software infrastructure they have built up at Tesla so each car in the fleet can easily provide useful new instances of training data to their central servers where they can be QC'd by humans, labeled and then rolled into training of new models (on their server farms) to further improve their models.
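A rough sketch of the fleet-side half of that pipeline in point 5: cars don't train anything, they just flag interesting moments (e.g., the driver disagreed with the planner, or the network was unsure) and upload those clips for human review and central retraining. All field names and thresholds below are invented, not Tesla's actual telemetry.

```python
# Hypothetical per-frame fleet telemetry. Field names are made up.
frames = [
    {"planner_action": "turn_left", "driver_action": "turn_left", "confidence": 0.95},
    {"planner_action": "straight",  "driver_action": "turn_left", "confidence": 0.90},
    {"planner_action": "straight",  "driver_action": "straight",  "confidence": 0.40},
]

def should_upload(frame: dict, min_confidence: float = 0.6) -> bool:
    """Flag a clip when the driver disagrees with the planner, or the
    network was unsure -- the cases worth labeling and training on."""
    return (frame["planner_action"] != frame["driver_action"]
            or frame["confidence"] < min_confidence)

uploads = [f for f in frames if should_upload(f)]
print(len(uploads))  # 2 of 3 frames selected for review and labeling
```

The car's contribution ends at data collection; the actual learning happens later, on the server farms, followed by an ordinary fleet-wide firmware update.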

I could probably list a few more things, but as someone who has been doing this stuff for a living for a while now, I assure you that the cars (and the field of ML) at this point in time cannot support on-the-fly learning at each car.