Welcome to Tesla Motors Club

How does fleet learning work?

I would like to understand better how the self-driving features work. I know there is a deep neural network whose structure is generally described in the available literature, comprising something like 37,000,000 neurons. And I know there is a high-resolution map that is being continuously improved by all the Teslas running AP2 hardware. But it's hard to tell how these pieces fit together.

Here are some questions for discussion:

1) How are the training weights in the neural net updated? Is that all done back at Tesla, or is there any training done in each car as it is driven? Assuming the training is being done centrally, is the neural net updated only when a new software version is installed, or is it done more often? Does every version update deliver a later version of the neural net?

2) What is the high resolution map like? Presumably it has precise location information for permanent features like road edges, signs, maybe telephone poles, etc. But are there other kinds of information as well? I've read stories from people who say that after driving a particular road a few times, the autopilot does a lot better on that road. Why? Presumably more information about the road itself, but is there also information about driver response? Will another Tesla that drives the same road a week later do better because of the mapping done by the first one?

3) Is the mapping and training information being captured by the fleet using all of the cameras? If so, does that mean that Tesla may have different nets they are training that are using data from different subsets of the sensors?

4) Does anyone know anything about plans to start using more of the sensors?
 
1) How are the training weights in the neural net updated? Is that all done back at Tesla, or is there any training done in each car as it is driven? Assuming the training is being done centrally, is the neural net updated only when a new software version is installed...

Cars are not my specialty, nor is autonomous technology, but this is what I understand:

Your car has all the sensors and hardware for artificial intelligence. It is capable of learning and reacting locally, then transmitting its knowledge and experience to the fleet network so that the rest of the fleet doesn't have to start from scratch and learn the very same location/scenario.

However, since this is all still new, Tesla headquarters needs to validate that all the learning and the simulated or real responses are appropriate before sharing them with the rest of the fleet via a firmware update.
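To make that loop concrete, here is a purely illustrative sketch (not Tesla's actual code; every class and method name is invented) of the shape of fleet learning as described above: data flows from the cars to a central server, training and validation happen centrally, and the new network only reaches the fleet bundled into a firmware update.

```python
# Hypothetical sketch of centralized fleet learning. All names here
# (Car, Server, drive, retrain_and_validate, ...) are made up for
# illustration; only the data flow reflects the description above.

class Car:
    def __init__(self, car_id):
        self.car_id = car_id
        self.weights_version = 0   # network version currently installed
        self.snapshots = []        # logged "interesting" driving events

    def drive(self):
        # Log a case where the driver corrected Autopilot; in reality
        # this would be sensor frames plus the driver's action.
        self.snapshots.append({"car": self.car_id,
                               "event": "driver_correction"})

    def install_firmware(self, weights_version):
        # A new network arrives only with a firmware update.
        self.weights_version = weights_version


class Server:
    def __init__(self):
        self.dataset = []
        self.released_version = 0

    def ingest(self, snapshots):
        self.dataset.extend(snapshots)

    def retrain_and_validate(self):
        # Central training step; release a new version only if the
        # validation check passes (a trivial stand-in check here).
        if self.dataset:
            self.released_version += 1
        return self.released_version


def fleet_learning_cycle(cars, server):
    # 1) Cars upload their logged snapshots to headquarters.
    for car in cars:
        car.drive()
        server.ingest(car.snapshots)
        car.snapshots = []
    # 2) Training is done centrally, not in each car.
    new_version = server.retrain_and_validate()
    # 3) The validated network ships to the whole fleet at once.
    for car in cars:
        car.install_firmware(new_version)
```

In this picture, no individual car ever trains its own network; each car only contributes data and receives the centrally validated result.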


2) What is the high resolution map like?...

One example use of the high-resolution map is a heavy construction zone where the lanes are confusing: different surface finishes, different lane markings, visible old lane markings, visible old scars...

By accessing the fleet's knowledge, your car would see that the legacy map has everyone straddling the long-established lanes, but it would know the corrected, updated paths, because the temporarily relocated lanes are marked in the fleet's current high-resolution map.
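The decision above can be sketched as a simple rule: prefer the fleet's high-resolution map over the legacy map when the fleet entry is more recent. This is a hypothetical illustration; the map names, segment IDs, and fields are all invented.

```python
# Hypothetical lane-geometry lookup: a construction zone where the
# fleet's HD map has a newer entry (shifted temporary lanes) than the
# legacy map's long-established lanes. All data here is invented.

LEGACY_MAP = {"segment_42": {"lanes": "established", "updated": 2015}}
HD_FLEET_MAP = {"segment_42": {"lanes": "temporary_shifted", "updated": 2017}}

def choose_lane_geometry(segment_id):
    legacy = LEGACY_MAP.get(segment_id)
    hd = HD_FLEET_MAP.get(segment_id)
    # Trust the fleet map when it exists and is at least as recent:
    # other Teslas have already driven the shifted lanes.
    if hd and (legacy is None or hd["updated"] >= legacy["updated"]):
        return hd["lanes"]
    return legacy["lanes"] if legacy else None
```

So a car entering that segment would follow the temporarily shifted lanes from the fleet map rather than the stale lanes the legacy map would suggest.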

I've read stories from people who say that after driving a particular road a few times, the autopilot does a lot better on that road. Why? Presumably more information about the road itself, but is there also information about driver response? Will another Tesla that drives the same road a week later do better because of the mapping done by the first one?

That's an example of AI and fleet learning. The car can learn how the driver behaves, correct its program on its own, and share that with the rest of the fleet so that another car does not have to repeat the learning from scratch at that location/scenario.

3) Is the mapping and training information being captured by the fleet using all of the cameras? If so, does that mean that Tesla may have different nets they are training that are using data from different subsets of the sensors?

I have no idea, but I don't see why Tesla couldn't gather data from all the cameras, ultrasonic sensors, and radar, unless there's a bandwidth limit or it can't cope with the overwhelming amount of data.

4) Does anyone know anything about plans to start using more of the sensors?

Tesla only hints at using cameras, ultrasonic sensors, and radar. When asked about using others such as LIDAR, it calls them "unnecessary."

Tesla has always evolved and has been willing to adopt better technology, so I won't be surprised if Tesla changes or adds sensors/hardware when the time comes.