
Fleet learning with AP 2.0: changing brains


Cobbler

Paranoid T.E.S.L.A Bull
Sep 22, 2015
730
10,109
België
Over the past weeks, I've learned more and more about the concept of AP 2.0 and how the 'AI' in the car will try to teach itself to drive based on the car's sensor inputs.

The NVIDIA system, based on DAVE-2, was able to successfully steer a car after a period of training on recorded human driving:
How Our Deep Learning Tech Taught a Car to Drive | NVIDIA Blog
The trained model works for that exact setup only. If it were transferred to a setup with a slightly different camera, steering behaviour, etc., the system would have to recalibrate/relearn before it worked again.
It's like saying: my brain has been optimised to work with my body's characteristics. If my brain were transplanted into a different body with worse eyes or weaker muscles, I would struggle to control everything, and it would take a learning process before I had proper control of the new body.
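
For context, the "end-to-end" approach NVIDIA describes maps a camera frame directly to a steering command, learned by imitating recorded human driving rather than through hand-coded lane detection. A minimal sketch in PyTorch (the class name and exact layer sizes are my simplification of the published DAVE-2/PilotNet description, not NVIDIA's actual code):

```python
import torch
import torch.nn as nn

# Simplified end-to-end steering network in the spirit of DAVE-2/PilotNet:
# a stack of convolutions reads the camera frame, and fully connected
# layers regress a single steering angle. No explicit lane detection.
class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),  # input size inferred at first call
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),               # predicted steering angle
        )

    def forward(self, frame):               # frame: (N, 3, H, W)
        return self.head(self.features(frame))

model = SteeringNet()
frame = torch.randn(1, 3, 66, 200)          # DAVE-2 used 66x200 pixel crops
print(model(frame).shape)                   # torch.Size([1, 1])
```

Training pairs each recorded frame with the steering angle the human driver actually applied, so everything the network learns is tied to that particular camera and vehicle geometry, which is exactly the portability problem above.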

Now to Tesla and the AP 2.0 setup.
I assume every car they build differs slightly in:
- camera placement and mounting angles
- steering behaviour (wheel size?)
- deterioration after a certain period of use, so that the car behaves differently after a while (dirty sensors, mechanical wear, ...)

Is it possible that each car will develop a unique dataset to autosteer successfully, and that these datasets will be difficult to interchange between cars?
Or does the learning system take a different approach?
 
Most systems that work with neural networks have a camera-calibration step which produces a few values used to normalize the input to a standard, which lets the same model run on every vehicle (in a given region of the world). The calibration values might be unique per car, but nothing else has to be.
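
As a rough sketch of what that can look like (all values and names here are illustrative assumptions, not Tesla's actual pipeline), the per-car part reduces to a handful of calibration numbers, while the model stays shared:

```python
import numpy as np
import cv2

# Hypothetical per-car calibration, measured once per vehicle
# (e.g. at the factory or during an initial calibration drive).
K_car = np.array([[980.0,   0.0, 642.0],    # this car's camera matrix
                  [  0.0, 978.0, 356.0],
                  [  0.0,   0.0,   1.0]])
dist_car = np.array([-0.32, 0.11, 0.0, 0.0, 0.0])  # this car's lens distortion

# Canonical camera that every car's frames are warped into, so one
# shared network sees the same geometry despite mounting tolerances.
K_canonical = np.array([[1000.0,    0.0, 640.0],
                        [   0.0, 1000.0, 360.0],
                        [   0.0,    0.0,   1.0]])

def normalize_frame(frame: np.ndarray) -> np.ndarray:
    """Undistort and reproject a raw frame into the canonical camera."""
    return cv2.undistort(frame, K_car, dist_car, newCameraMatrix=K_canonical)

# raw = capture_frame()                          # raw image from this camera
# steering = shared_model(normalize_frame(raw))  # one model for the fleet
```

Only K_car and dist_car differ from car to car; the network weights can be identical across the fleet.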
 
If my brain were transplanted into a different body with worse eyes or weaker muscles, I would struggle to control everything, and it would take a learning process before I had proper control of the new body.

Ah, what you need is an Eymorg Controller. Your brain won't even notice the difference.
