Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Phil Duan answered a CVPR23 E2EAD workshop question about FSD 12's end-to-end driving by pointing out that the Occupancy Network has already been converted into a foundation/world model, but its application isn't ready to show yet.


He also mentioned a few times the tremendous effort of scaling models up and then optimizing them down to fit the vehicle's compute. Presumably these larger foundation models are already running on Tesla's supercomputers, where they can produce video demos without needing real-time inference. The larger models are also useful for other offline work, such as improved autolabeling.
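"Scaling up then optimizing down" usually means training a large teacher model and distilling it into a smaller student that fits the in-car compute budget. A minimal NumPy sketch of Hinton-style soft-target distillation; the temperature and logits are illustrative, not anything Tesla has disclosed:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T exposes more of the
    teacher's relative confidence in near-miss classes."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

t = [2.0, 0.5, -1.0]
print(distillation_loss(t, t))                    # 0.0: student matches teacher
print(distillation_loss(t, [0.0, 2.0, 0.0]) > 0)  # True: mismatch is penalized
```

The student is then trained to minimize this loss (usually mixed with a hard-label loss), which transfers the big model's behavior into a network small enough to run in real time.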
 
Basically what he is saying is that they will have a foundation model that compresses the data from the 8 cameras, IMU, map, etc. into a state vector. That vector, together with the previous vectors, can then be fed to a neural network that sets the control output. They likely have some autolabeling process to find the optimal control output and to score incorrect outputs, and they will then train a neural network on that data to learn the control policy.

Then they will truly be end-to-end.
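The pipeline described above (compress sensors into a state vector, stack it with previous vectors, feed a control policy) can be sketched in a few lines of NumPy. Every dimension and the random-projection "encoder" here are made up purely for illustration; the real components would be large learned networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes: 8 cameras x 32 features each, 6 IMU channels, 10 map features.
SENSOR_DIM = 8 * 32 + 6 + 10
STATE_DIM, HISTORY, CTRL_DIM = 64, 8, 2  # state vector, timesteps kept, (steer, accel)

# Stand-in "foundation model": a fixed projection compressing all sensor
# inputs for one timestep into a single state vector.
W_enc = rng.normal(scale=0.05, size=(STATE_DIM, SENSOR_DIM))

def encode(sensors):
    return np.tanh(W_enc @ sensors)

# Stand-in control policy: maps current + previous state vectors to a
# control output; in practice this is the network trained against
# autolabeled "optimal" controls.
W_pol = rng.normal(scale=0.05, size=(CTRL_DIM, STATE_DIM * HISTORY))

def control(state_history):
    return W_pol @ np.concatenate(state_history)

history = [encode(rng.normal(size=SENSOR_DIM)) for _ in range(HISTORY)]
u = control(history)
print(u.shape)  # (2,): one steering and one acceleration command
```

The point of the design is the interface: once everything upstream is squeezed into a fixed-size state vector, the control policy can be trained and swapped independently of the perception stack.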
 


Kyle: you got served
Tesla:
 
Worth posting again in this thread. Anyone have a more challenging FSD drive than this 1 mi stretch of construction?

We encountered a serious construction zone with temporary traffic lights, a single lane, and a dirt road, and yet FSD made it through unassisted. Blown away! (Edit: ~1 mi long, one lane only, on the wrong side of the road!)

There was one spot where it stopped for almost 10 seconds as if pondering. Very tricky terrain, signs all over, and then it proceeded on its own. AT NIGHT!

I actually considered disengaging FSD after CLEARING IT ALL, just so I could congratulate the Tesla FSD Team in the comments.

Photo is of the beginning of the construction. Most impressive challenge yet!
 

Attachment: IMG_5199.jpeg (photo of the start of the construction zone)
The latest wide release, 2023.20.7 with FSD Beta 11.3.6, has been made available to over 50% of the TeslaFi/Teslascope fleets, including vehicles in regions that can't actually activate it. However, Tesla seems to be newly (re-?)prompting owners, such as those in the UK, to opt in to data sharing, probably to run FSD Beta in the background and get it ready for actual activation:



Allow Autopilot Analytics & Improvements
We are working hard to improve safety features and make self-driving a reality for you. You can help Tesla in this effort by sharing data that will be used as part of fleet learning to continuously improve Autopilot.​
This data is anonymous - it is not linked to your account or VIN, and includes for example: external camera data, trip data, map location and vehicle state data. For added protection, we apply privacy preserving methods such as precision blurring to the external camera data you share.​
 

You know, I never thought about how big an advantage in training data Tesla really has. Sure, it has a lot more cars on the road collecting data, but it also has tons of drivers providing live feedback, something other self-driving systems don't have. For machine learning models, labeling is resource-intensive and critical, so this user feedback should really speed up development.
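One way live driver feedback can substitute for manual labeling: every disengagement (or the absence of one) already labels the preceding clip. A toy sketch with made-up log records and label names; nothing here reflects Tesla's actual schema:

```python
# Made-up fleet log records: whether the driver disengaged and what they
# did immediately afterward. No human annotator is needed to label these.
events = [
    {"clip": "c001", "disengaged": False, "driver_action": None},
    {"clip": "c002", "disengaged": True,  "driver_action": "steer_left"},
    {"clip": "c003", "disengaged": True,  "driver_action": "brake"},
]

def auto_label(event):
    """A disengagement turns the driver's correction into the training
    label; an uninterrupted clip labels the planner's own output as OK."""
    if event["disengaged"]:
        return event["driver_action"]
    return "planner_ok"

labels = {e["clip"]: auto_label(e) for e in events}
print(labels)
# {'c001': 'planner_ok', 'c002': 'steer_left', 'c003': 'brake'}
```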
 
I swear Tesla is also conducting live A/B testing with those of us on v11.4.4. Some days a given route is flawless; on other days, the same route requires a couple of interventions.

Even if Tesla throws away the context of the verbal reports from my disengagements, they have a full dataset of which vehicle parameters and behaviors are more likely to lead to disengagements.
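Even without the verbal context, per-drive intervention counts keyed by route and software version are enough for an A/B-style comparison between builds. A minimal sketch over made-up drive logs (route names, versions, and counts are all invented):

```python
from collections import defaultdict

# Made-up drive logs: (route, fsd_version, interventions on that drive).
drives = [
    ("home_to_work", "11.4.4", 0),
    ("home_to_work", "11.4.4", 2),
    ("home_to_work", "11.4.3", 1),
    ("home_to_work", "11.4.3", 3),
]

def interventions_per_drive(drives):
    """Average interventions per drive for each (route, version) pair."""
    totals = defaultdict(lambda: [0, 0])  # (route, version) -> [sum, count]
    for route, version, n in drives:
        totals[(route, version)][0] += n
        totals[(route, version)][1] += 1
    return {key: s / c for key, (s, c) in totals.items()}

rates = interventions_per_drive(drives)
print(rates)
# {('home_to_work', '11.4.4'): 1.0, ('home_to_work', '11.4.3'): 2.0}
```

With enough drives per cell, comparing these rates across versions on the same route is exactly the kind of signal that would show whether a new build regressed.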
 