Really incoming FSD 12.1?

I think in the end there will be plenty of C++ collaborating with the NN stuff. Likely it is actually true that the NN stuff can give a much more "human-like" graceful response in the great majority of scenarios than they ever got to with the old approach. But I'll bet a lot that the NN stuff, by itself, will have plenty of edge cases where it does unacceptable things, and that they won't find it practical to train all of those out just by using a surfeit of "good driving in that scenario" examples for more NN training. So, if they have not already, I think they'll need to learn how to combine a "fixup" blanket of code that detects and corrects behavior in pretty specific edge cases while leaving the great bulk of the driving to the NN.

That may be very hard to do, and sounds unattractive compared to the siren song of "nothing but NN" but I suspect it is the real world.
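To make the idea concrete, here is a toy sketch of what such a fixup layer might look like: the NN proposes a plan, and a thin blanket of hand-written checks vetoes or corrects it in a few specific situations. Everything here is invented for illustration (and in the car it would presumably be C++ rather than Python); it's not a claim about how Tesla actually structures this.

```python
# Toy sketch of a hand-written "fixup" layer wrapping an end-to-end NN
# planner. All class/field names and thresholds are invented; this only
# illustrates the idea, not Tesla's architecture.

from dataclasses import dataclass

@dataclass
class Plan:
    steering: float            # normalized steering command, -1..1
    accel_mps2: float          # requested acceleration, negative = braking
    crosses_solid_line: bool   # does the planned path cross a solid line?
    gap_to_lead_m: float       # predicted gap to the lead vehicle

def nn_policy(camera_frames) -> Plan:
    """Placeholder for the end-to-end network's proposed plan."""
    raise NotImplementedError

def fixup(plan: Plan, speed_mps: float) -> Plan:
    """Detect and correct a few specific unacceptable behaviors,
    while leaving the bulk of the driving to the NN."""
    # Rule 1: never accept a plan that crosses a solid line at speed.
    if plan.crosses_solid_line and speed_mps > 5.0:
        plan.steering = 0.0
    # Rule 2: enforce roughly a 1 s time gap to the lead vehicle.
    if speed_mps > 0.1 and plan.gap_to_lead_m / speed_mps < 1.0:
        plan.accel_mps2 = min(plan.accel_mps2, -2.0)  # force gentle braking
    return plan
```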
 
Every version will always be BETA until "real sensors" are added to supplement the 1440p cameras being used.
[Attachment: TESLA FSD 12 HARDWARE KIT.jpg]
 
With MSFT, the idea was that the peons would be more nerdy, more likely to report bugs and test extensively.

Also, the last thing an AP manager wants to do is send it out to his peers for testing!
Don't you think Tesla has two dynamics that are missing in big legacy auto leadership?
At Tesla, everyone is an engineer and everyone is rowing in the same direction.
Remember, Farley was disgraced by a mile-long wiring harness and 300 pounds of copper wire, and his managers never pointed that out to him.
Now they are copying Tesla and should be on the road to better.

 
I think in the end there will be plenty of C++ collaborating with the NN stuff. Likely it is actually true that the NN stuff can give a much more "human-like" graceful response in the great majority of scenarios than they ever got to with the old approach. But I'll bet a lot that the NN stuff, by itself, will have plenty of edge cases where it does unacceptable things, and that they won't find it practical to train all of those out just by using a surfeit of "good driving in that scenario" examples for more NN training. So, if they have not already, I think they'll need to learn how to combine a "fixup" blanket of code that detects and corrects behavior in pretty specific edge cases while leaving the great bulk of the driving to the NN.

That may be very hard to do, and sounds unattractive compared to the siren song of "nothing but NN" but I suspect it is the real world.

I agree with you, but there's a good chance that some of the hard-coded rules & classical robotics code will move from the endpoint to the backend training system and be used to score/filter examples in the training set before sending them to nnet training.

There will also have to be significant synthetic generation of training examples exhibiting the correct behaviors---enough to overwhelm bad behaviors in human example driving.
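A back-of-the-envelope sketch of that "rules move to the backend" idea: classical checks score each candidate clip, and the score becomes a sampling weight so sloppy human driving is downweighted before it ever reaches nnet training. All field names and thresholds below are invented for illustration.

```python
# Hypothetical backend filter: score fleet clips with classical rules and
# weight the training set toward high-quality driving. Field names and
# thresholds are made up.

import random

def quality_score(clip) -> float:
    score = 1.0
    if clip.ran_stop_sign or clip.collision_warning:
        return 0.0                      # hard reject
    if clip.min_following_time_s < 1.0:
        score -= 0.5                    # tailgating
    if clip.max_lateral_g > 0.4:
        score -= 0.3                    # aggressive cornering
    if clip.lane_departures > 0:
        score -= 0.4
    return max(score, 0.0)

def sample_training_set(clips, n, seed=0):
    """Weighted sample: low-quality examples are rarely (or never) picked."""
    weights = [quality_score(c) for c in clips]
    rng = random.Random(seed)
    return rng.choices(clips, weights=weights, k=n)
```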
 
Did he misspeak and mean 1.6 meters longer than it needed to be, or is the Mach-E actually using kilometers of wiring in the car?
I don’t think he misspoke. Tesla has long been working on reducing the kilometers of wiring used in its vehicles:


The Cybertruck took that efficiency effort even further.
 
So we're just a few weeks away from public release then?

Really curious how V12 will work outside of California. Wouldn't surprise me if the model is overfitted to clips from Cali and is a vast improvement for users there, but users elsewhere will see a degradation initially.
I think Tesla is quite aware of the risk of oversampling California-supplied clips. Given they have the ability to easily source clips from anywhere, I don’t see why they would oversample from California.
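One simple way to avoid it, if they wanted to, would be to stratify clip sourcing by region rather than taking whatever the fleet happens to upload. A rough sketch, with the quota and attribute names invented for illustration:

```python
# Hypothetical region-balanced clip sampler, so no single region (e.g.,
# California) dominates the training mix. Quota and attribute names are
# made up.

import random
from collections import defaultdict

def balanced_sample(clips, per_region=10_000, seed=0):
    by_region = defaultdict(list)
    for clip in clips:
        by_region[clip.region].append(clip)
    rng = random.Random(seed)
    sample = []
    for region, items in by_region.items():
        sample.extend(rng.sample(items, min(per_region, len(items))))
    rng.shuffle(sample)
    return sample
```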
 
So we're just a few weeks away from public release then?

Really curious how V12 will work outside of California. Wouldn't surprise me if the model is overfitted to clips from Cali and is a vast improvement for users there, but users elsewhere will see a degradation initially.
Two weeks, of course!

It's actually better than expected that this expansion is apparently the same version that rolled out to the previous 3000 employees. However, it would be premature to anticipate any rollout to customers soon. Much could go wrong, and there could be multiple employee releases required.

We also don't know how long the full cycle time is between releases. It took about 5 weeks between the initial employee release and the last one. And that appears to have required a new version, from V12 to V12.1. If each revision requires 5 weeks of data collection, training clip selection and retraining, we could be waiting some time yet.

I'll keep my expectations at April and hope that I am pleasantly surprised.
 
Two weeks, of course!....
Don't you remember Elon touting V11 "One Stack to Rule Them All" with highway, ASS, Full Parking Lot and Auto Parking? It only took like 3 or 4 days and it was released to everyone... or was it 300 to 400 days and still no ASS? 🤪

My betting money would be on late spring to early summer and I still would limit my bet to be safe.
 
there's a good chance that some of the hard-coded rules & classical robotics code will move from the endpoint to the backend training system and be used to score/filter examples in the training set before sending them to nnet training.

There will also have to be significant synthetic generation of training examples exhibiting the correct behaviors---enough to overwhelm bad behaviors in human example driving.

This!

An end-to-end AI reframes the development of FSD from "hand-write code to drive the car" to "write code to source quality driving examples from the fleet that meet certain criteria". Bet it's a fascinating time to work at Tesla right now.

They may even be able to push that scoring down to the car. For instance, there are lots of cases where a human driver is obviously driving poorly and no training clips should be sourced from them. Things like excessively close following distance, not using blinkers, unnecessarily aggressive acceleration or braking, high lateral-g corners, drifting out of lane, excessive speeding, etc. The car already has the onboard capability to measure some of these for Safety Score!
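Purely as a sketch of what that onboard gate might look like (signal names and thresholds invented, not Tesla's actual telemetry), the car could decide whether a segment is even worth uploading as a candidate training clip:

```python
# Hypothetical onboard pre-filter: flag a drive segment as a candidate
# training clip only if the human driving passes basic quality checks,
# roughly the signals Safety Score already measures. Thresholds invented.

def is_good_example(seg) -> bool:
    if seg.min_following_time_s < 1.0:        # excessively close following
        return False
    if seg.lane_change_without_blinker:       # not using blinkers
        return False
    if seg.max_abs_accel_mps2 > 3.0:          # harsh acceleration/braking
        return False
    if seg.max_lateral_g > 0.4:               # high lateral-g cornering
        return False
    if seg.lane_departures > 0:               # drifting out of lane
        return False
    if seg.mph_over_limit > 10:               # excessive speeding
        return False
    return True

# Segments that pass could then be uploaded and scored further in the backend.
```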

Your second point about synthetic generation of training examples is interesting. This technique could be particularly useful in cases where either:

- humans perform poorly
- the situation is extremely rare

Barely averted collisions (AEB-type situations) would fit both.
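The training-mix side of that is easy to picture, even if the hard part is generating the synthetic clips in the first place. Here's a toy sketch of oversampling rare, safety-critical examples (synthetic or real) so they aren't drowned out by ordinary driving; the 20% ratio and function name are made up for illustration.

```python
# Toy sketch: overweight rare/synthetic safety-critical clips in the
# training mix so ordinary driving doesn't drown them out. Ratio and
# names are invented.

import random

def build_training_mix(ordinary_clips, rare_clips,
                       rare_fraction=0.2, total=1_000_000, seed=0):
    rng = random.Random(seed)
    n_rare = int(total * rare_fraction)
    mix = rng.choices(rare_clips, k=n_rare) \
        + rng.choices(ordinary_clips, k=total - n_rare)
    rng.shuffle(mix)
    return mix
```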