I thought Tesla's whole premise is that they have fleet data and can test things by running shadow mode within the fleet.
I suspect a good chunk of 12.x training data comes from situations where 11.x required disengagements, and this could potentially explain why some 12.x behaviors so far are worse than basic 11.x behaviors, as the "easy" stuff was under-represented. It also means 12.x would be able to learn when/how to do some advanced behaviors that 11.x had trouble with.

Similarly, 12.x disengagements haven't really been in the training data, because the fleet hadn't yet been deployed with 12.x actively attempting situations that require disengagement. Without this 12.x disengagement data, it might have trouble understanding when those learned advanced behaviors are inappropriate, as it only had positive signals from the 11.x disengagement data. This could explain 12.2.1's tendency to cross double-yellow lines, which requires a disengagement for the inappropriate attempt, and for unprotected left turns the signal for when it shouldn't attempt one may similarly not be strong enough yet. Tesla is probably focusing data collection and training on this type of scenario, because disengaging 12.x for a wrong unprotected-left attempt could be too dangerous for average drivers.
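Purely as a sketch of how that kind of skew could arise (this is not Tesla's pipeline; the clip structure, field names, and weights below are all invented):

```python
from dataclasses import dataclass
import random

@dataclass
class Clip:
    clip_id: str
    near_disengagement: bool  # hypothetical flag: clip was captured around an 11.x takeover

def sample_training_clips(clips, k, disengagement_weight=5.0):
    # Over-weighting disengagement-adjacent clips means "easy" everyday driving
    # ends up under-represented in whatever k clips get sampled for training.
    weights = [disengagement_weight if c.near_disengagement else 1.0 for c in clips]
    return random.choices(clips, weights=weights, k=k)
```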
 
I suspect a good chunk of 12.x training data comes from situations where 11.x required disengagements,
I don't think they can use the data from the disengagement drives for training, as obviously that isn't a five-star level drive to train against. (Maybe they can salvage some of it?) It does give them data on what they need to add to their triggers to try to catch a five-star drive in that situation. The point being that I think if there is a situation that you always end up disengaging for, you need to disengage FSDb before you get there and drive through it "perfectly" yourself to hopefully give them the good drive data they need to use for training.
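A toy sketch of what such a trigger might look like, with every name and threshold invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical list of spots where the fleet keeps needing to disengage (assumed input).
PROBLEM_LOCATIONS = {"intersection_123", "onramp_456"}

@dataclass
class DriveSegment:
    location_id: str        # e.g. a map tile or intersection id
    autopilot_active: bool  # was FSDb/Autopilot engaged through this segment?
    takeover_occurred: bool # did the driver have to grab control mid-maneuver?

def is_candidate_good_example(seg: DriveSegment) -> bool:
    """Trigger fires when a known problem spot was driven fully manually, start to finish."""
    return (
        seg.location_id in PROBLEM_LOCATIONS
        and not seg.autopilot_active   # driver disengaged before reaching the spot
        and not seg.takeover_occurred  # no last-second correction mid-maneuver
    )
```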
 
The point being that I think if there is a situation that you always end up disengaging for, you need to disengage FSDb before you get there and drive through it "perfectly" yourself to hopefully give them the good drive data they need to use for training.
A just-in-time (JIT) defensive disengagement at a particular location, along with repeated anticipatory disengagements and five-star driving through that location, could be useful for collaborative/fusion training.
 
But Chuck insulted the KING 👑 and he is going to suffer. The KING will spare NO EXPENSE to humiliate anyone who dares question anything about the KING. You have been forewarned.
The king can't stop Chuck from testing his UPL on v12. As soon as it begins to roll out in FL without Chuck, he will jump on Twitter looking to "borrow a ride," and there will be a line of Teslas outside his house later that day.
The only difference being that now Chuck is going to nitpick every little issue and tell everyone what he really thinks of FSD.
 
I don't think they can use the data from the disengagement drives for training, as obviously that isn't a five-star level drive to train against. (Maybe they can salvage some of it?)
If Tesla is training both towards good examples and away from bad ones, it should be relatively straightforward to label and train "this portion is good" and "this portion is bad." Or even if it's only the good parts, the video leading up to that section will still be part of the history context, and the earlier bad controls would be skipped. If the desire is to train end-to-end on human driving, it would seem natural not to train control outputs on the portions that have Autopilot active.

Triggers can be set up to capture video of varying duration up to some customizable time after a disengagement or other condition, so somebody disengaging 11.x ahead of needing to quickly switch multiple lanes after a turn might not need a special trigger. Sure, the timing could be off and not get the full clip of a "perfect" example, but if any of it is still good enough, those portions can still be used for training, and hopefully across the full fleet there should be enough partial and full examples.
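A minimal sketch of that masking idea, assuming a per-frame good/bad label and an Autopilot-active flag (none of this is Tesla's actual training code):

```python
from dataclasses import dataclass

@dataclass
class Frame:
    autopilot_active: bool
    label: str  # "good" or "bad" (hypothetical per-portion label)

def control_loss_mask(frames):
    # 1.0 where the control output should be trained on, 0.0 where it should be
    # ignored. Masked frames still appear in the history context fed to the model.
    return [1.0 if (not f.autopilot_active and f.label == "good") else 0.0 for f in frames]

# Example: 11.x active (and eventually disengaged), then the human drives the
# tricky section well -- only the human-driven tail contributes to the loss.
clip = [Frame(True, "bad"), Frame(True, "bad"), Frame(False, "good"), Frame(False, "good")]
print(control_loss_mask(clip))  # [0.0, 0.0, 1.0, 1.0]
```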
 
If Tesla is training both towards good examples and away from bad ones, it should be relatively straightforward to label and train "this portion is good" and "this portion is bad." Or even if it's only the good parts, the video leading up to that section will still be part of the history context, and the earlier bad controls would be skipped. If the desire is to train end-to-end on human driving, it would seem natural not to train control outputs on the portions that have Autopilot active.

Triggers can be set up to capture video of varying duration up to some customizable time after a disengagement or other condition, so somebody disengaging 11.x ahead of needing to quickly switch multiple lanes after a turn might not need a special trigger. Sure, the timing could be off and not get the full clip of a "perfect" example, but if any of it is still good enough, those portions can still be used for training, and hopefully across the full fleet there should be enough partial and full examples.
I'm curious how they are labeling - my guess is a human still decides whether a clip is good behavior vs. bad behavior.

If that's true, what happens if a human makes a mistake and submits a bad behavior into the training system as a good behavior?
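One common general-purpose mitigation, and purely an assumption here rather than anything Tesla has described, is to require several independent reviewers to agree before a clip is accepted:

```python
from collections import Counter

def consensus_label(votes, min_agreement=0.75):
    # Accept the majority label only if enough independent reviewers agree;
    # otherwise return None so the clip gets re-reviewed instead of trained on.
    if not votes:
        return None
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= min_agreement else None

print(consensus_label(["good", "good", "good", "bad"]))  # 'good' (3/4 agree)
print(consensus_label(["good", "bad"]))                  # None -> needs another look
```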
 
It may be just chance, but I have gotten the "Traffic control feature may be degraded" warning a number of times with V12, and I never saw it before. Note that it's appropriate, with a bright sun in the sky where traffic lights would be (but aren't).
There has been no rhyme or reason to my "traffic control feature degraded" messages. Most of the times I have gotten it, the sun has been behind me.
 
There are only 2.6 million miles of paved road in the US. If it costs $5 a mile to pay someone to drive, that's only $13 million, which is chump change for Tesla. Drive it all 100 times and it's still not crazy, so it seems totally scalable.
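For reference, the back-of-the-envelope math (the $5/mile rate is the poster's assumption, not a known figure):

```python
paved_road_miles = 2.6e6   # ~2.6 million miles of paved road in the US
cost_per_mile = 5          # dollars, the assumed rate
one_pass = paved_road_miles * cost_per_mile
print(f"one pass:   ${one_pass:,.0f}")        # one pass:   $13,000,000
print(f"100 passes: ${one_pass * 100:,.0f}")  # 100 passes: $1,300,000,000
```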


What's interesting about this clip is that you can see the planned path output matches the swerve into the pole. It seems like that part of the visualization is still quite accurate even with end-to-end. But it overlaps the non-drivable-space visualization, so that must not be what the end-to-end NN "saw".
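If the planner output really does cross into cells the occupancy output marks non-drivable, that kind of disagreement is at least easy to check for in principle. A toy consistency check (the grid, cell size, and coordinates are made up and have nothing to do with how Tesla's stack actually represents this):

```python
# Toy occupancy grid: True = non-drivable cell, False = drivable.
GRID = [
    [False, False, True ],
    [False, False, True ],
    [False, False, False],
]
CELL_SIZE = 1.0  # meters per cell (arbitrary for this toy example)

def path_hits_non_drivable(path_xy, grid=GRID, cell=CELL_SIZE):
    """Return True if any planned-path point lands in a cell marked non-drivable."""
    for x, y in path_xy:
        col, row = int(x // cell), int(y // cell)
        if 0 <= row < len(grid) and 0 <= col < len(grid[0]) and grid[row][col]:
            return True
    return False

# A path that drifts toward the right-hand non-drivable cells (like the swerve
# toward the pole) would be flagged by this kind of check.
print(path_hits_non_drivable([(0.5, 0.5), (1.5, 0.5), (2.5, 0.5)]))  # True
```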
 
There are only 2.6 million miles of paved road in the US. If it costs $5 a mile to pay someone to drive, that's only $13 million, which is chump change for Tesla. Drive it all 100 times and it's still not crazy, so it seems totally scalable.


What's interesting about this clip is that you can see the planned path output matches the swerve into the pole. It seems like that part of the visualization is still quite accurate even with end-to-end. But it overlaps the non-drivable-space visualization, so that must not be what the end-to-end NN "saw".
This happens frequently to me with V11 (a few times a week).
Usually, if nobody is near me and I'm familiar with the corner/posts, I let FSD correct itself. I used to disengage and let Tesla know, but I figure nobody at Tesla pays attention to anything V11.
 
Well, that sucks big time! I hadn't heard that before, but I routinely use TACC when I'm in situations that FSD can't handle or when I just don't have the patience to deal with it. Unless v12 truly achieves 0 interventions, they need to bring TACC back.
Ah! After parking for an hour or so, FSD is suddenly back. The mysteries of Tesla will never end.
And to answer your question, I had a driving score of 100 so I've been using FSD since the release of 10.2.
Perhaps that and the fact I've never had a strike, along with living in California, contributed to my receiving it now.
I've never had an early release like this since the original 10.2.
Several releases ago there was a bug that made the car think a door was open/ajar which would disable FSD/TACC. They fixed it but it could be back.
 
Sending out / hiring drivers to produce V12 training data is not scalable and negates the whole point of a large fleet.
It makes perfect sense if there's a specific scenario on which they wish to train. A question I have is whether they can identify problem areas and record video of drivers driving correctly in lieu of test drivers.
I don't think they can use the data from the disengagement drives for training
The problem with disengagements is that, by definition, something went wrong to cause the disengagement, so they would be ruled out as training data.
 
There are only 2.6 million miles of paved road in the US. If it costs $5 a mile to pay someone to drive, that's only $13 million, which is chump change for Tesla. Drive it all 100 times and it's still not crazy, so it seems totally scalable.


What's interesting about this clip is that you can see the planned path output matches the swerve into the pole. It seems like that part of the visualization is still quite accurate even with end-to-end. But it overlaps the non-drivable-space visualization, so that must not be what the end-to-end NN "saw".
I have a turn where it does something similar on my way home from work. The route planner behaves correctly, but FSD spazzes out in the middle of the turn and the tentacle starts flashing between different options, forcing me to disconnect. It's not at all clear why it does this, since the streets are clear and the map and routing information is correct.
 
So has V12 been disabled for all who have it?
It hasn't been disabled for me. Also attached is a screenshot showing that double-pull Autopilot activation is disabled for v12 FSD.
 

Attachments: 2024-03-06 21.00.36.jpg