Also, regarding the processing of multiple camera feeds at the same time: if the training is taken from human driving examples, and humans cannot see in all directions simultaneously, does the behavior demonstrated in the driving footage reflect knowledge of all cameras at all times?

Yes, by way of a concept called "survivorship bias". All the human left turns are recorded using all the cameras; it doesn't matter that the human can't look at all that data at once. Videos that end in a collision or near miss are excluded (all cameras are tied together as a set for a composite video, so there is no tossing part of a set and keeping the rest; if it's a crash or near miss, you toss all camera data related to that event*). The ones that remain are the survivor videos, and those are the primary videos used to train the AI.

So the AI will use data from more than one camera, trained on video recorded by more than one camera, even though that video comes from a driver who could only look one way at a time.

* When I say "toss", I mean you mark it as a collision or near miss and don't use it for the training set you are trying to build. You might be able to use that video for some other task; it just won't be in the normal data set of behavior to reproduce.
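If it helps, here is a toy sketch of that "toss the whole set" rule in Python; the ClipSet structure, field names, and outcome labels are invented for illustration and are not Tesla's actual pipeline:

```python
# Toy illustration of the "toss the whole camera set" idea above.
# ClipSet, its fields, and the outcome labels are invented; this is
# not Tesla's actual data pipeline.
from dataclasses import dataclass

@dataclass
class ClipSet:
    clip_id: str
    cameras: dict[str, str]   # camera name -> path to that camera's video file
    outcome: str              # "clean", "near_miss", or "collision"

def survivor_clips(clips: list[ClipSet]) -> list[ClipSet]:
    """Keep only clip sets with a clean outcome.

    A crash or near miss removes ALL cameras for that clip; there is no
    keeping part of a set and tossing the rest.
    """
    return [c for c in clips if c.outcome == "clean"]

clips = [
    ClipSet("left_turn_001", {"front": "f1.mp4", "left_repeater": "l1.mp4"}, "clean"),
    ClipSet("left_turn_002", {"front": "f2.mp4", "left_repeater": "l2.mp4"}, "near_miss"),
]
print([c.clip_id for c in survivor_clips(clips)])   # -> ['left_turn_001']
```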
 
True. And likewise I imagine they exclude clips that are not necessarily catastrophic but less than ideal, for example a driver passing up a safe opportunity to advance through an intersection. I'm sure they've closely evaluated any clip that made it into the training set, though they probably have an automated method of filtering out the majority of clips that are likely unsuitable, since they can't watch every clip that gets uploaded to the mothership.
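As a purely hypothetical sketch of what such an automated pre-filter could look like (the signal names and the 3-second threshold are invented, since Tesla's actual criteria aren't public):

```python
# Hypothetical pre-filter: flag clips where the driver kept sitting at an
# intersection well after a safe gap opened up. Signals and threshold are
# made-up assumptions, not Tesla's real criteria.

def looks_hesitant(seconds_stopped: float, seconds_since_gap_clear: float,
                   max_wait_after_clear: float = 3.0) -> bool:
    """True if the driver waited noticeably longer than needed after the way was clear."""
    return seconds_stopped > 0 and seconds_since_gap_clear > max_wait_after_clear

candidates = [
    {"clip": "a", "seconds_stopped": 4.0, "seconds_since_gap_clear": 1.0},   # normal
    {"clip": "b", "seconds_stopped": 9.0, "seconds_since_gap_clear": 6.5},   # hesitant
]
keep = [c["clip"] for c in candidates
        if not looks_hesitant(c["seconds_stopped"], c["seconds_since_gap_clear"])]
print(keep)   # -> ['a']
```

Anything flagged this way would presumably get dropped or routed for human review rather than going straight into the training set, the same way collision clips are.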
 
On the tangent topic of superhuman performance, I was thinking about how nice it would be for the car to have front bumper cameras for situations where there is limited cross-traffic visibility. For example, this scenario where Chuck gets stuck @7:15.


The catch-22 is that for an L2 system, even if the car sees that it's clear to advance, if the driver can't verify that, the driver would have to disengage anyway.
 
If there’s nobody creeping up behind you, you should find that it generally will stay in the left lane. When someone comes up behind you and there’s nobody in front of you, it should get over. (This is how I usually drive).

Highway logic isn’t using the new neural net stuff yet.
Right, understood, and that's worked for me. But it should also move to the right when I use the indicator stalk. Sometimes it will, but other times it won't no matter how many times I try, and then I disengage.
 
She claims this is a v12.4 test drive. It doesn't appear much different from v12.3.6. Still a slow approach and indecision at stop signs. At ~3:00 the driver steps on the accel pedal from a stop sign, and again later at a traffic light. Auto max speed still looks problematic. Turn signals still appear to be initiated early. Possibly smoother decel in other scenarios?

 
I suppose the car could get around that catch-22 by moving slowly at first to let the driver see that the coast is clear, then completing the maneuver. But the more important reason for bumper cameras is to reduce cases where the car thinks the coast is clear but is wrong.

For example, if the 8-camera setup can see just far enough to detect potentially dangerous cars traveling at the speed limit, it might assume the coast is clear but miss a speeding driver who's slightly farther away. If bumper cameras could better see such drivers from typical intersection stopping points, that would provide extra safety while avoiding the need for the car to creep dangerously far into the intersection. We do want the car to eventually have superhuman safety, after all!
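To put rough numbers on that (purely illustrative; the time-to-clear, margin, and speeds below are my own assumptions, not anything from Tesla's planner), here's a toy sight-distance calculation:

```python
# Toy sight-distance check: how far down the cross street must the car see
# before committing to the turn? The 6 s time-to-clear, 1 s margin, and the
# speeds are invented illustrative numbers, not anything from Tesla's planner.
MPH_TO_MPS = 0.44704

def required_sight_distance_m(cross_speed_mph: float,
                              time_to_clear_s: float = 6.0,
                              margin_s: float = 1.0) -> float:
    """Distance cross traffic covers while we clear the intersection, plus a margin."""
    return cross_speed_mph * MPH_TO_MPS * (time_to_clear_s + margin_s)

for speed in (45, 60):   # posted limit vs. a speeder
    print(f"cross traffic at {speed} mph -> need ~{required_sight_distance_m(speed):.0f} m of sight line")
```

With those numbers, a sight line that just barely covers 45 mph cross traffic (~141 m) comes up ~47 m short against a 60 mph speeder, which is exactly the kind of gap a farther-forward bumper camera could help close.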
 

Used her prestidigitation skills to sneak it away from Elon? Wouldn’t be hard; that guy’s mind is extraordinarily susceptible to suggestion.
 
The default for any given video clip is not to be used in training; the overwhelming majority are not.

99.9% of fleet-captured video is never sent to the mothership. The remaining 0.1% is sent either on selective request (if Tesla requests that the fleet upload videos of a particular unusual situation they're trying to gather data for), or perhaps in cases of collision or near-collision.

Of the clips Tesla receives, perhaps 0.1% of those might be chosen for training, and even these are very selectively cherry-picked to be particularly good examples of skilled human driving.

So out of a million miles driven by the fleet, perhaps only one mile is used for training. The fleet has cumulatively driven about 200 billion miles at this point, so that works out to about 200,000 miles used for training. At 30mph for average city-street driving, that's about 2 million twelve-second clips. Of course this is supplemented by vast amounts of synthetic training data.
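Spelled out as a quick calculation (again, the 0.1% rates are just my guesses, not Tesla figures):

```python
# The back-of-the-envelope arithmetic above, spelled out. The 0.1% upload and
# 0.1% selection rates are guesses, not Tesla figures.
fleet_miles = 200e9        # ~200 billion cumulative fleet miles
upload_rate = 0.001        # fraction of captured video ever sent to the mothership (guess)
selection_rate = 0.001     # fraction of uploaded clips picked for training (guess)

training_miles = fleet_miles * upload_rate * selection_rate    # ~200,000 miles
hours = training_miles / 30                                    # ~30 mph average city speed
clips = hours * 3600 / 12                                      # twelve-second clips

print(f"~{training_miles:,.0f} training miles ~= {clips:,.0f} twelve-second clips")
```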

These guesstimates may be an order of magnitude or two off, but you get the idea. Every one of the training clips will show excellent driving; there shouldn't be any clips that include a driver hesitating at an intersection when they have the safe right of way, and so on.
 
I doubt it's 12.4. It doesn't have the new visualization screen, no map on upper right corner.
Those are features of 24.14.x and not FSDS features. But without proof it is still NOT 12.4. We will see 12.4 show up on TeslaFi when some of the wider group of employees get it, and that will be at least a week and an update (12.4.1) before anyone in the general public gets it.

EDIT: I don't use My Space anymore (even before Elon scooped it up for such a bargain price 🤬) so I can't see replies.
 
Yes, the indicator-stalk behavior you describe is a new issue with v12. Manually indicating a lane change isn't handled consistently.

On city streets, sometimes it will change lanes, sometimes it will waffle and go back, and sometimes it will just cancel the change altogether.

I think that will be fixed in 12.4.

I don't recall having issues doing lane changes on highways. I think if the v11 stack is active (Auto speed is not active), it has changed lanes consistently for me.
 
 

I've noticed that turn signal behavior in 12.3.6 is noticeably worse than in previous v12 releases. Often it will not signal at all in turn lanes (granted, that's like half the human drivers on the road...).
 
I did a bunch of driving yesterday and got to compare HW3 in my Model 3 on 2024.14.6 and HW4 in the Model Y with 2024.3.25.
Both with FSD 12.3.6; the Model 3 is on its 30-day trial. I got to drive >100 miles in each car on a good mix of highway and regular roads.
I'd forgotten how much I miss the accuracy of the USS when parking, so much nicer than the hand waving of the cameras.
The two cars were much closer than they were with 11.4.9.
The Y really did seem to spot traffic lights and other cars much sooner than the 3, but overall the difference is much smaller than it was with 11.4.9 and earlier.
Both versions displayed the same issues; I was curious to see if there was a difference. Still ignores speed limit signs over 60 if the road isn't divided, and still has the fetish for the left lane.
Not sure if I'm disappointed or not by the lack of difference; if they're that close, what was the point in upgrading all the hardware?
I remember reading that musky said that FSD was targeting HW3 and that HW4 was running HW3 emulation - but that was probably back when he wanted to stop people waiting for HW4 cars :rolleyes:
[edit]
Obviously HW4 does improve visible camera quality and clarity on the display, but I was looking for FSD differences.
 
Extraordinary claims REQUIRE proof.
TL;DW and boooooooring
I claim I have 12.5 already. Meaningless (and presumed to be a lie or mistake) without showing the Software screen and backing it up with a screenshot of the App.
Agreed. I wouldn't trust you unless you flew me to your palace and showed me in person... and even then I'd be dubious...