Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Not sure who you are referencing, but a 2017 Model S has induction motors, and induction motors can't regen below about 5 mph.
I understand that. That was my point. The FSD software may be written to accommodate both motor variants by using the mechanical brake at the end (as Alan likes to call out), on purpose, so one codebase can support both eras.
 
Slow down, speed up, slow down, speed up.
😁 That's exactly what it did when a light turned yellow on me yesterday. It was like there were two little people in the computer fighting over whether it should stop or go. Made me think of those cartoons where there's an angel whispering on one shoulder and a devil on the other.

To be fair, speed and distance were such that either decision would have been reasonable.
 
Cars of this era will regen as normal down to 2-3 mph, then basically freewheel until you apply the brake to stop and hold.
Yeah there may be differences. I use hold mode.
both motor variants by using the mechanical brake at the end (as Alan likes to call out), on purpose, so one codebase can support both eras.
Sure. I would actually like it to use the brakes at the end. That's not quite what it does at the moment.

Is it possible that what you often note regarding the early stop and mechanical brake are buffers they put in place so a unified software build can accommodate several motor variants, etc.?
Seems unlikely to explain observed behavior.

Elon has seemingly never had a grasp of versioning.
Or reality for that matter.

Maybe it's just his wishful thinking, which, as he says, is a very difficult trap to avoid (especially for him, it seems).
 
And another thing: on a ULT (unprotected left turn), 12.3.6 was headed for a roadway that was under construction.

I was coming out of the driveway from the south. All traffic on the formerly 3-lane road had been diverted to the newly added eastbound lanes, and the old roadway had been demolished for construction of the westbound lanes. The Tesla was aiming between the cones toward the demolished lanes until I forced a hard left. I noticed that on that stretch of road, my location was consistently shown south of the road on the map.

(attachment: map screenshot)
 
Yeah, this bull$hit method of putting the UI version before the software version needs to be changed or abandoned. All it does is cause confusion and offer NO insight. So weird that Elon doesn't get something that is so f'n obvious.
How in the world do you know Elon doesn't get it?
 
Yeah, this bull$hit method of putting the UI version before the software version needs to be changed or abandoned. All it does is cause confusion and offer NO insight. So weird that Elon doesn't get something that is so f'n obvious.
Actually, I think it provides lots of insight, but it depends on perspective and on what the various versioning systems mean.
The raw software version provides a pretty accurate timeline of the major build levels and is instantly recognizable: Year.week.major.minor.
The UI version is exactly what it implies: the macro-level UI that the software will use for display.
FSD versioning is independent of, but reliant upon, the base software, so it still needs its own versioning.
Getting rid of the different versioning would offer even less insight without simplifying anything.
For example, without UI versioning the difference between V11 and V12 would just be 2024.8.9 versus 2024.14.3, yet the whole interface looks different. There's nothing in the software versioning to highlight that.
The same goes for losing FSD versioning, but worse, because FSD is based on different software builds and is not time based.
For some of us it's really obvious why the different parts of the system have different versions.
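For concreteness, the Year.week.major.minor scheme described above can be sketched as a tiny parser. This is purely my own illustration: the `BuildVersion` type, field names, and the branch-date approximation are invented here, not Tesla code, and the ISO-week math is only approximate.

```python
from datetime import date, timedelta
from typing import NamedTuple

class BuildVersion(NamedTuple):
    year: int
    week: int
    major: int
    minor: int

def parse_build(version: str) -> BuildVersion:
    """Parse a Tesla-style build string like '2024.14.3' or '2024.8.9.1'.

    Year.week roughly identify when the build branched; the trailing
    fields are revisions on that branch (minor defaults to 0).
    """
    parts = [int(p) for p in version.split(".")]
    year, week, major = parts[0], parts[1], parts[2]
    minor = parts[3] if len(parts) > 3 else 0
    return BuildVersion(year, week, major, minor)

def approx_branch_date(v: BuildVersion) -> date:
    # Rough week-of-year approximation of when the build branched.
    return date(v.year, 1, 1) + timedelta(weeks=v.week - 1)

v11 = parse_build("2024.8.9")    # a V11-era build
v12 = parse_build("2024.14.3")   # a V12-era build
# NamedTuple ordering gives chronological comparison for free,
# but nothing in these numbers hints at the V11 -> V12 UI overhaul;
# that information has to live in a separate UI/FSD version.
```

The point of the sketch: the build string encodes *when*, while the UI and FSD versions encode *what changed*, which is why collapsing them would lose information.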
 
Damn, I've been having some complex, great urban drives the last few days. I swear it seems I've gotten at least some delta-update action improving 12.3.9. Or maybe Elon slipped me some 12.4 lovin' in shadow mode. 🫣 🤣 🤣 Today I even had it make 2 yellow lights that in past 12.3.x builds would have been a hard braking/stop event. Yesterday I went 7 miles up Peachtree St/Rd from Midtown to Buckhead and back in heavy traffic without a single disengagement. My only disengagement in the last few days was today on the GA Tech campus, on a moderate-traffic street, where it tried to get into the right-turn lane when it needed to go straight. But it is now killing it in the city and seems to have gotten so much better. In fact, it is so good now I'm not "dying" for 12.4.x. At least until others start getting it and FOMO kicks my a$$.
@AlanSubie4Life
 
Actually, I think it provides lots of insight, but it depends on perspective and on what the various versioning systems mean.
The raw software version provides a pretty accurate timeline of the major build levels and is instantly recognizable: Year.week.major.minor.
The UI version is exactly what it implies: the macro-level UI that the software will use for display.
FSD versioning is independent of, but reliant upon, the base software, so it still needs its own versioning.
Getting rid of the different versioning would offer even less insight without simplifying anything.
For example, without UI versioning the difference between V11 and V12 would just be 2024.8.9 versus 2024.14.3, yet the whole interface looks different. There's nothing in the software versioning to highlight that.
The same goes for losing FSD versioning, but worse, because FSD is based on different software builds and is not time based.
For some of us it's really obvious why the different parts of the system have different versions.
Sorry for quoting my own post, @JulienW, but I remembered the map version as well 🤓 🤓 🤓
But the edit window had closed 🤷‍♂️
 
This is a good example of how poor navigation information may be affecting FSD. As you can see in the picture, I need to turn left and then turn right in 400 feet, yet the navigation says I can use either of the turn lanes. If FSD is given no instruction on which turn lane to use, it may pick the wrong one and be unable to make the next turn. At best, it's setting itself up for a difficult maneuver.
(attachment: navigation screenshot)
 
This is a good example of how poor navigation information may be affecting FSD. As you can see in the picture, I need to turn left and then turn right in 400 feet, yet the navigation says I can use either of the turn lanes. If FSD is given no instruction on which turn lane to use, it may pick the wrong one and be unable to make the next turn. At best, it's setting itself up for a difficult maneuver.
View attachment 1049441
You cracked the case!
 
Confirmation of v12.4 rolling out to employees:

Interestingly enough, I've been using Autosteer and TACC again, and I had zero nags, not even the flashing blue, when driving normally. However, when I looked at my phone, it nagged me with a "boop" after a second. I guess that's a good balance. It seems Tesla has already been moving in this direction since the awful nagging builds around the holidays.
 
This is a good example of how poor navigation information may be affecting FSD. As you can see in the picture, I need to turn left and then turn right in 400 feet, yet the navigation says I can use either of the turn lanes. If FSD is given no instruction on which turn lane to use, it may pick the wrong one and be unable to make the next turn. At best, it's setting itself up for a difficult maneuver.
View attachment 1049441
FSD doesn't appear to have a macro planning function. By this I mean it doesn't map out the entire optimal route (including all lane changes and turns) at the start. It could be that the nav data and nav system aren't advanced enough for that, but it could also be that they simply never bothered to add that function.
 
The current visualization does "fuzz out" in areas it can't perceive; it generally won't show an empty road if it doesn't know it's empty. (At least that's been my experience; granted it still has some flicker/instability, especially at long range, and is far from perfect at detecting everything.)
It seems to do that fuzz-out effect at the edges of the screen, but it will not indicate impaired visibility within the inner portion of the screen. For example, in this clip @29:20, notice how the car waiting in the left half of the intersecting road pops into existence. The visualization confidently renders the area around the stop line as vacant until it gains proper visibility by moving forward. A similar problem exists when pulling out of parking spots. If the visualization indicated areas of impaired visibility, it would become safer to use as a tool. As it stands, it can't really be trusted.


I'm sure they are pre-screened by algorithm, but even 2 million short clips is few enough that I'm pretty sure they're individually human-reviewed. The consequences of a bad example (or a few) slipping through and becoming training examples could cause outsize problems with the neural network, and potentially legal problems too. Imagine if a clip of a car running over a pet makes it into the training set, and later someone sues Tesla when a car on FSD runs over their pet, and then the bad training video comes out in litigation!
I started to think some bad clips may have passed into the NN after experiencing FSD braking in response to a traffic light turning yellow, followed by a sudden "decision" to instead accelerate through the light. It's strikingly similar to the hesitancy of a human driver. But maybe it's not the result of specific training clips but rather some other technical issue.

Edit: Sorry for the late reply to this topic btw.
 
It seems to do that fuzz-out effect at the edges of the screen, but it will not indicate impaired visibility within the inner portion of the screen. For example, in this clip @29:20, notice how the car waiting in the left half of the intersecting road pops into existence. The visualization confidently renders the area around the stop line as vacant until it gains proper visibility by moving forward. A similar problem exists when pulling out of parking spots. If the visualization indicated areas of impaired visibility, it would become safer to use as a tool. As it stands, it can't really be trusted.
It does have "object permanence", which cuts both ways. If it sees a car passing behind another car, it will project/extrapolate its path even while its view is blocked. The flip side is that if it sees an empty patch of street (as it does around 29:12 in your video), it will assume (on the visualizer) that it stays empty, even while the view of it is temporarily blocked. It's not clear what the "right thing to do" is as far as how to visualize an "object-permanence" situation (or in this case, lack-of-object permanence). The human brain runs into the same issue at e.g. a magic show, where a ball appears under a cup that you could have sworn was empty.
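The "object permanence" behavior described here could be sketched as constant-velocity dead reckoning while a detection is occluded. This is a toy illustration only; the `Track`/`update` names, the coasting logic, and the two-second cutoff are all invented here, not Tesla's actual tracker:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    x: float                   # position along the road, meters
    vx: float                  # estimated velocity, m/s
    occluded_for: float = 0.0  # seconds without a fresh detection

def update(track: Track, detection_x: Optional[float], dt: float,
           max_coast: float = 2.0) -> Optional[Track]:
    """Advance one tracker step.

    With a detection: snap to it and re-estimate velocity.
    Without one: coast at the last velocity (object permanence),
    but drop the track after `max_coast` seconds so a stale guess
    doesn't keep masquerading as a confident observation.
    """
    if detection_x is not None:
        vx = (detection_x - track.x) / dt
        return Track(detection_x, vx, 0.0)
    coasted = track.occluded_for + dt
    if coasted > max_coast:
        return None  # better to admit ignorance than render a stale car
    return Track(track.x + track.vx * dt, track.vx, coasted)
```

The flip side discussed above (rendering an occluded patch as confidently empty) is exactly what the `max_coast` cutoff tries to avoid: past some occlusion time, the honest output is "unknown", not the last belief.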
I started to think some bad clips may have passed into the NN after experiencing FSD braking in response to a traffic light turning yellow, followed by a sudden "decision" to instead accelerate through the light. It's strikingly similar to the hesitancy of a human driver. Maybe it's not the result of specific training clips but rather some other technical issue.
I don't think this is an issue with the training clips; rather, it's a temporal instability issue, like when the steering wheel used to wiggle wildly around during a turn, as the computer recomputed a new solution every tenth of a second. The network architecture needs to be adjusted to ensure better temporal smoothness, even if this means sticking with a prior decision that now seems slightly sub-optimal. (Once it makes a decision to brake for a yellow light, it should have a strong bias to stick with that safer decision, even if it determines a second later that it could still speed through the light.) Obviously this is only true if the initial decision is the safer one; if it initially decides it CAN make it through the light, then a second later realizes it can't, then of course it should brake. (And stick with THAT braking decision!)
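The "stick with the safer decision" bias described above amounts to asymmetric hysteresis. A minimal sketch, with the function name and both thresholds invented purely for illustration:

```python
def yellow_light_decision(prev: str, stop_confidence: float) -> str:
    """Choose 'stop' or 'go' with asymmetric hysteresis.

    stop_confidence in [0, 1] is the planner's current belief that
    stopping is the right move. Flipping go -> stop is easy (it is the
    safer direction); flipping stop -> go demands near-certainty, which
    suppresses the stop/go/stop oscillation described in the thread.
    """
    if prev == "go":
        # Safer direction: switch to braking on an ordinary majority.
        return "stop" if stop_confidence > 0.5 else "go"
    # prev == "stop": abandon braking only on overwhelming evidence.
    return "go" if stop_confidence < 0.05 else "stop"
```

Note the asymmetry: a belief of 0.4 flips a "go" plan to nothing (it stays "go") but keeps an existing "stop" plan, which is the per-decision smoothness the post argues for.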
 
FSD doesn't appear to have a macro planning function. By this I mean it doesn't map out the entire optimal route (including all lane changes and turns) at the start. It could be because the nav data and nav system isn't advanced enough for that, but it could also be they simply never bothered to add that function.
I always assumed the macro planning was done by the navigation system. That’s what it’s designed to do and at that level it shouldn’t change for FSD. I can’t think of any reason you’d want to duplicate that work within FSD when you already have the functionality available.
 
I always assumed the macro planning was done by the navigation system. That’s what it’s designed to do and at that level it shouldn’t change for FSD. I can’t think of any reason you’d want to duplicate that work within FSD when you already have the functionality available.
For situations like a dogleg turn, it has to be a hybrid approach. The right thing for the neural network to do at any given moment depends on both the video input and on the navigation goals (particularly if the navigation goals involve an upcoming turn or merge), and so the network will have to be explicitly trained on such situations (combinations of video inputs and navigation goals) in order to handle them all smoothly.
 
I always assumed the macro planning was done by the navigation system. That’s what it’s designed to do and at that level it shouldn’t change for FSD. I can’t think of any reason you’d want to duplicate that work within FSD when you already have the functionality available.
But the navigation system doesn't appear to have lane planning at all. For example, when I drive with the nav, it doesn't tell me well in advance which lane is optimal. Instead, I have to be closer to the junction before it shows a standard lane hint for that turn or junction (the one that shows the lane layout for the junction specifically), which, as you point out in your own example, can offer lanes that are "incorrect" if you want to avoid last-minute lane changes at the next turn or junction.

Thus FSD needs its own macro lane-planning function if you want to avoid situations like that. Without it, it's going to drive like a driver relying on lane hints at every turn/junction rather than making a holistic lane plan for the entire trip.
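The holistic lane plan being described could be sketched as a shortest-path search over a lane-level graph, where a forced late lane change carries a high cost. Everything here (the graph, node names, and costs) is invented to illustrate the dogleg left-then-right example from earlier in the thread; it is not how Tesla's planner works:

```python
import heapq
from typing import Dict, List, Tuple

# Lane-level graph: node -> list of (neighbor, cost). Staying in a lane
# through a segment is cheap; a last-minute lane change is expensive.
LaneGraph = Dict[str, List[Tuple[str, float]]]

def plan_lanes(graph: LaneGraph, start: str, goal: str) -> List[str]:
    """Dijkstra over the lane graph, returning the lane sequence."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return []

# Dogleg example: left turn, then a right turn 400 ft later. Turning left
# from the rightmost of the two left-turn lanes makes the upcoming right
# turn cheap; the other lane forces a costly last-minute change.
graph: LaneGraph = {
    "approach": [("left-turn-lane-L", 1.0), ("left-turn-lane-R", 1.0)],
    "left-turn-lane-L": [("after-turn-left-lane", 1.0)],
    "left-turn-lane-R": [("after-turn-right-lane", 1.0)],
    "after-turn-left-lane": [("right-turn-lane", 5.0)],   # late change
    "after-turn-right-lane": [("right-turn-lane", 1.0)],  # already placed
    "right-turn-lane": [("destination", 1.0)],
}
```

Planning over the whole trip, the search picks `left-turn-lane-R` up front, exactly the lane choice a hint-by-hint driver would miss.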
 
This is a good example of how poor navigation information may be affecting FSD. As you can see in the picture, I need to turn left and then turn right in 400 feet, yet the navigation says I can use either of the turn lanes. If FSD is given no instruction on which turn lane to use, it may pick the wrong one and be unable to make the next turn. At best, it's setting itself up for a difficult maneuver.
View attachment 1049441
This is my experience: if there are two nearby freeway entrances/exits on the right side of the route, FSD sometimes incorrectly takes the closer one instead of the farther one that the destination requires.