Welcome to Tesla Motors Club
The overwhelming majority of my disengagements on 12.3.6 are personal preference issues (e.g., I want to be in a different turn lane, or I don't like how it's stopping). There is some grey area, too - the disengagement I had earlier today, for example. Would it have caused an accident? Probably not, but it wasn't safe and I didn't want to find out. There are also cases where what it's doing is wrong and/or illegal but may not cause an accident.
I define these personal preference items as interventions. Any disengagement, IMHO, is safety related on some level. I think this is what Musk is really referring to in his X post, but he uses the term intervention.

I "intervene" all the time with FSD 12.3.6 because it's still way too shy with, for example, following distance and closing the gap when in the passing lane. In many other situations FSD is just plain too timid, so I end up hitting the go pedal for encouragement, or to avoid people passing on the right because the gap in front of our car is too large for most human drivers - especially those with a more aggressive tendency. Couple that with FSD's continued love of the left lane, and its inability to execute a timely right lane change when prompted via the stalk, and at times I'll disengage just to make a more timely lane change.

I don't count these as safety disengagements, though; they're more preferential, about being accommodating and respectful to the other human drivers around the car. Obviously hitting the go pedal doesn't disengage FSD - but I still consider it an intervention, as I'm still having to tell the car to accelerate in a more timely/appropriate manner on a fairly frequent basis. Hopefully this gets better with 12.4 and beyond.

I think it would be useful here on TMC if we differentiated between true safety disengagements vs preferential interventions.
 
Time will tell if this is real, but the first ~3 minutes were a bit painful to watch. It remains slow and indecisive. Later it performs a lane change in an intersection, and makes a lane change over a solid white line in the left turn lanes.

On the positive side, max set speed is improved, the accel/decel profile appears slightly gentler, and it appropriately stops closer to crosswalk lines.
They show their version in each video and there are obvious differences. Do people really think YT influencers are using CGI or something to fake that they have 12.4?

It's 12.4...
 
True for many states, but not all. And so it's not the safest choice, especially when FSD isn't tailored for all jurisdictions.
Anecdotal evidence would suggest that FSDS is trained on California traffic laws, and will operate that way in all states. Doubtful we'll see regional training models for different state laws for some time.
 
Anecdotal evidence would suggest that FSDS is trained on California traffic laws, and will operate that way in all states. Doubtful we'll see regional training models for different state laws for some time.
In the previous hand-coded version, regional differences could be hand-coded in. Now with all NNs, it's going to be a lot more complex to add them in.
 
In the previous hand-coded version, regional differences could be hand-coded in. Now with all NNs, it's going to be a lot more complex to add them in.

I think it's the opposite, actually.

Writing and maintaining different sets of behavior in hand-written code is a task of enormous time and complexity. Moving to an end-to-end network turns that complexity problem into a scale problem. If there are traffic signs or lane configurations that only occur in one state, or even one locality within one state, being able to navigate them like a native human driver from that location is only a matter of sampling enough training data that shows humans successfully and lawfully navigating it.

The nice part about end-to-end is that Tesla doesn't even need to be aware of these region-specific driving laws. As long as enough correct behavior is sampled in their training set, the model will learn to do the same. And if the traffic law is something that cannot be intuited from surrounding contextual visual information, how are out-of-state drivers expected to follow it? We don't read the statutes of every state we pass through; there's typically a sign or markings that indicate the new behavior is expected, or you do as the drivers around you are doing.
 
I think it's the opposite, actually.

Writing and maintaining different sets of behavior in hand-written code is a task of enormous time and complexity. Moving to an end-to-end network turns that complexity problem into a scale problem. If there are traffic signs or lane configurations that only occur in one state, or even one locality within one state, being able to navigate them like a native human driver from that location is only a matter of sampling enough training data that shows humans successfully and lawfully navigating it.

The nice part about end-to-end is that Tesla doesn't even need to be aware of these region-specific driving laws. As long as enough correct behavior is sampled in their training set, the model will learn to do the same. And if the traffic law is something that cannot be intuited from surrounding contextual visual information, how are out-of-state drivers expected to follow it? We don't read the statutes of every state we pass through; there's typically a sign or markings that indicate the new behavior is expected, or you do as the drivers around you are doing.
I think that's accurate to an extent. It still seems like FSD would require some programming, or at least the ability to read specific traffic signs whose restrictions depend on conditions or times. These come in all different formats, but some say things like no right turn at night, or no right turn during the hours of x to x. I don't think just processing driver behavior at a global level will ever satisfy those specialized requirements.
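As a rough illustration of why these conditional signs need explicit handling rather than pure behavior imitation, here's a toy Python sketch (all names and the data structure are invented for illustration, not anything Tesla actually uses) that evaluates a time-restricted no-right-on-red sign:

```python
from datetime import time

# Toy model of a conditional sign: "No right turn on red between
# `start` and `end`". The point is that the restriction depends on the
# current time, not on anything visible in the driving scene itself.

def right_on_red_allowed(now: time, start: time, end: time) -> bool:
    """Return True if a right on red is allowed at `now`, given a sign
    that bans it between `start` and `end`."""
    if start <= end:
        banned = start <= now <= end
    else:  # restriction wraps past midnight, e.g. 10pm-6am
        banned = now >= start or now <= end
    return not banned

# Inside the banned window the turn is disallowed...
print(right_on_red_allowed(time(12, 0), time(7, 0), time(19, 0)))  # False
# ...and outside it, it's allowed again.
print(right_on_red_allowed(time(22, 0), time(7, 0), time(19, 0)))  # True
```

Imitating aggregate driver behavior can't recover this rule, because the same intersection shows both behaviors depending on the clock.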
 
HW3 may be CPU limited, but with HW4 and future chips it will be easier to break apart the NNs into modules, so training regional traffic laws will be easier, as it won't affect the other trained nets.
That's no longer end-to-end if they don't affect other nets. Even in the modular version, to be "end-to-end" the end output affects every module upstream. An approach that is not end-to-end means modules can be 100% independent (completely ignoring results that happen after the output of the given module).
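To make that distinction concrete, here's a toy sketch (plain Python with the chain rule done by hand; a two-weight caricature, nothing to do with Tesla's actual architecture) of why an upstream module in an end-to-end system can't be fully independent:

```python
# Two-module pipeline: perception (h = w1 * x) feeding planning
# (y = w2 * h), trained on squared error. In end-to-end training the
# gradient for the upstream weight w1 depends on the downstream
# weight w2 -- change planning, and perception's update changes too.

def end_to_end_grads(w1, w2, x, target):
    h = w1 * x            # perception output
    y = w2 * h            # planning output
    err = y - target      # loss = err**2
    grad_w2 = 2 * err * h         # d(loss)/d(w2)
    grad_w1 = 2 * err * w2 * x    # d(loss)/d(w1): flows through w2
    return grad_w1, grad_w2

# In a modular (non-end-to-end) setup, perception is instead trained
# against its own intermediate target, ignoring w2 entirely:
def modular_grad_w1(w1, x, h_target):
    h = w1 * x
    return 2 * (h - h_target) * x  # independent of the planning module
```

A module whose training signal never includes the downstream output is exactly the "100% independent" case described above.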
 
Anecdotal evidence would suggest that FSDS is trained on California traffic laws, and will operate that way in all states. Doubtful we'll see regional training models for different state laws for some time.
They have mentioned that they are aware that laws are different in some places, and that they will have to localize those kinds of things. (I think they mentioned crossing a solid line in China is very bad.)

I wonder how they will handle state-to-state differences. Customized per state? Train it on what is allowable in all states? Not worry about it because L4 systems can't currently be given a ticket? :p
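The "train it on what is allowable in all states" option amounts to taking the intersection of each state's permitted maneuvers, i.e. driving to the most conservative common denominator. A toy sketch (maneuver names and the per-state rules are invented for illustration):

```python
# Hypothetical per-state sets of permitted maneuvers. Training only on
# behavior that is legal everywhere means taking the set intersection,
# which quickly shrinks toward the most conservative driving style.
state_rules = {
    "CA": {"right_on_red", "lane_change_in_intersection", "u_turn_at_signal"},
    "NY": {"u_turn_at_signal"},                    # e.g. NYC bans right on red
    "TX": {"right_on_red", "u_turn_at_signal"},
}

allowed_everywhere = set.intersection(*state_rules.values())
print(allowed_everywhere)  # {'u_turn_at_signal'}
```

The cost of that approach is visible even in the toy: one restrictive jurisdiction removes a maneuver for every driver everywhere.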
 
I think it's the opposite, actually.

Writing and maintaining different sets of behavior in hand-written code is a task of enormous time and complexity. Moving to an end-to-end network turns that complexity problem into a scale problem. If there are traffic signs or lane configurations that only occur in one state, or even one locality within one state, being able to navigate them like a native human driver from that location is only a matter of sampling enough training data that shows humans successfully and lawfully navigating it.

The nice part about end-to-end is that Tesla doesn't even need to be aware of these region-specific driving laws. As long as enough correct behavior is sampled in their training set, the model will learn to do the same. And if the traffic law is something that cannot be intuited from surrounding contextual visual information, how are out-of-state drivers expected to follow it? We don't read the statutes of every state we pass through; there's typically a sign or markings that indicate the new behavior is expected, or you do as the drivers around you are doing.
While I agree with the general idea of NN training, in this case it's going to be difficult. Current NNs, including LLMs, require samples in the millions before getting things right. If a small town or county passes a traffic law that's different from the state's, there just wouldn't be enough samples to train on. There would need to be some guardrails outside the NN, or an NN that can be fed simulations just for traffic laws, while leaving the primary driving NN alone.
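A "guardrail outside the NN" could be as simple as a rule table keyed by jurisdiction that vetoes maneuvers after the planner runs. A minimal sketch, with the rule names, jurisdictions, and fallback all invented for illustration:

```python
# Hypothetical post-planner guardrail: the driving NN proposes a
# maneuver, and a small hand-maintained rule table for the current
# jurisdiction can veto it. A rare local law then needs one rule
# entry, not millions of training samples.
LOCAL_BANS = {
    "nyc": {"right_on_red"},
    "smalltown_or": {"u_turn_at_signal"},
}

def filter_maneuver(proposed: str, jurisdiction: str) -> str:
    """Return the proposed maneuver, or a safe fallback if it's banned
    in the current jurisdiction."""
    if proposed in LOCAL_BANS.get(jurisdiction, set()):
        return "wait"  # conservative fallback
    return proposed

print(filter_maneuver("right_on_red", "nyc"))     # wait
print(filter_maneuver("right_on_red", "austin"))  # right_on_red
```

The trade-off is that the guardrail itself is hand-written code again, which is exactly the maintenance burden end-to-end was supposed to remove.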
 
That's no longer end-to-end if they don't affect other nets. Even in the modular version, to be "end-to-end" the end output affects every module upstream. An approach that is not end-to-end means modules can be 100% independent (completely ignoring results that happen after the output of the given module).
If we want to take your interpretation of "end-to-end", then you're correct - Tesla would need to remove the term and call it something else. But honestly, who cares what it's called. Does it work? If it does, it can be called "Magic Fairy Dust" for all I care. :)
 
We don't read the statutes of every state we pass through; there's typically a sign or markings that indicate the new behavior is expected, or you do as the drivers around you are doing.
I'm not sure that is true. For states where you can't change lanes in an intersection or across a solid white line, do they really have a sign up at every intersection and solid white line stating that? (Or at every entrance to the state.)

I know in my driving I have never seen driving laws/rules posted upon entering a state/city. (Other than something like Oregon, which has signs up saying insurance is required - but nothing about actual driving behavior.) What laws/rules would be considered the default that wouldn't require putting signs up?
 
I'm not sure that is true. For states where you can't change lanes in an intersection or across a solid white line, do they really have a sign up at every intersection and solid white line stating that?
More importantly, I've read that several large cities are going to ban right turn on red. But I'm not reading that they're going to put up "No turn on red" signs at every intersection in the city, it's just going to be something the drivers have to know.
 
More importantly, I've read that several large cities are going to ban right turn on red. But I'm not reading that they're going to put up "No turn on red" signs at every intersection in the city, it's just going to be something the drivers have to know.
I recall seeing some cities with "No Right Turn on Red between xx to yy hours" signs. This will also be a challenge for FSD to track.
 
More importantly, I've read that several large cities are going to ban right turn on red. But I'm not reading that they're going to put up "No turn on red" signs at every intersection in the city, it's just going to be something the drivers have to know.

Something like that could be encoded in the dynamic map metadata. I think Tesla is still using it for things like no-right-on-red.

I think of the map metadata and the navigation directions as the "prompt" of FSD. It doesn't know the navigation directions intuitively, but its behavior is conditioned on the metadata.
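A crude way to picture "map metadata as a prompt": the metadata for the upcoming intersection is just extra conditioning input handed to the planner alongside the camera-derived scene. A toy sketch with invented field names (not Tesla's actual data model):

```python
# Invented example of conditioning on map metadata: the planner sees
# both the visual scene and a metadata dict for the intersection, the
# way a language model's output is conditioned on its prompt.
def plan_turn(scene: dict, map_meta: dict) -> str:
    if scene["light"] == "red" and scene["intent"] == "right_turn":
        # The scene alone suggests a right on red is fine; the
        # metadata "prompt" can override that.
        if map_meta.get("no_right_on_red", False):
            return "wait"
        return "right_on_red"
    return "proceed"

scene = {"light": "red", "intent": "right_turn"}
print(plan_turn(scene, {"no_right_on_red": True}))  # wait
print(plan_turn(scene, {}))                         # right_on_red
```

Identical camera input, different behavior - the difference comes entirely from the metadata, which is what makes it prompt-like.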