I think it's the opposite, actually.

Writing and maintaining different sets of behavior in hand-written code is a task of enormous time and complexity. Moving to an end-to-end network turns that complexity problem into a scale problem. If there are traffic signs or lane configurations that only occur in one state, or even one locality within one state, being able to navigate them like a native human driver from that location is only a matter of sampling enough training data that shows humans successfully and lawfully navigating them.
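To make the "complexity problem into a scale problem" point concrete, here's a toy sketch (my own illustration, not anything Tesla has described) of how a training pipeline could rebalance clips by region so that rare local behaviors still show up often enough:

```python
import random
from collections import defaultdict

def rebalance_clips_by_region(clips, target_per_region, rng=random):
    """Toy rebalancing: make every region contribute roughly the same number
    of clips to a training epoch, oversampling small regions."""
    by_region = defaultdict(list)
    for clip in clips:
        by_region[clip["region"]].append(clip)

    balanced = []
    for region_clips in by_region.values():
        if len(region_clips) >= target_per_region:
            balanced.extend(rng.sample(region_clips, target_per_region))
        else:
            # Not enough real clips from this region: sample with replacement.
            balanced.extend(rng.choices(region_clips, k=target_per_region))
    return balanced

# Example: 1,000 California clips vs. 30 from one Massachusetts town.
clips = [{"id": i, "region": "CA"} for i in range(1000)] \
      + [{"id": i, "region": "MA"} for i in range(30)]
epoch = rebalance_clips_by_region(clips, target_per_region=200)
```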

The nice part about end-to-end is that Tesla doesn't even need to be aware of these region-specific driving laws. As long as enough correct behavior is sampled in their training set, the model will learn to do the same. And if the traffic law is something that cannot be intuited from surrounding contextual visual information, how are out-of-state drivers expected to follow it? We don't read the statutes of every state we pass through; there's typically a sign or markings that indicate the new behavior is expected, or you do as the drivers around you are doing.
The out-of-state drivers simply don't follow it! As others pointed out, I don't buy that it's practical to figure out region-specific road laws just from observing driving. One factor not mentioned: even in-state drivers don't necessarily follow the law!

That means to handle it with training, you have to explicitly prune out those examples (which I imagine will involve a human) and somehow train the model to follow those rules only in the states where they apply. I don't see how this would be easier than, for example, a hand-coded rule in the lane-change function where they add more conditions under which the car shouldn't change lanes (in this example: being in a state where it isn't allowed, being in an intersection, or needing to cross a solid white line).
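For what it's worth, the hand-coded version I'm picturing would look roughly like this toy sketch (the state list and field names are invented for illustration, not Tesla's actual code):

```python
from dataclasses import dataclass

# Hypothetical context the planner would fill in each cycle;
# field names are invented for this sketch.
@dataclass
class LaneChangeContext:
    state: str                        # two-letter state code from the map/GPS
    in_intersection: bool
    must_cross_solid_white_line: bool

# Example of a state-specific rule table that grows over time (values made up).
NO_CHANGE_IN_INTERSECTION_STATES = {"NJ", "OH"}

def lane_change_allowed(ctx: LaneChangeContext) -> bool:
    """Each new regional exception becomes one more hand-written condition."""
    if ctx.in_intersection and ctx.state in NO_CHANGE_IN_INTERSECTION_STATES:
        return False
    if ctx.must_cross_solid_white_line:
        return False
    return True

# e.g. lane_change_allowed(LaneChangeContext("NJ", True, False)) -> False
```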

The other approach is lowest common denominator: train behavior that satisfies all states. However, Tesla doesn't seem willing to do that at the moment, instead allowing the behavior to be heavily California-biased.

The issue here is that semi-autonomous and autonomous vehicles are held to different standards. Because it is technically possible to program out all illegal behavior, regulators will expect systems to do so. The prime example is the California rolling stop: if trained on road behavior alone, it is quite natural for the car to do rolling stops, especially when no one else is at the intersection.
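Here's the kind of explicit guardrail I mean, as a toy sketch of my own (function and field names are invented): clamp whatever the learned planner proposes so the car actually reaches zero speed at a stop line.

```python
def enforce_full_stop(planned_speeds_mph, dist_to_stop_line_m):
    """Toy post-NN check: if a stop sign/line is ahead and the planned speed
    profile never reaches zero, force a true stop into the trajectory."""
    if dist_to_stop_line_m is None:          # no stop control ahead
        return planned_speeds_mph
    slowest = min(planned_speeds_mph)
    if slowest > 0.0:                        # planner proposed a rolling stop
        speeds = list(planned_speeds_mph)
        speeds[speeds.index(slowest)] = 0.0  # pin the slowest point to 0 mph
        return speeds
    return planned_speeds_mph

# Example: a 3 mph "California stop" gets clamped to a full stop.
print(enforce_full_stop([25, 12, 3, 8, 20], dist_to_stop_line_m=15.0))
# -> [25, 12, 0.0, 8, 20]
```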
 
Anecdotal evidence would suggest that FSDS is trained on California traffic laws, and will operate that way in all states. Doubtful we'll see regional training models for different state laws for some time.
Maybe yes or maybe no. In Mass, a fairly new law requires cars to give cyclists at least 4 feet of clearance. Whether it's a coincidence or not I don't know, but FSD definitely gives cyclists more space now.
 
While I agree with the general idea of NN training, in this case it's going to be difficult. Current NNs, including LLMs, require samples in the millions before getting things right. If a small town or county passes a traffic law that differs from the state's, there just wouldn't be enough samples to train on. There would need to be some guardrails outside the NN, or an NN that can be fed simulations just for traffic laws, while leaving the primary driving NN alone.
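What I'm picturing for the "guardrails outside the NN" part is a small, hand-maintained table of local overrides that a checker sitting after the NN consults, so a rule with only a handful of real-world samples never has to be learned at all. Entirely my own sketch; the jurisdictions and values are made up:

```python
# Hypothetical local-law overrides, keyed by (state, city/county).
# All values here are invented purely to illustrate the idea.
DEFAULT_RULES = {"min_cyclist_gap_ft": 3.0, "right_on_red": True}

LOCAL_OVERRIDES = {
    ("MA", "Boston"):   {"min_cyclist_gap_ft": 4.0},
    ("NY", "New York"): {"right_on_red": False},
}

def effective_rules(state, city):
    """Merge statewide defaults with any town- or county-specific overrides."""
    rules = dict(DEFAULT_RULES)
    rules.update(LOCAL_OVERRIDES.get((state, city), {}))
    return rules

# A checker downstream of the driving NN queries the table instead of hoping
# the net saw enough examples from that one town.
print(effective_rules("NY", "New York"))
```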
Do we know for certain that FSD is using an LLM? I thought that was just for generative AI, and FSD wouldn't be considered generative AI.

Correct me if I'm wrong.
 
Anecdotal evidence would suggest that FSDS is trained on California traffic laws, and will operate that way in all states. Doubtful we'll see regional training models for different state laws for some time.
Ashok tangentially addressed this in the most recent quarterly conference call. In particular he was referring to training FSD differently for China. He actually made it sound like it wasn't that difficult to adjust for regional differences.

So you may be right that state differences won't be accounted for now (they'll probably use the most conservative behavior, or consider the differences small enough that they don't care), but it sounds like having NN models for the jurisdictions where the car is operating wouldn't be too difficult to implement.
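If that's the direction, the selection step itself could be as simple as something like this (pure speculation on my part; the checkpoint names and lookup are invented):

```python
# Speculative sketch: load a regionally fine-tuned policy based on where the
# car is operating. Checkpoint names and lookup are invented for the example.
REGIONAL_CHECKPOINTS = {
    "US": "fsd_policy_us.ckpt",
    "CN": "fsd_policy_cn.ckpt",
    "EU": "fsd_policy_eu.ckpt",
}
DEFAULT_CHECKPOINT = "fsd_policy_us.ckpt"

def select_policy(country_code):
    """Return the checkpoint the car should load for this jurisdiction."""
    return REGIONAL_CHECKPOINTS.get(country_code, DEFAULT_CHECKPOINT)

# A car geolocated in China would load the China-tuned policy:
assert select_policy("CN") == "fsd_policy_cn.ckpt"
```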

The problem with assuming local laws are known to drivers is that so many drivers are from different states or cities. You can't have differing rules from city to city and just expect drivers to know the nuanced differences depending on the city they're in.
 
I am just of the opinion that L3 or higher cannot be done with current hardware, and I don't think they will attempt L3 with current hardware. Even traffic jam assist, which seems possible (maybe, sometimes, and certainly not with current software), just doesn't seem worth bothering with for Tesla, even though it would be pretty nice for many owners.

L3 is certainly possible. It just depends how L3 is defined. The scope of L3 is determined by the car company, so just as Mercedes came up with a ridiculously narrow L3 scope (highway only, with other restrictive caveats), Tesla can do the same and define what conditions warrant a hand-off to the driver. Clearly it could be much more robust than the Mercedes solution. Whether what Tesla defines would achieve market success is a different question, but I suspect a reasonably defined L3 scope could drive considerable adoption of FSD. Just not at robotaxi level.
 
L3 is certainly possible. It just depends how L3 is defined.
I suspect Tesla would only settle for an extremely wide L3 ODD (almost all highway and city driving), or else just stick with improving L2 and promising L4 "by the end of the year".
 
L3 is certainly possible. It just depends how L3 is defined.
It has to be reliable enough for Tesla to take liability. Can it result in no accidents 99.99% of the time (so on the order of 10 years between failures; rough math at the end of this post) when used as a tightly restricted Traffic Jam Assist? Probably! But that doesn’t mean it “can be done.”

I very much doubt Tesla would want to get anywhere near that.

My bet is that it will not happen with current hardware; I don't think the reliability required to take on liability is ever going to get there. And even if they could get there with better hardware, what would be the point for Tesla of selling that to customers?
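To put rough numbers on the 99.99% figure above (my own back-of-envelope, and it assumes "99.99%" is per driving hour and a heavy user logs about 1,000 hours a year):

```python
# Back-of-envelope only: assumes "99.99%" means per driving hour and that a
# heavy user logs roughly 1,000 hours behind the wheel per year.
failure_rate_per_hour = 1 - 0.9999            # one failure per 10,000 hours
hours_per_year = 1_000

hours_between_failures = 1 / failure_rate_per_hour
years_between_failures = hours_between_failures / hours_per_year
print(round(years_between_failures, 1))       # 10.0 -> on the order of 10 years
```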
 
L3 is certainly possible. It just depends how L3 is defined.
I have doubts that a limited L3 would necessarily drive more adoption than the current wide city-streets L2, and I'm not sure manufacturers even want wide adoption of L3 (given the extra liability they shoulder for it). Right now all the L3 systems out there ship in extremely low volume, and that is partly by design, but also partly because most people feel the restrictions make them far less attractive.
 
I think it's entirely possible that by v12.[56].x, Tesla could start offering L3 under a limited-but-useful controlled-access highway ODD, with some carve-outs for takeover in exceptional cases (e.g. police directing traffic around an accident scene). We'll see! They may never choose to call anything L3 or L4 along the way anyway, and instead just keep progressively reducing nags and attention requirements in these easier "could've been L3" scenarios until it becomes effectively L3.
 
Do we know for certain that FSD is using an LLM? I thought that was just for generative AI, and FSD wouldn't be considered generative AI.

Correct me if I'm wrong.
No, it's not an LLM, as far as I know. Not sure where you heard that it was. If you're referring to my comment, I said "including LLMs", meaning that Tesla is training the NN on millions of videos, like an LLM, not that it IS an LLM. :)
 
L3 is certainly possible. It just depends how L3 is defined.
Off-topic, but can anyone confirm that Mercedes L3 is even in use in the US yet? I've read they are allowed in California and Nevada, but I can find NO evidence of it being used here at all. The only videos of actual owners using it that I can find are in Europe. In the US, it's all demos for various magazines and tech publishers.
 
Off-topic, but can anyone confirm that Mercedes L3 is even in use in the US yet? I've read they are allowed in California and Nevada, but I can find NO evidence of it being used here at all. The only videos of actual owners using it that I can find are in Europe. In the US, it's all demos for various magazines and tech publishers.

When I checked back in April, I couldn't find a single example of an owner using it in the US either. From my search, the hardware only hit some EQSs at the end of last year, but there was no timeline for when the software would be ready. The software subscription costs $2,500 a year.