How does FSD beta get smarter?

I'm sure this has been explained before, but there are so many long threads that I thought I'd just straight up ask for a clear explanation.

My understanding is that when you file a bug report on a non-beta FSD Tesla, Tesla never reviews this information, and thus the car's ability to navigate on Autopilot doesn't improve. It's never been clear to me why Tesla would include a bug-filing feature but not use it to improve the system. Given that there are an infinite number of situations a car can be thrust into, how do they decide what fixes to make with each iteration of Autopilot? For example, I've seen at least a dozen updates since I've owned my Model 3, but it still can't go down a winding road without the driver taking over. I've filed many bug reports and told Tesla tech support, but nothing has changed.

Now I'm seeing the FSD beta videos, and it appears people are filing bug reports and Tesla is putting out very quick updates, fixing very big issues in a matter of days. Even with a very small number of beta testers, there must be thousands of driver bug reports coming in. How do they decide what to fix, and how are they doing it so quickly?

Is the car learning that every time a driver has to take over, it must have made an error? Is it self-correcting, so that the next time a driver reaches the same spot, it doesn't repeat the mistake?

Sorry if this is a stupid question, but it just seems like Tesla was ignoring basic issues like handling curvy roads, and yet now they're able to take and implement feedback immediately. People keep mentioning the neural network, but I don't understand how that neural network works. (I assume it's doing what I mentioned above: using driver takeovers as a sign that its decision was faulty.)

And then I've seen mentions of Dojo. What is Dojo and what does it have to do with the car learning to drive better?

Can anyone give a simple explanation?
 

I think this video does a good job of explaining how it works.

This one is also good:

Thanks, I just watched both of these. First off, it's amazing that there are people intelligent enough to code like this. It is beyond my mental capacity.

I will say that I'm still confused about AP's learning capabilities. In the videos, he mentions that the cars can predict scenarios based on data sets, but of course its predictions can be inaccurate because there are an infinite number of nuanced situations. But then he talks about how a car can, for example, fail to recognize an occluded stop sign. If it does, Tesla can ask the fleet to look for a number of these instances, and then they train the fleet to recognize occluded stop signs. This implies (and he discusses this at the end of the longer video) that when a driver disengages Autopilot, Tesla knows the car didn't act as planned. My confusion is that he mentions the AI can make fixes on the fly without an update, but that's not my experience. And if that were true, they wouldn't need to constantly push new updates to improve performance.

Why, for example, does the car always make the same mistakes on the very curvy Los Angeles road, Laurel Canyon? It regularly crosses the double line into oncoming traffic, drives at high speeds toward the mountain walls, etc. Given that LA has many Teslas, and this road is very well traveled, why hasn't the AI learned to stop acting so erratically? Anyone who engages Autopilot on this road would constantly be taking over, which means Tesla knows this is happening, as does the car's AI. If this can be flagged and fixed on the fly, why hasn't it been? It feels like the car isn't learning anything.

Hopefully these questions make sense...
 
Why, for example, does the car always make the same mistakes on the very curvy Los Angeles road, Laurel Canyon? It regularly crosses the double line into oncoming traffic, drives at high speeds toward the mountain walls, etc. Given that LA has many Teslas, and this road is very well traveled, why hasn't the AI learned to stop acting so erratically? Anyone who engages Autopilot on this road would constantly be taking over, which means Tesla knows this is happening, as does the car's AI. If this can be flagged and fixed on the fly, why hasn't it been? It feels like the car isn't learning anything.

The AI in the car doesn't learn. All the AI training happens offline at Tesla, which then sends the results down as part of the software update process. That's why the car always makes the same mistake in the same place when you drive. There are AI systems out there that learn on the fly, but these generally live in data centers, since the learning process is very processor-intensive (think banks of servers).

And in fact, it's not really a good idea to let the car learn all the time even if it could be done, since it can also learn bad things. For example, if the car simply learned by observing human drivers, it would always drive at 10 mph over the speed limit and speed up when it saw a traffic light changing to yellow :)
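To make that concrete, here's a toy sketch of the split (every name, field, and threshold below is invented by me, not Tesla's actual code). The car only runs inference on frozen weights; the most it can do is flag interesting frames for upload, like the occluded-stop-sign campaign mentioned earlier. The learning, and the curation that keeps bad human habits out of the training set, happens back at the data center, and the car only "improves" when new weights arrive in a software update:

```python
# Toy sketch, not Tesla's code: all names and fields here are invented.
from dataclasses import dataclass

@dataclass
class Frame:
    stop_sign_in_map: bool    # map data says a stop sign should be here
    net_saw_stop_sign: bool   # what the frozen on-car network reported
    driver_speed: float       # mph, what the human actually drove
    speed_limit: float        # mph

def on_car(frame: Frame, upload_queue: list) -> None:
    """Inference only: the weights never change while driving. If the frozen
    net disagrees with the map, flag the frame (a 'campaign trigger')."""
    if frame.stop_sign_in_map and not frame.net_saw_stop_sign:
        upload_queue.append(frame)

def offline_curation(uploads: list) -> list:
    """Back at the data center: keep only examples worth imitating, so the
    network doesn't learn to speed just because humans do."""
    return [f for f in uploads if f.driver_speed <= f.speed_limit]

queue: list = []
on_car(Frame(True, False, 48.0, 35.0), queue)  # occluded sign, speeding driver
on_car(Frame(True, False, 33.0, 35.0), queue)  # occluded sign, lawful driver
print(len(offline_curation(queue)))            # -> 1 example survives curation
```

Retraining on the curated set and validating the result takes banks of GPUs, which is exactly why none of it can run in the car.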
 
The AI in the car doesn't learn. All the AI training happens offline at Tesla, which then sends the results down as part of the software update process. That's why the car always makes the same mistake in the same place when you drive. There are AI systems out there that learn on the fly, but these generally live in data centers, since the learning process is very processor-intensive (think banks of servers).

And in fact, it's not really a good idea to let the car learn all the time even if it could be done, since it can also learn bad things. For example, if the car simply learned by observing human drivers, it would always drive at 10 mph over the speed limit and speed up when it saw a traffic light changing to yellow :)

That makes sense. But even if it's happening offline, why hasn't it been fixed, given what I can only assume are tens of thousands of data points on that exact road? And I experience the same behavior on all canyon roads, so I imagine any Tesla on earth trying to traverse a winding canyon has this issue. But given that LA is a major city, and there is so much data, wouldn't it be simple to eradicate this problem?
 
Can anyone give a simple explanation?
Tesla has software that fixes your curvy-road issue, but it's not in your version. Tesla has been focused on getting the FSD software ready for wide release, so they haven't really been updating existing Autopilot, especially for non-highway use; the main exception is the traffic-light feature, which was added to collect data for the FSD software.
 
That makes sense. But even if it's happening offline, why hasn't it been fixed, given what I can only assume are tens of thousands of data points on that exact road? And I experience the same behavior on all canyon roads, so I imagine any Tesla on earth trying to traverse a winding canyon has this issue. But given that LA is a major city, and there is so much data, wouldn't it be simple to eradicate this problem?

Because, despite rumors to the contrary, very few of the AP incidents you see are reported back to Tesla. Yes, they see a representative sample, but probably a single-digit percentage. And because they have been focused on a major redesign of the AP foundations for 18+ months, interim fixes have been minimal at best.

But mostly, because AP is designed to work on divided highways, not canyon roads, as the manual states quite clearly.

However, the good news is that the FSD beta is the first public reveal of that 18+ months of work, and it is explicitly designed to work on the kind of roads you are talking about.
 
Tesla has software that fixes your curvy-road issue, but it's not in your version. Tesla has been focused on getting the FSD software ready for wide release, so they haven't really been updating existing Autopilot, especially for non-highway use; the main exception is the traffic-light feature, which was added to collect data for the FSD software.

How do you know they have a fix for curvy roads but haven't released it?
 
Because, despite rumors to the contrary, very few of the AP incidents you see are reported back to Tesla. Yes, they see a representative sample, but probably a single-digit percentage. And because they have been focused on a major redesign of the AP foundations for 18+ months, interim fixes have been minimal at best.

But mostly, because AP is designed to work on divided highways, not canyon roads, as the manual states quite clearly.

However, the good news is that the FSD beta is the first public reveal of that 18+ months of work, and it is explicitly designed to work on the kind of roads you are talking about.

That's good to know. But unfortunately it sounds like it could be months or longer before we see the FSD beta, especially in municipalities that will question having a fleet of cars beta-testing autonomy. My guess is we won't see the FSD beta until summer of next year in Los Angeles.
 
That's good to know. But unfortunately it sounds like it could be months or longer before we see the FSD beta, especially in municipalities that will question having a fleet of cars beta-testing autonomy. My guess is we won't see the FSD beta until summer of next year in Los Angeles.

“We have, hopefully, a wide release by the end of this year.”
-Elon, on the Q3 conference call

The usual Elon-time disclaimer applies, of course, but they haven’t been taking that long to go from beta to release. Smart Summon took about 6 months, IIRC.
 
How do you know they have a fix for curvy roads but haven't released it?
Here are a couple of examples of FSD beta making sharp turns. The first one is just following the road, with a suggested yellow speed of 15 mph; the car slowed down to 11 mph to complete the turn (perhaps also slowing for glare/shadows and poorly marked lines): Tesla FSD BETA handling curvy & bumpy back road @ 0:20

Another video shows FSD beta slowing down to 10 mph for a sharper-than-90° right turn at an intersection: Tesla FSD Beta - New version comparison and more - Video #5 @ 5:20

I am guessing FSD beta would handle your curvy roads just fine: judging from my experience using current Autopilot on curvy roads, these examples are even tougher, and the software has no issues in these cases.
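Those numbers are about what you'd expect from a plain lateral-acceleration limit, incidentally. Here's the back-of-the-envelope version (my own guess at the heuristic, not anything Tesla has published):

```python
# Comfort-limited curve speed: lateral acceleration a = v^2 / r,
# so v_max = sqrt(a_max * r). The values below are my assumptions.
import math

MPH_PER_MPS = 2.23694  # metres/second to mph

def max_curve_speed_mph(radius_m: float, a_lat_max: float = 2.0) -> float:
    """a_lat_max of ~2 m/s^2 is a typical 'comfortable' bound (my guess)."""
    return math.sqrt(a_lat_max * radius_m) * MPH_PER_MPS

print(round(max_curve_speed_mph(10.0)))  # tight ~10 m turn  -> about 10 mph
print(round(max_curve_speed_mph(40.0)))  # gentler 40 m bend -> about 20 mph
```

So slowing to 10-11 mph for a sharp turn is just the physics of keeping the ride comfortable, whatever the planner actually computes.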
 
“We have, hopefully, a wide release by the end of this year.”
-Elon, on the Q3 conference call

The usual Elon-time disclaimer applies, of course, but they haven’t been taking that long to go from beta to release. Smart Summon took about 6 months, IIRC.

If Smart Summon took six months, then wouldn't FSD, which carries much higher risk, take many more months to release fleetwide? Even on the same timetable as Smart Summon, it wouldn't be released until the end of March. And yet there were supposed to be robotaxis already!
 
If Smart Summon took six months, then wouldn't FSD, which carries much higher risk, take many more months to release fleetwide? Even on the same timetable as Smart Summon, it wouldn't be released until the end of March. And yet there were supposed to be robotaxis already!

No one knows how long it will take. The new software isn't incremental; it's a complete rewrite using a much more sophisticated approach to AI recognition, which will eventually work its way back into NoA and Smart Summon. One reason Smart Summon took so long is that the old stack was primitive compared to the new one, which is able to build a much more complete map of its surroundings.
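One toy way to picture the difference (my framing, not Tesla's code): the old stack reasoned camera by camera, while the rewrite fuses all the cameras into a single model of the surroundings before anything downstream makes a decision:

```python
# Toy contrast between per-camera reasoning and a fused surround view.
# Entirely illustrative: real perception fuses learned features, not strings.

def old_stack(per_camera: dict) -> list:
    # Each camera answers on its own; nothing reconciles overlapping views.
    return [obj for dets in per_camera.values() for obj in dets]

def new_stack(per_camera: dict) -> set:
    # Fuse every camera into one map, de-duplicating objects that two
    # cameras both see, so planning works from a single coherent world.
    fused = set()
    for dets in per_camera.values():
        fused.update(dets)
    return fused

cams = {"front": {"car@10m"}, "left_repeater": {"car@10m", "curb@2m"}}
print(sorted(old_stack(cams)))  # ['car@10m', 'car@10m', 'curb@2m'] (doubled)
print(sorted(new_stack(cams)))  # ['car@10m', 'curb@2m'] (one coherent view)
```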
 
No one knows how long it will take. The new software isn't incremental; it's a complete rewrite using a much more sophisticated approach to AI recognition, which will eventually work its way back into NoA and Smart Summon. One reason Smart Summon took so long is that the old stack was primitive compared to the new one, which is able to build a much more complete map of its surroundings.

Ironically, Smart Summon took a long time but still sucks. I have used it once, ever. It drives way too slowly, stops on the wrong side of the road, and usually pisses off other drivers in the parking lot. It feels like a parlor trick to show friends.
 
Karpathy explained in a talk that Autopilot is actually the combined result of over 100 separate tasks running in real time, each of which Tesla can tune separately or leave alone in a software update. Even the FSD beta people could still have a Smart Summon that uses the old perception network. I'm not sure if they will update Smart Summon with better perception accuracy or replace it entirely with the FSD behavior... though I'm sure that's an oversimplification, and some engineer somewhere is agonizing over the gory details of integrating them.
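For the curious, the "one shared backbone, many heads" structure he describes looks very roughly like this in PyTorch (the toy layer sizes and task names are mine; the real network and task list are vastly bigger):

```python
# Toy multi-task network: one shared backbone, one small head per task.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(          # shared features for all tasks
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.heads = nn.ModuleDict({            # one output head per task
            "stop_signs": nn.Linear(16, 2),
            "lane_lines": nn.Linear(16, 4),
            "cut_ins":    nn.Linear(16, 2),
        })

    def forward(self, x):
        feats = self.backbone(x)
        return {name: head(feats) for name, head in self.heads.items()}

net = MultiTaskNet()
# "Tune one task, leave the rest alone": freeze everything but one head.
for p in net.parameters():
    p.requires_grad = False
for p in net.heads["stop_signs"].parameters():
    p.requires_grad = True

out = net(torch.randn(1, 3, 64, 64))
print({k: tuple(v.shape) for k, v in out.items()})
```

The point of the freeze at the end is that one task can be retrained and shipped without disturbing the other hundred-odd heads.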
 
Karpathy explained in a talk that Autopilot is actually the combined result of over 100 separate tasks running in real time, each of which Tesla can tune separately or leave alone in a software update. Even the FSD beta people could still have a Smart Summon that uses the old perception network. I'm not sure if they will update Smart Summon with better perception accuracy or replace it entirely with the FSD behavior... though I'm sure that's an oversimplification, and some engineer somewhere is agonizing over the gory details of integrating them.

Understand that there are many different tiers here. At the bottom are the basic visualization layer(s) that create the world view for the car. Above those are prediction layers that predict behavior, and above those are decision layers that drive the car's behavior and responses. (This is all a horrible simplification, of course.) The thing that makes the FSD beta exciting is that the bottom-most visualization layer has been completely reworked and gives much more detailed and accurate information (thank HW3 for making the necessary processing power available for that).

So right now the car is running two distinct "stacks": the old stack for NoA and Smart Summon, and the new stack for City Streets (aka FSD). I'm sure that as FSD stabilizes, the Tesla engineers will migrate NoA and Smart Summon to the new stack, since (a) it will work far better, (b) they only want to support one stack, not two, and (c) it will give a more fully integrated feel to AP overall, making the move from city streets to freeways seamless.
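In toy code, the tiering might be pictured like this (the layer names and the two-stack switch are my shorthand, continuing the horrible simplification):

```python
# Horribly simplified tiered pipeline: perceive -> predict -> decide.
from dataclasses import dataclass, field

@dataclass
class WorldView:
    objects: list = field(default_factory=list)
    detail: str = "lane_centric"   # stand-in for how rich the world view is

def perceive(sensors: list, new_stack: bool) -> WorldView:
    # Bottom tier. The FSD-beta rewrite is (reportedly) a far richer version
    # of this layer; here the richness is just a tag to keep the toy tiny.
    return WorldView(list(sensors),
                     "dense_world" if new_stack else "lane_centric")

def predict(world: WorldView) -> list:
    # Middle tier: guess what each object will do next (stub).
    return [(obj, "continues_straight") for obj in world.objects]

def decide(predictions: list) -> str:
    # Top tier: choose the car's behavior given the predictions (stub).
    ok = all(p == "continues_straight" for _, p in predictions)
    return "proceed" if ok else "yield"

def drive_tick(sensors: list, mode: str) -> str:
    # Two stacks today: old one for NoA/Smart Summon, new for City Streets.
    world = perceive(sensors, new_stack=(mode == "city_streets"))
    return decide(predict(world))

print(drive_tick(["car_ahead"], mode="city_streets"))  # -> proceed
```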
 
I'm sure that as FSD stabilizes, the Tesla engineers will migrate NoA and Smart Summon to the new stack
There have been various videos of the FSD beta driving fine in parking lots, much better and faster than current Smart Summon. I would guess there are various Smart Summon regulations or restrictions (e.g., requiring a continuous button press, plus distance and speed limits) because the driver's seat can be empty, and Smart Summon even with the FSD beta is similarly slow right now.

Tesla will probably monitor FSD beta performance in parking lots to prove safety to regulators. One would think Smart Summon (and reverse summon / autopark) might be easier to get approved for use in private lots than on public roads. I suppose technically, if it were approved, Autopilot would be SAE Level 4 restricted to parking lots.
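If I had to guess at how those Smart Summon safeguards are enforced, it's something as blunt as the check below (the ~65 m operating radius matches Tesla's documented Smart Summon range; the speed cap and the code itself are my invention):

```python
# Hypothetical Summon safeguard check: all names and numbers are my guesses,
# except the ~65 m radius, which is Tesla's documented Smart Summon range.
def summon_may_move(holding_button: bool, dist_to_phone_m: float,
                    speed_mph: float) -> bool:
    MAX_RADIUS_M = 65.0    # halt if the car strays too far from the operator
    MAX_SPEED_MPH = 5.0    # parking-lot crawl (my assumption)
    return (holding_button
            and dist_to_phone_m <= MAX_RADIUS_M
            and speed_mph <= MAX_SPEED_MPH)

print(summon_may_move(True, 40.0, 4.0))   # -> True: keep creeping forward
print(summon_may_move(False, 40.0, 4.0))  # -> False: button released, stop
```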