Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
The next big milestone for FSD is version 11. It is a significant upgrade, with fundamental changes to several parts of the FSD stack, including a totally new way to train the perception NN.

From AI day and Lex Fridman interview we have a good sense of what might be included.

- Object permanence both temporal and spatial
- Moving from “bag of points” to objects in NN
- Creating a 3D vector representation of the environment all in NN
- Planner optimization using NN / Monte Carlo Tree Search (MCTS)
- Change from processed images to “photon count” / raw image
- Change from single image perception to surround video
- Merging of city, highway and parking lot stacks a.k.a. Single Stack
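To make the planner item above more concrete: MCTS explores candidate maneuver sequences by simulating rollouts and backing the scores up a search tree. Here is a toy sketch in Python. The lane model, rewards, horizon, and exploration constant are all invented for illustration; this is just the generic MCTS pattern, not Tesla's actual planner.

```python
import math
import random

# Toy MCTS for maneuver selection. The "world" is a lane index 0..2 and the
# goal is lane 0; every number here is made up purely for illustration.

ACTIONS = (-1, 0, +1)  # shift left, keep lane, shift right

def step(lane, action):
    """Apply a lane-change action, clamped to the available lanes."""
    return max(0, min(2, lane + action))

def reward(lane):
    """Reward 1 for being in the desired 'go straight' lane (lane 0)."""
    return 1.0 if lane == 0 else 0.0

class Node:
    def __init__(self, lane, parent=None):
        self.lane, self.parent = lane, parent
        self.children = {}          # action -> child Node
        self.visits, self.value = 0, 0.0

def ucb(child, parent_visits, c=1.4):
    """Upper confidence bound used to pick which child to descend into."""
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(
        math.log(parent_visits) / child.visits)

def mcts(root_lane, iterations=500, horizon=4):
    """Return the first action with the most visits after the search."""
    root = Node(root_lane)
    for _ in range(iterations):
        node, depth = root, 0
        # Selection + expansion: descend until an untried action is found.
        while depth < horizon:
            untried = [a for a in ACTIONS if a not in node.children]
            if untried:
                a = random.choice(untried)
                node.children[a] = Node(step(node.lane, a), parent=node)
                node, depth = node.children[a], depth + 1
                break
            a = max(node.children,
                    key=lambda act: ucb(node.children[act], node.visits))
            node, depth = node.children[a], depth + 1
        # Rollout: random actions for the rest of the horizon.
        lane, total = node.lane, reward(node.lane)
        for _ in range(horizon - depth):
            lane = step(lane, random.choice(ACTIONS))
            total += reward(lane)
        # Backpropagation: push the rollout score up to the root.
        while node is not None:
            node.visits += 1
            node.value += total
            node = node.parent
    return max(root.children, key=lambda act: root.children[act].visits)
```

Starting from the rightmost lane, the search converges on shifting left toward the goal lane. The real planner presumably searches over continuous trajectories with a learned value function, but the select/rollout/backup loop is the same idea.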

Lex Fridman's interview of Elon, starting with FSD-related topics.


Here is a detailed explanation of Beta 11 in "layman's language" by James Douma, in an interview recorded after the Lex podcast.


Here is the AI Day explanation in 4 parts.




Here is a useful blog post asking Tesla a few questions about AI Day. The useful part is the comparison of Tesla's methods with Waymo's and others' (detailed papers linked).

 
And how well did it perform the next day? Or the next week? Or month? Humans are very good at finding apparent patterns in random data. There is no reason to believe, based on the statistically insignificant sample set, that the changes in behavior are from “learning.” Rather, they are just variations based on slightly differing initial conditions.
Quite possible, and yep, extrapolation from such a tiny dataset is dubious. As I said, it was speculation only. But I see nothing technically tricky here .. it's basically the car having a small cache of annotations for short-term use. No idea if it's actually there or not (and it's not really "learning" in the sense of modifying the NNs).
 
Quite possible, and yep, extrapolation from such a tiny dataset is dubious. As I said, it was speculation only. But I see nothing technically tricky here .. it's basically the car having a small cache of annotations for short-term use. No idea if it's actually there or not (and it's not really "learning" in the sense of modifying the NNs).
Having some metadata stored locally to help with planning should be fairly easy technically.
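To show how little machinery such local metadata storage would need: below is a sketch of a tiny annotation cache keyed to a coarse location grid, with the oldest entries evicted. The grid resolution, capacity, and note format are all invented for illustration; nothing here is confirmed Tesla behavior.

```python
import collections

# Hypothetical on-car "annotation cache": short-lived notes keyed by a coarse
# lat/lon grid cell, evicted LRU-style beyond a small capacity. All constants
# are illustrative only.

GRID = 1e-4  # ~11 m of latitude per cell; an arbitrary illustrative resolution

def cell(lat, lon):
    """Quantize a position to a grid cell so nearby points share a key."""
    return (round(lat / GRID), round(lon / GRID))

class AnnotationCache:
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.notes = collections.OrderedDict()  # cell -> annotation string

    def remember(self, lat, lon, note):
        """Store a note for this location, evicting the oldest if full."""
        key = cell(lat, lon)
        self.notes.pop(key, None)          # re-inserting refreshes recency
        self.notes[key] = note
        if len(self.notes) > self.capacity:
            self.notes.popitem(last=False)  # drop the least recent entry

    def recall(self, lat, lon):
        """Return the note for this location's cell, or None."""
        return self.notes.get(cell(lat, lon))
```

A bounded cache like this would also match the observation upthread that any such memory seems limited in distance: once you drive far enough, older cells simply fall out.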
 
Source? I have not noticed this. Where the car fails, it fails every time. I don't believe there is any memory.
Well, I observed this myself, but the issue needs to be clearly maps-related.
For example, there were a number of videos from the original FSD Beta bunch (like one with a new traffic circle that did not work the first time but worked after that).
Similarly, I personally observed a situation where, going into a turn that looks like an intersection but is not (a short distance to a dead end), on the first pass the car tries to go straight, runs into the dead end, and AP aborts. I turn around to repeat it on video, and the car boldly turns without any attempt to go straight.

This does not mean any other situation would be remembered, and the buffer for this is also rather limited in distance.

Edit: so you better understand the conditions, it's here: Google Maps
 
Well, I observed this myself, but the issue needs to be clearly maps-related.
For example, there were a number of videos from the original FSD Beta bunch (like one with a new traffic circle that did not work the first time but worked after that).
Similarly, I personally observed a situation where, going into a turn that looks like an intersection but is not (a short distance to a dead end), on the first pass the car tries to go straight, runs into the dead end, and AP aborts. I turn around to repeat it on video, and the car boldly turns without any attempt to go straight.

This does not mean any other situation would be remembered, and the buffer for this is also rather limited in distance.

Edit: so you better understand the conditions, it's here: Google Maps
My experience is that FSD consistently fails at the same location every single time. So consistently that I have quit using 10.2, because there are several turn lanes close to my house that FSD will veer into and require me to take over. Like I said, if it does learn, it certainly doesn't learn well. I'll also say that if it 'forgets' after leaving the area, then it's pretty worthless.
 
My experience is that FSD consistently fails at the same location every single time. So consistently that I have quit using 10.2, because there are several turn lanes close to my house that FSD will veer into and require me to take over. Like I said, if it does learn, it certainly doesn't learn well. I'll also say that if it 'forgets' after leaving the area, then it's pretty worthless.
I understand what you are saying. What's the location, anyway, and what kind of failure mode?
Not everything can be "papered over" by using this local memory of the terrain.
 
My experience is that FSD consistently fails at the same location every single time. So consistently that I have quit using 10.2, because there are several turn lanes close to my house that FSD will veer into and require me to take over. Like I said, if it does learn, it certainly doesn't learn well. I'll also say that if it 'forgets' after leaving the area, then it's pretty worthless.
It is not so much that it fails the same way. It is that every time is the FIRST time and it must figure it out anew. Sometimes it figures slightly differently than the last time and may even "seem" to remember. That is just anthropomorphizing, since every time FSD pulls out onto the street in front of your house it is seeing it for the first time.
 
I understand what you are saying. What's the location, anyway, and what kind of failure mode?
Not everything can be "papered over" by using this local memory of the terrain.
Here’s one spot - when heading northeast on 101, the car will very reliably enter the right turn lane, despite needing to go straight (and the map indicating that it’s going straight). At the end of the turn lane it will sometimes veer back into the driving lane, or sometimes continue straight on the shoulder until the shoulder disappears for the next turn lane; then it gets confused and starts squawking.

The issues with turn lanes are pretty well known with 10.2 and are a new bug, but regardless, it hasn’t learned how to go straight.

(attached: map screenshot of the spot)
 
It is not so much that it fails the same way. It is that every time is the FIRST time and it must figure it out anew. Sometimes it figures slightly differently than the last time and may even "seem" to remember. That is just anthropomorphizing, since every time FSD pulls out onto the street in front of your house it is seeing it for the first time.
Huh? If it’s finding a new way to fail every single time and never manages to get it right, then functionally it’s not learning how to navigate the street. From a driver’s perspective I can say there’s nothing different. It fails in the same way every time.
 
.....
..... At the end of the turn lane it will sometimes veer back into the driving lane or sometimes continue straight on the shoulder until the shoulder disappears for the next turn lane, then it gets confused and starts squawking.
.....It fails in the same way every time.
Contradiction: does it fail the same way every time, or does it sometimes fail differently, like it is seeing it for the first time and then deciding what to do?
 
Here’s one spot - when heading northeast on 101, the car will very reliably enter the right turn lane, despite needing to go straight (and the map indicating that it’s going straight). At the end of the turn lane it will sometimes veer back into the driving lane, or sometimes continue straight on the shoulder until the shoulder disappears for the next turn lane; then it gets confused and starts squawking.

The issues with turn lanes are pretty well known with 10.2 and are a new bug, but regardless, it hasn’t learned how to go straight.
I have a very similar situation near me - FSD always puts me in the right turn lane, despite needing to go straight. The only difference compared to your scenario is that FSD has ALWAYS failed at this particular spot for me, ever since 10.2 (not 10.10.2, literal 10.2). Just from what I gather from reading about others' experiences on this forum, a lot of how FSD performs seems to be based on map data. I'm looking forward to FSD using more of what it sees vs. what the map says in the near future, because apparently the maps in my area are utter garbage :)
 
Pure speculation here, but I'd be shocked if the cars didn't do some kind of local "learning"; Tesla would be shooting themselves in the foot if they didn't. That said, what form that takes might make it hard to discern. IMO it would be far too risky to allow the car to just learn new things and try them out, so assuming they do this, they likely have a strict set of conditions under which the car is allowed to try some limited remapping/testing to see what the results would be.

I would not expect these changes to stick, though; my guess is they would be tested in that moment and then the results would be transmitted back to home base for analysis and possible incorporation. It's not impossible, under this supposition, that there could be SOME things that do stick, further confusing the issue, but that would explain why it "learns" in some situations and not others, and why those "learned" behaviors are somewhat unreliable. Again, just conjecture from observation :)
 
Contradiction: does it fail the same way every time, or does it sometimes fail differently, like it is seeing it for the first time and then deciding what to do?
Not really - the failure is entering the turn lane in the first place, which it does every time, without fail.

As far as how it handles things at the end of the turn lane, I should clarify that I’m not sure if it does the same thing every time it gets to the end of this particular turn lane or not. FSD 10.2 makes a similar mistake at multiple turn lanes that I encounter. I’ve learned which ones cause problems (unlike the car!) and disengage FSD prophylactically to avoid the issue, so it’s been several weeks since I’ve let the car navigate the spot I showed above, and at this point I can’t recall what it does at the end (or whether it’s consistent or not).

I have noticed that the presence of a car in front of you will change behavior. FSD tends to follow the car in front, so it will sometimes veer further over towards a turn lane if that car is turning. Once it realizes there are actually two lanes and it has to choose, it will often choose the turn lane rather than the straight lane if it’s far enough over.
 
Pure speculation here, but I'd be shocked if the cars didn't do some kind of local "learning"; Tesla would be shooting themselves in the foot if they didn't. That said, what form that takes might make it hard to discern. IMO it would be far too risky to allow the car to just learn new things and try them out, so assuming they do this, they likely have a strict set of conditions under which the car is allowed to try some limited remapping/testing to see what the results would be.

I would not expect these changes to stick, though; my guess is they would be tested in that moment and then the results would be transmitted back to home base for analysis and possible incorporation. It's not impossible, under this supposition, that there could be SOME things that do stick, further confusing the issue, but that would explain why it "learns" in some situations and not others, and why those "learned" behaviors are somewhat unreliable. Again, just conjecture from observation :)
Except that nothing like this has ever been mentioned at AI Day or in any other blogs/posts by Karpathy. And it is inconsistent with the deep learning model in use: training is done in massive data centers, and once complete, the static network is deployed to the fleet. There is no evidence to suggest any other architecture.
 
Except that nothing like this has ever been mentioned at AI Day or in any other blogs/posts by Karpathy. And it is inconsistent with the deep learning model in use: training is done in massive data centers, and once complete, the static network is deployed to the fleet. There is no evidence to suggest any other architecture.
Yes, it’s very doubtful that the neural nets are actually learning anything in the car (as in the neural net weightings being changed). However, it would seem sensible for the car to locally store additional hints and enhanced map-related detail data that it “learns” while repeatedly driving streets and intersections. I have no idea if it actually does this today.
 
Yes, it’s very doubtful that the neural nets are actually learning anything in the car (as in the neural net weightings being changed). However, it would seem sensible for the car to locally store additional hints and enhanced map-related detail data that it “learns” while repeatedly driving streets and intersections. I have no idea if it actually does this today.
Exactly, and that's all that was being discussed. The NNs don't come into the equation; all you are doing is annotating the map to say something like "this lane is closed" so that the car won't try to get into the lane. As has been noted, it's unlikely that the car does this at present (though it's not hard to do, and has nothing to do with the NNs), though @verygreen has said that he thinks the car can do this. Probably, long-term, it will do something like this, possibly at a fleet level a la Waze, but that is way down the road from now :)
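For what it's worth, consulting such map annotations at plan time would be trivially cheap. A hypothetical sketch (the "closed_lanes" schema and function name are invented for illustration, not anything Tesla has described):

```python
# Hypothetical sketch: a planner filtering its lane candidates against a
# local annotation store before committing to one. The annotation schema is
# invented purely for illustration.

def filter_lane_candidates(candidates, annotations):
    """Drop lanes the local annotations mark as closed or turn-only."""
    blocked = set(annotations.get("closed_lanes", ()))
    usable = [lane for lane in candidates if lane not in blocked]
    # If every candidate is annotated away, the notes are probably stale:
    # fall back to the unfiltered list rather than returning nothing.
    return usable or list(candidates)
```

For example, with candidates ["straight", "right_turn"] and an annotation marking "right_turn" closed, only "straight" survives, so the car would never attempt the bad lane again while the note persists.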