The next big milestone for FSD is v11. It is a significant upgrade, with fundamental changes to several parts of the FSD stack, including a totally new way to train the perception NN.

From AI Day and the Lex Fridman interview we have a good sense of what might be included.

- Object permanence both temporal and spatial
- Moving from “bag of points” to objects in NN
- Creating a 3D vector representation of the environment all in NN
- Planner optimization using NN / Monte Carlo Tree Search (MCTS) (a toy sketch of the planner idea follows this list)
- Change from processed images to “photon count” / raw image
- Change from single image perception to surround video
- Merging of city, highway and parking lot stacks a.k.a. Single Stack
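
Of these, the planner change is the most concrete algorithmically. As a rough illustration of what "planner optimization using NN / MCTS" could mean, here is a toy sketch of a tree search over discrete maneuvers guided by a stand-in value function. The maneuvers, the transition model, and the scoring are all invented for the example; this is not Tesla's implementation.

```python
import math
import random

# Toy NN-guided Monte Carlo Tree Search over discrete maneuvers.
# Everything here (maneuvers, transition model, value function) is invented.
MANEUVERS = ["keep_lane", "lane_change_left", "lane_change_right", "slow_down"]

class Node:
    def __init__(self, state, parent=None):
        self.state = state        # toy state: (lane index, speed in mph)
        self.parent = parent
        self.children = {}        # maneuver -> Node
        self.visits = 0
        self.value = 0.0

def value_net(state):
    """Stand-in for a learned value network: prefer cruising near 65 mph."""
    _lane, speed = state
    return -abs(speed - 65) / 65.0

def step(state, maneuver):
    """Toy transition model."""
    lane, speed = state
    if maneuver == "lane_change_left":
        lane -= 1
    elif maneuver == "lane_change_right":
        lane += 1
    elif maneuver == "slow_down":
        speed -= 5
    return (lane, speed)

def ucb(child, parent_visits, c=1.4):
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def plan(root_state, iterations=200, depth=3):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # Selection / expansion down to a fixed horizon.
        for _ in range(depth):
            untried = [m for m in MANEUVERS if m not in node.children]
            if untried:
                m = random.choice(untried)
                node.children[m] = Node(step(node.state, m), parent=node)
                node = node.children[m]
                break
            node = max(node.children.values(), key=lambda ch: ucb(ch, node.visits))
        # Evaluate the leaf with the stand-in value net, then backpropagate.
        v = value_net(node.state)
        while node is not None:
            node.visits += 1
            node.value += v
            node = node.parent
    # Commit to the most-visited first maneuver.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print(plan((1, 72)))  # typically "slow_down", nudging toward the preferred speed
```

The real system presumably searches over continuous trajectories and scores them with learned networks plus hand-written costs, but the select / expand / evaluate / backpropagate loop above is the core of MCTS.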

Lex Fridman's interview of Elon, starting with the FSD-related topics.


Here is a detailed explanation of Beta 11 in layman's terms by James Douma, from an interview done after the Lex podcast.


Here is the AI Day explanation in 4 parts.




Here is a useful blog post posing a few questions to Tesla about AI Day. The useful part is its comparison of Tesla's methods with Waymo's and others' (detailed papers linked).

 
A single set of NNs that does everything, whatever that means. Meaning they don't do a switchover of NNs like they do currently (presumably they run both, or maybe the switchover is really fast?). It's all a bit meaningless and arbitrary. Seems like there is no reason not to optimize for different scenarios (other than capacity issues or switchover-time issues of some form or another).

Obviously implementation details are important from a design and engineering standpoint, but in the end we don't give a 💩 what is going on. None of it matters; it just has to work extremely well.

I don’t care about USS and parking lots. Those can be excluded as requirements and I still think it won’t be single stack.
I'm assuming the difference between the two is that AP is much more basic than FSDb and is making simplistic, programmatic decisions, whereas FSDb is making much more use of neural networks. Elon did mention in one of his tweets that they were replacing C++ with NN.
That is what makes me concerned, the highway is no place for the indecision that FSDb often demonstrates.
Trying to use FSDb in commute traffic in town just means constantly missing your exit or driving manually.
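
For what it's worth, "replacing C++ with NN" presumably means swapping hand-written heuristics for learned models at particular decision points. A hypothetical, much-simplified contrast; the yield rule, the feature names, and the tiny logistic "network" are all made up for illustration:

```python
import numpy as np

# Hypothetical contrast between a hand-coded rule and a learned model for a
# single decision (whether to yield to a merging car). The feature names and
# the tiny logistic "network" are invented for illustration.

def should_yield_rule_based(gap_m: float, closing_speed_mps: float) -> bool:
    # Hand-tuned C++-style heuristic: yield if contact would occur in under 3 s.
    if closing_speed_mps <= 0:
        return False
    return gap_m / closing_speed_mps < 3.0

class TinyYieldNet:
    """Stand-in for a learned policy head: one logistic layer."""
    def __init__(self):
        self.w = np.array([-0.08, 0.35])  # weights would come from training
        self.b = 0.1

    def should_yield(self, gap_m: float, closing_speed_mps: float) -> bool:
        x = np.array([gap_m, closing_speed_mps])
        p = 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))
        return bool(p > 0.5)

print(should_yield_rule_based(40.0, 15.0))      # True: ~2.7 s to contact
print(TinyYieldNet().should_yield(40.0, 15.0))  # whatever the (made-up) weights say
```

The appeal of the learned version is that it can be retrained from fleet data instead of hand-tuning thresholds; the trade-off is that its failures are harder to reason about.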
 
Elon did mention in one of his tweets that they were replacing C++ with NN.
That is what makes me concerned, the highway is no place for the indecision that FSDb often demonstrates.
FSDb does OK at freeway-like speeds of 65 mph on exurban/rural routes for me.

In any case, they seem to be making this switchover very cautiously, which is good.
 
I'm assuming the difference between the two is that AP is much more basic than FSDb and is making simplistic, programmatic decisions, whereas FSDb is making much more use of neural networks. Elon did mention in one of his tweets that they were replacing C++ with NN.
That is what makes me concerned, the highway is no place for the indecision that FSDb often demonstrates.
Trying to use FSDb in commute traffic in town just means constantly missing your exit or driving manually.
For me, the difference is I relax on the highway because in general, AP is much more reliable and consistent. IME, AP truly could be made level 3, unlike FSDb.
 
For me, the difference is I relax on the highway because in general, AP is much more reliable and consistent. IME, AP truly could be made level 3, unlike FSDb.

The funny thing is that AP is more reliable at the simple stuff, but much less reliable at anything outside of simple. Unusual lane geometries, small overlaps in a lane with other vehicles, non-vehicle obstacles, etc. I'm reminded of how rudimentary AP actually is compared to FSDb every time it goes over a bump. On AP, a bump will cause the entire visualization to bounce up and down with your car, showing that it doesn't really have an understanding of the world around it; it's just a neural network finding lane lines in a still image. Meanwhile, FSDb knows that the whole world doesn't bob up and down when the cameras do.
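
That bump behavior is essentially a question of ego-motion compensation: if perception outputs live in a persistent world frame, a pitch change of the camera shouldn't move the world. A minimal sketch of the idea, assuming the pitch estimate comes from an IMU and using an invented camera-frame convention:

```python
import numpy as np

def camera_to_world(points_cam: np.ndarray, cam_pitch_rad: float) -> np.ndarray:
    """Rotate camera-frame points by the inverse of the camera's pitch so a bump
    that tilts the camera does not tilt the reconstructed world.

    points_cam: (N, 3) points as (x forward, y left, z up) in the camera frame.
    cam_pitch_rad: instantaneous camera pitch, e.g. estimated from the IMU.
    """
    c, s = np.cos(-cam_pitch_rad), np.sin(-cam_pitch_rad)
    # Rotation about the y (left) axis undoes the pitch.
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    return points_cam @ R.T

# A lane-line point ~20 m ahead on the road surface, seen while the nose pitches
# up by 2 degrees over a bump:
lane_point_cam = np.array([[20.0, 1.8, -1.4]])
print(camera_to_world(lane_point_cam, np.deg2rad(2.0)))
# Without a correction like this, the point (and the whole visualization) appears
# to bob up and down with the car.
```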
 
I honestly wouldn't mind if Tesla released V11 to a limited group, and set the max speed on freeways to 65MPH. Then validated it for a few weeks, and slowly increased the speed with updates until they find the max speed it can safely handle. Hopefully that number is the same as the current max of 85MPH. Then release it to the fleet.
Why would this even be necessary? They know what it can handle even before releasing it to anyone!

Everything else is unsimulated cases.
 
I honestly wouldn't mind if Tesla released V11 to a limited group, and set the max speed on freeways to 65MPH. Then validated it for a few weeks, and slowly increased the speed with updates until they find the max speed it can safely handle. Hopefully that number is the same as the current max of 85MPH. Then release it to the fleet.
I was on a trip last week, and on a divided four-lane (two lanes in each direction) state route in rural California it switched to FSDb and had no issue doing 72 mph (65 speed limit + 7) on that road.
 
Freeways are a different world, where it pays to keep an eye on more distant traffic to avoid last-second hard braking and bunching up, and I'm not sure FSDb has ever been able to do that. Adept human drivers can anticipate; FSDb only responds after something happens.

And as usual, V11 will probably behave well when there's little to no traffic, lane changes, braking, merging, etc., but otherwise I predict more of the same old FSDb city-streets-like behaviors with added V11 color.
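
The anticipate-versus-react difference can be made concrete with a back-of-the-envelope calculation: the deceleration needed to match a slowdown ahead grows rapidly as the remaining gap shrinks, so acting on distant traffic keeps braking gentle. All numbers below are illustrative:

```python
def required_decel(gap_m: float, ego_mps: float, lead_mps: float) -> float:
    """Constant deceleration needed to match the slower traffic's speed within the gap."""
    closing = ego_mps - lead_mps
    if closing <= 0:
        return 0.0
    return closing ** 2 / (2 * gap_m)

# Traffic ahead has slowed to 10 m/s while we travel at 30 m/s (~67 mph).
print(required_decel(200.0, 30.0, 10.0))  # 1.0 m/s^2: gentle, if we act while it's far away
print(required_decel(40.0, 30.0, 10.0))   # 5.0 m/s^2: hard braking, if we react late
```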
 
Freeways are a different world, where it pays to keep an eye on more distant traffic to avoid last-second hard braking and bunching up, and I'm not sure FSDb has ever been able to do that. Adept human drivers can anticipate; FSDb only responds after something happens.

And as usual, V11 will probably behave well when there's little to no traffic, lane changes, braking, merging, etc., but otherwise I predict more of the same old FSDb city-streets-like behaviors with added V11 color.
I regularly use FSDb at 65 mph on 2-lane roads with other traffic, traffic lights, stop signs, etc. It handles these situations fine. Certainly no worse than NOA encountering a traffic jam.
 
Freeways are a different world, where it pays to keep an eye on more distant traffic to avoid last-second hard braking and bunching up, and I'm not sure FSDb has ever been able to do that
FSD Beta is much more capable than the legacy Autopilot stack at tracking multiple objects and making predictions about their upcoming behaviors. However, this also comes with it wanting to react to things that might not actually be issues, because now there's more potential for false-positive predictions. For example, a nominal case of just cruising along with nothing unusual is probably fine for FSD Beta already:

[Image: FSD at 77 mph on US-101]


But if, say, the occupancy network incorrectly predicts that something is blocking the path ahead, like a cone that's actually in the median, it might decide to suddenly slow down. Similarly, if something is incorrectly predicted to be a pedestrian while traveling at highway speeds, the usual behavior of giving space to the vulnerable road user would probably result in a lot of jerking around.
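
One common way to keep that kind of false positive from turning into sudden braking is to require a prediction to persist across several frames before the planner reacts, trading a little latency for stability; whether Tesla does anything like this is speculation. A minimal sketch (window size and threshold are arbitrary):

```python
from collections import deque

class PersistenceFilter:
    """Report an obstacle as confirmed only after it appears in at least
    `min_hits` of the last `window` frames (thresholds are arbitrary here)."""
    def __init__(self, window: int = 9, min_hits: int = 6):
        self.history = deque(maxlen=window)
        self.min_hits = min_hits

    def update(self, detected_this_frame: bool) -> bool:
        self.history.append(detected_this_frame)
        return sum(self.history) >= self.min_hits

# A phantom cone flickers in and out of the occupancy prediction:
f = PersistenceFilter()
print([f.update(d) for d in [True, False, True, False, False, True, False, False, True]])
# -> all False: never confirmed, so no sudden slowdown.

# A real obstacle is detected every frame and is confirmed after six frames:
g = PersistenceFilter()
print([g.update(True) for _ in range(9)])
# -> False for the first five frames, then True.
```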
 
Why would this be different with FSDb compared to AP/NOA?
There weren't many expectations for the more minimally featured AP and its driver-approved lane changes. Heck, most of us consider AP to be a good compromise, which is a far cry from FSDb city streets. Less is more in this case.

Freeways require smooth, more refined, and predictable behaviors. I don't think the team will get away with the current crutches used on city streets like excessive slowing, crawling, ignoring line markings, robotic turns, excess time to determine path, jerky steering, insufficient regen use, and bot-like braking/acceleration.

I think in some ways the vision might work better on the freeway, given most of the traffic moves along the car's direction of travel rather than perpendicular to it (crossing traffic), which is where the team still seems to struggle with inaccurate object tracking (range, velocity, path).

There could be magic in the V11 changes, but I suspect almost all of it happens after HW4 integration and optimization.
 
FSD Beta is much more capable than the legacy Autopilot stack at tracking multiple objects and making predictions about their upcoming behaviors. However, this also comes with it wanting to react to things that might not actually be issues, because now there's more potential for false-positive predictions. For example, a nominal case of just cruising along with nothing unusual is probably fine for FSD Beta already:

[Image: FSD at 77 mph on US-101]


But if, say, the occupancy network incorrectly predicts that something is blocking the path ahead, like a cone that's actually in the median, it might decide to suddenly slow down. Similarly, if something is incorrectly predicted to be a pedestrian while traveling at highway speeds, the usual behavior of giving space to the vulnerable road user would probably result in a lot of jerking around.
Yes, gawd forbid there's any pedestrian on the side of the road!