Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
The next big milestone for FSD is version 11. It is a significant upgrade, with fundamental changes to several parts of the FSD stack, including a totally new way to train the perception NN.

From AI day and Lex Fridman interview we have a good sense of what might be included.

- Object permanence both temporal and spatial
- Moving from “bag of points” to objects in NN
- Creating a 3D vector representation of the environment all in NN
- Planner optimization using NN / Monte Carlo Tree Search (MCTS)
- Change from processed images to “photon count” / raw image
- Change from single image perception to surround video
- Merging of city, highway and parking lot stacks a.k.a. Single Stack

Lex Fridman's interview of Elon, starting with the FSD-related topics.


Here is a detailed explanation of Beta 11 in layman's terms by James Douma, from an interview done after the Lex podcast.


Here is the AI Day explanation, in 4 parts.




Here is a useful blog post asking Tesla a few questions about AI Day. The most useful part is the comparison of Tesla's methods with Waymo's and others' (detailed papers linked).

 
Regarding the acceleration rate of FSDb (on 2022.44.30.5), I've actually found that I needed to set everything to "chill" mode, including the general acceleration profile, to keep the acceleration less drastic. Without doing that, it was going 0–45 mph at probably 70% acceleration (on a MY LR), i.e. fast enough to feel a little awkward. Frankly, before doing that, it felt like riding with an average LA driver despite the traffic being minimal.
 
Without doing that, it was going 0–45 mph at probably 70% acceleration (on a MY LR), i.e. fast enough to feel a little awkward. Frankly, before doing that, it felt like riding with an average LA driver despite the traffic being minimal.
I'd like to see a video. I've never seen a video of FSD accelerating quickly, except very occasionally when turning incorrectly onto a road with oncoming tailing traffic. (Never experienced that myself because I don't let it happen.) Certainly straight-line acceleration is nothing like what you describe here. Ever. (That's specifically what I'd like to see video of because it is not something I have ever seen.)
 
The way I look at it, a LOT of the features of V11 have already been introduced in FSD Beta. Only the "single stack" remains, and while marginally useful, it's not something I personally care much about at all.

- Object permanence both temporal and spatial
- Moving from “bag of points” to objects in NN
- Creating a 3D vector representation of the environment all in NN
- Planner optimization using NN / Monte Carlo Tree Search (MCTS)
- Change from processed images to “photon count” / raw image
- Change from single image perception to surround video

- Merging of city, highway and parking lot stacks a.k.a. Single Stack
I agree with all the new stuff that TADA (can't call it FSD now) is now doing, but that actually makes it rather more scary with regard to replacing AP.
First, AP doesn't do all that and seems mostly competent. Can it do that because it's basically oblivious to everything that TADA can process?
The flip side is that if it now processes all that data, will the timid teenager be put back in charge?
The other slightly concerning (understatement) thing is that on AP, the limited processing seems to "see" further than TADA can.
It seems that TADA can safely do unprotected lefts at 40 mph; while 55 mph is possible, it's much more conservative, and at 65 it's downright scary.
Having that in charge at highway speeds gives me the heebie-jeebies; that's 65–85 here in TX.
Even worse is imagining that off the interstate, which is still up to 75 mph in TX. When it doesn't see that tractor way off ahead that just pulled onto the 70 mph country road, it ends up needing emergency braking to slow down enough.
Don't want them to break AP :(
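For anyone curious how much road that braking scenario actually eats up, here's a quick back-of-the-envelope sketch. The reaction time and deceleration numbers are illustrative assumptions, not Tesla specs:

```python
# Rough stopping-distance estimate: reaction-time travel plus constant
# deceleration. 1.0 s reaction and 6.0 m/s^2 decel are assumed values
# (roughly firm dry-road braking), not measured Tesla behavior.
def stopping_distance_m(speed_mph: float,
                        reaction_s: float = 1.0,
                        decel_mps2: float = 6.0) -> float:
    v = speed_mph * 0.44704                      # mph -> m/s
    return v * reaction_s + v * v / (2 * decel_mps2)

for mph in (40, 55, 65, 75):
    print(f"{mph} mph -> ~{stopping_distance_m(mph):.0f} m to stop")
```

Under those assumptions, stopping from 75 mph takes well over 100 m, which is why sight distance at TX highway speeds matters so much.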
 
the third vehicle getting 10.69.3.3 early does not have this (v11.x) update pending. So potentially even for "employees," this FSD Beta 11.x version isn't available to that whole group yet
Interestingly, with the current 20% rollout of FSD Beta 10.69.25.1 / 2022.44.30.5, this third "employee" vehicle that recently got both 10.69.3.3 and 10.69.25 ahead of pre-Safety-Score / early access group is part of a regular wave.

[Attachment: 10.69.25.1 employee group.png]


Maybe it got kicked out of the "employee" deployment group? Although it could be part of Tesla simplifying their deployment channels while tightening up leaks of FSD Beta 11. Tesla probably wants to end their complex release management of 1) internal 2) employees 3) early access 4) safety score 5) production groups, so I wonder if this is some evidence of a simpler 1) internal 2) strict NDA 3) production groups.

Maybe this is preparing for wide release of FSD Beta vehicle software to everyone in production? Or maybe FSD Beta 11 single stack networks in shadow mode?
 
I wasn't saying that FSD would never blow through a stop sign, just that we don't have any way of collecting statistics on how often things like this happen.
I agree with that sentiment. I saw that ridiculous article on Electrek about "real" self-driving implementations releasing disengagement figures while Tesla's "not real" self-driving doesn't.
But the vast majority of my AP disengagements have nothing to do with safety and are mostly navigation-based.
The vast majority of my not-FSD disengagements are absolutely because it did something stupid, and the rest because it won't drive where I want to go.
 
I have a divided highway near me that, for some reason, FSD stays engaged on. It's a windy-ish road with typical speeds around 60 mph, mostly two lanes. FSD Beta does quite well on it, actually.
I was just thinking about how poorly FSD does on the high speed state highways around me. It always sees traffic lights way too late and has to brake pretty hard. Braking is pretty hard in general, and it’s so much worse at 65 mph.

As much as I shill for FSD, I’ve been starting to wonder if Vision can see far enough to drive comfortably at high speeds.
 
I was just thinking about how poorly FSD does on the high speed state highways around me. It always sees traffic lights way too late and has to brake pretty hard. Braking is pretty hard in general, and it’s so much worse at 65 mph.

As much as I shill for FSD, I’ve been starting to wonder if Vision can see far enough to drive comfortably at high speeds.
No, it can’t (see far enough)…
 
I’ve been starting to wonder if Vision can see far enough to drive comfortably at high speeds
Taking the latest Dirty Tesla video with 10.69.25, within the first several seconds, it seems to visualize a vehicle turning to the next street about 650 feet / 200 meters away:
[Attachment: 10.69.25 200 meters.jpg]


At 80 mph, that distance is covered in about 5.6 seconds. The Autopilot page claims a 250 m max distance for the narrow forward camera, and neural networks could be trained to accurately predict objects such as a lead vehicle pulling away toward the vanishing point even when it only occupies a few pixels. Then again, the traffic light or vanishing point of interest isn't always directly ahead of the narrow camera, such as on curves and crests.
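The timing figures above are just unit conversion, so they're easy to double-check (nothing Tesla-specific here, only mph-to-m/s math):

```python
# Time to cover a given distance at a given speed; plain unit conversion.
MPH_TO_MPS = 0.44704

def time_to_cover_s(distance_m: float, speed_mph: float) -> float:
    return distance_m / (speed_mph * MPH_TO_MPS)

print(f"{time_to_cover_s(200, 80):.1f} s")  # 200 m visualization distance at 80 mph
print(f"{time_to_cover_s(250, 80):.1f} s")  # claimed 250 m camera range at 80 mph
```

So even at the claimed 250 m camera range, an 80 mph car has about 7 seconds of warning, before accounting for perception latency.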

Have people measured how far past versions of FSD Beta have visualized objects? I wonder if it has improved with 10.x in preparation for FSD Beta 11 highway driving.
 
The truth is, even for humans, driving fast on local highways with traffic lights is an issue. I regularly drive on such a road (WA-202) and FSD does a good job.
Kinda. FSD will likely do better than the worst human drivers in many cases. But attentive drivers who have been through a route before will know to check for the traffic light well in advance. We’re able to see what the color is or at the very least that traffic is building up far ahead (and can begin to let off on the accelerator instead of continuing at full speed).

I think FSD will eventually be safer than the average human driver who is driving on a route for the first time.

What I’d like to see is FSD be a safer driver than virtually any driver driving a route they’re familiar with who just drove their brand new expensive car off the SC lot and doesn’t have gap insurance.
 
Taking the latest Dirty Tesla video with 10.69.25, within the first several seconds, it seems to visualize a vehicle turning to the next street about 650 feet / 200 meters away:


At 80 mph, that distance is covered in about 5.6 seconds. The Autopilot page claims a 250 m max distance for the narrow forward camera, and neural networks could be trained to accurately predict objects such as a lead vehicle pulling away toward the vanishing point even when it only occupies a few pixels. Then again, the traffic light or vanishing point of interest isn't always directly ahead of the narrow camera, such as on curves and crests.

Have people measured how far past versions of FSD Beta have visualized objects? I wonder if it has improved with 10.x in preparation for FSD Beta 11 highway driving.
I think a lot has to do with the size of an object vs. the low resolution of the camera. While a car may cover a lot of pixels on the camera, things like red lights, VRUs, or even objects in the road may not fill enough pixels at a distance to be properly identified at higher speeds.

Could be they are compensating by trying to "pre-"identify with only a few pixels and getting tons of false positives / phantom braking in V11. Hope I'm incorrect, but with delay after delay after delay, you have to believe there must be some fundamental problems.
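To put rough numbers on the "few pixels" point: here's a simple pinhole-camera estimate of how many horizontal pixels an object subtends at distance. The 35° FOV and 1280-px width for the narrow forward camera are assumptions for illustration, not confirmed specs:

```python
import math

# Approximate horizontal pixels an object subtends, assuming a pinhole
# camera with uniform angular resolution across the field of view.
# fov_deg=35 and h_pixels=1280 are assumed, not confirmed camera specs.
def pixels_on_target(obj_width_m: float, distance_m: float,
                     fov_deg: float = 35.0, h_pixels: int = 1280) -> float:
    angle_deg = math.degrees(2 * math.atan(obj_width_m / (2 * distance_m)))
    return angle_deg / fov_deg * h_pixels

# A 1.8 m-wide car vs a 0.3 m traffic-light head, both at 200 m:
print(f"car:   ~{pixels_on_target(1.8, 200):.0f} px wide")
print(f"light: ~{pixels_on_target(0.3, 200):.1f} px wide")
```

Under those assumptions a car at 200 m is still around 19 pixels wide, but a traffic-light head is only about 3, which lines up with the point about small objects being hard to identify at speed.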
 
I wonder why there is no more autonomous car racing, such as the Audi TTS doing the Pikes Peak climb 12 years ago?

To give some perspective, the Model S prototype vehicle was unveiled in March 2009 and the Model S debuted on June 22, 2012.


 
Kinda. FSD will likely do better than the worst human drivers in many cases. But attentive drivers who have been through a route before will know to check for the traffic light well in advance. We’re able to see what the color is or at the very least that traffic is building up far ahead (and can begin to let off on the accelerator instead of continuing at full speed).

I think FSD will eventually be safer than the average human driver who is driving on a route for the first time.

What I’d like to see is FSD be a safer driver than virtually any driver driving a route they’re familiar with who just drove their brand new expensive car off the SC lot and doesn’t have gap insurance.
I would like to see FSD at least as good as an average human WITHOUT any human input: no yoke/wheel jerking, no cabin camera, or any other nonsense.
 
So you are waiting for Robotaxi?
I guess you can put whatever name you want to it. I don’t need it to go out and make money while I sleep by taxiing people around. I want it to do full.. self.. driving. As in me not driving, and it fully self driving, while I chill in the driver seat doing other things, and take over ONLY in exigent circumstances.