The next big milestone for FSD is version 11. It is a significant upgrade, with fundamental changes to several parts of the FSD stack, including a totally new way to train the perception NN.

From AI Day and the Lex Fridman interview we have a good sense of what might be included.

- Object permanence both temporal and spatial
- Moving from “bag of points” to objects in NN
- Creating a 3D vector representation of the environment all in NN
- Planner optimization using NN / Monte Carlo Tree Search (MCTS) (see the sketch after this list)
- Change from processed images to “photon count” / raw image
- Change from single image perception to surround video
- Merging of city, highway and parking lot stacks a.k.a. Single Stack
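Since the planner item above names a concrete algorithm, here is a minimal, generic MCTS sketch in Python. It illustrates the technique only, not Tesla's planner; the state, actions, step, and reward callables are hypothetical placeholders.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}   # action -> Node
        self.visits = 0
        self.total = 0.0     # sum of rollout returns seen through this node

def ucb1(parent, child, c=1.4):
    # Exploitation (average return) plus an exploration bonus.
    return child.total / child.visits + c * math.sqrt(
        math.log(parent.visits) / child.visits)

def mcts(root_state, actions, step, reward, iters=1000, horizon=10):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while node.children and len(node.children) == len(actions):
            node = max(node.children.values(), key=lambda ch: ucb1(node, ch))
        # 2. Expansion: add one child for an untried action.
        untried = [a for a in actions if a not in node.children]
        if untried:
            a = random.choice(untried)
            node.children[a] = Node(step(node.state, a), parent=node)
            node = node.children[a]
        # 3. Simulation: random rollout to a fixed horizon.
        state, ret = node.state, 0.0
        for _ in range(horizon):
            state = step(state, random.choice(actions))
            ret += reward(state)
        # 4. Backpropagation: push the return up to the root.
        while node is not None:
            node.visits += 1
            node.total += ret
            node = node.parent
    # Pick the most-visited action at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

In a driving planner, the actions would be candidate maneuvers or trajectory segments, and, per the AI Day discussion, a NN would bias which branches get explored instead of the uniform random choices used here.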

Lex Fridman's interview of Elon, starting with the FSD-related topics.


Here is a detailed explanation of Beta 11 in "layman's language" by James Douma, from an interview done after the Lex podcast.


Here is the AI Day explanation in 4 parts.




Here is a useful blog post asking Tesla a few questions about AI Day. The useful part is the comparison of Tesla's methods with Waymo's and others' (detailed papers linked).

 
But it's not just processing power. There are inadequate camera positions, an insufficient sensor suite (per the industry), insufficient quality of training data, and an abundance of edge cases. The way things are going, one could assume there's an iceberg of data below still unaccounted for. And last but not least, there are NNs that don't easily retain additional training data.

Said another way, all the CPU power in the world won't solve current FSD SNAFUs. There needs to be a balanced approach. Anything else is just marketing BS to possibly placate a smaller and smaller gullible customer base.
Sounds daunting. Watching Amazing Race and seeing some traffic from other third-world countries, if the goal is L5, it will be a very long time before it's solved. If there is control over the ODD, as Waymo has, it's a much easier problem to solve. They'll need evolutions in technology to miniaturize the sensors, or it may not be aesthetically pleasing to the average buyer. I, personally, don't want my car looking like a Waymo. In the meantime, a well-performing L2 solution would satisfy most people with a much more aesthetically pleasing car.
 
There are inadequate camera positions, an insufficient sensor suite (per the industry), insufficient quality of training data, and an abundance of edge cases.

Rattling off a list of arguments without elaborating or explaining is a logical fallacy called a "Gish Gallop."

I can understand the argument for inadequate camera positions and insufficient sensors, although I disagree. But please, explain to us how Tesla has an insufficient quality of training data or an abundance of edge cases as compared to any other company working on an AV.
 
Here is one for the gang. A member of our family who will remain nameless managed to back our FSD-equipped HW3 car into another car over the weekend. I am sure the alarm actually went off, but notwithstanding that .........

Now, some months ago, maybe longer, I noted that the display updated to an overhead view during reverse, so although it was not me, I am sure the car "knew" where the other car was.

Plus, for some time now, Autopark, which essentially does FSD in reverse, has been great.

How long until they tweak it so that you can't run into something in reverse? I would settle for, "OK, reverse under 20 mph or something."
 
No matter how many bush pics you post
This is not that sort of site, sorry!
Off topic for this thread, but I found a workaround for when my car refuses to change lanes when I use a turn signal. If I hit the accelerator, she makes the lane change. Seriously. Turn indicator, no response; hit accelerator, lane change. It is repeatable.
This is repeatable for unrequested automatic lane changes too. I've found it is helpful when there is a close vehicle in the target lane. The car does not like it when a vehicle is closing on you (or is just too close). Accelerating changes that balance, and the car decides it can go because the target lane is now "open" with no closing traffic. Even if the result is that you end up too close to lead traffic and positioned totally non-optimally (between two cars) compared to manual driving. At least that is my impression.

Maybe it also applies when no vehicle is around, but I have not observed that.
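If that impression is right, the underlying check might look something like this toy gap-acceptance test (pure speculation about the logic, not Tesla's code): the lane change is allowed only when no target-lane vehicle is closing on ego faster than some time-gap threshold, so accelerating can flip the decision.

```python
# Hypothetical gap-acceptance check consistent with the behavior described
# above -- speculation about the logic, not Tesla's code.
def lane_change_ok(ego_speed_mps: float, rear_speed_mps: float,
                   gap_m: float, min_time_gap_s: float = 2.0) -> bool:
    closing_mps = rear_speed_mps - ego_speed_mps  # >0 means they are gaining
    if closing_mps <= 0:
        return True                # rear vehicle is not closing on ego
    return gap_m / closing_mps >= min_time_gap_s

# Accelerating raises ego speed, shrinking the closing rate, which can flip
# the decision from "blocked" to "open" -- matching the observed workaround.
print(lane_change_ok(25.0, 28.0, gap_m=5.0))  # False: closing too fast
print(lane_change_ok(28.0, 28.0, gap_m=5.0))  # True once speeds match
```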
 
Something to keep in mind when talking about what and how a Tesla uses camera data: it is 100% the RAW camera data, and ALL compression or ANY modification of that RAW data is done by the computer itself. So it makes NO sense that the RAW camera data would be compressed/modified by the computer and then sent back to the computer to be interpreted for driving. All camera data processing/compressing is strictly for HUMAN consumption.
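To make the distinction concrete, here is a toy sketch (an assumption about the general idea, not Tesla's pipeline): the perception path consumes the raw sensor counts directly, while demosaic/tone-map/compression steps exist only to produce an image a human can look at.

```python
# Hypothetical sketch, not Tesla's actual pipeline: the perception NN can
# consume the raw sensor readout directly, while ISP-style processing
# (demosaic, tone-map, compress) exists only for a human-viewable image.
import numpy as np

def read_raw_frame() -> np.ndarray:
    # Stand-in for a 12-bit Bayer "photon count" readout from the sensor.
    return np.random.randint(0, 4096, size=(960, 1280), dtype=np.uint16)

def for_perception(raw: np.ndarray) -> np.ndarray:
    # The NN path: normalize the raw counts and hand them straight to the
    # network -- no demosaic, no tone mapping, no compression.
    return raw.astype(np.float32) / 4095.0

def for_display(raw: np.ndarray) -> np.ndarray:
    # The human path: crude tone map to 8-bit for a viewable image.
    # (A real ISP would also demosaic, white-balance, etc.)
    return (np.sqrt(raw.astype(np.float32) / 4095.0) * 255).astype(np.uint8)

raw = read_raw_frame()
nn_input = for_perception(raw)   # what the driving stack would see
preview = for_display(raw)       # what a human sees on the screen
```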
 
A member of our family who will remain nameless managed to back our FSD-equipped HW3 car into another car over the weekend
FSD Beta 11.4.4 introduced a feature to handle a similar but perhaps not quite your exact situation:
  • Improved Automatic Emergency Braking recall in response to cut-in vehicles and vehicles behind ego while reversing.

Has anybody experienced this AEB functionality? I'm guessing it might be for backing out of a parking spot or maybe 3-point turns for the "cut-in vehicles?" Although "vehicles behind ego" would also seem to suggest general stationary vehicles too?
 
FSD Beta 11.4.4 introduced a feature to handle a similar but perhaps not quite your exact situation:
  • Improved Automatic Emergency Braking recall in response to cut-in vehicles and vehicles behind ego while reversing.

Has anybody experienced this AEB functionality? I'm guessing it might be for backing out of a parking spot or maybe 3-point turns for the "cut-in vehicles?" Although "vehicles behind ego" would also seem to suggest general stationary vehicles too?
I think I am one update short of having that work. It looks like it might be 11.4.7 as well. Ugh.
 
No, because the car cannot "move" its "eyes", so it has to have the same acuity levels in all directions (which does give the car an advantage compared to humans).
It has to do with saving processing power. Rather than processing all pixels for every frame, the center of the FOV is processed with each frame and the periphery is processed only on selected frames. It's a luxury to process every pixel of each frame when the camera is responsible for monitoring a specific direction.
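A toy sketch of that scheduling idea (my interpretation of the scheme, not Tesla's code): run the network on a central crop every frame, and on the full frame, periphery included, only every Nth frame.

```python
# Hypothetical foveated frame scheduling -- an illustration of the idea
# described above, not Tesla's actual code.
import numpy as np

PERIPHERY_EVERY_N = 4  # assumed cadence for processing the periphery

def center_crop(frame: np.ndarray, frac: float = 0.5) -> np.ndarray:
    # Take the central frac-by-frac region of the frame.
    h, w = frame.shape[:2]
    ch, cw = int(h * frac), int(w * frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    return frame[top:top + ch, left:left + cw]

def process(pixels: np.ndarray) -> None:
    pass  # stand-in for running the perception NN on these pixels

def handle_frame(frame: np.ndarray, frame_idx: int) -> None:
    if frame_idx % PERIPHERY_EVERY_N == 0:
        process(frame)               # full FOV, periphery included
    else:
        process(center_crop(frame))  # center only, on most frames
```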
 
Rattling off a list of arguments without elaborating or explaining is a logical fallacy called a "Gish Gallop."

I can understand the argument for inadequate camera positions and insufficient sensors, although I disagree. But please, explain to us how Tesla has an insufficient quality of training data or an abundance of edge cases as compared to any other company working on an AV.

Gotta love in-denial responses. They might make sense if this were the first time these points were ever mentioned. Once again, you are always welcome to search for each item, previously discussed ad nauseam. :)
 
Sounds daunting. Watching Amazing Race and seeing some traffic from other third-world countries, if the goal is L5, it will be a very long time before it's solved. If there is control over the ODD, as Waymo has, it's a much easier problem to solve. They'll need evolutions in technology to miniaturize the sensors, or it may not be aesthetically pleasing to the average buyer. I, personally, don't want my car looking like a Waymo. In the meantime, a well-performing L2 solution would satisfy most people with a much more aesthetically pleasing car.

Personally, I'm a function-over-fashion type. Somewhat related: we recently replaced a cracked MY windshield, and the insurance-company-approved window installer charged $600 to align/recalibrate the camera. Larger surface areas have a higher probability of a rock strike, so there might be another advantage to getting the camera off the windshield.

If only FSD L2 performed well after 7 years. I heard an interview this morning where the Uber CEO estimated another 5-10 years before anything practical is available. FSD L2 might be on that timeline.
 
Uber CEO estimated another 5-10 years. FSD L2 might be on that timeline.
This is what I thought nearly five years ago (actually I said it might not be possible in that timeframe, so I guess I was giving a minimum? I was a naïf then - and many here would likely argue I still am!). But now I think it is still probably 5-10 years. Or more.

But I am a bit more optimistic now; hardware and software are improving fast; 5-10 years may be possible now. Maybe.

It’s really hard though, especially to get broadly useful L2, with our cars driving us around basically everywhere in good conditions. Likely harder even than L4/L5, as I’ve said before.

In any case, it is much more clear to me now it won’t happen for my car. (I had HW 2.5 at the time, and HW3 capabilities weren’t completely clear.)
 
This is what I thought nearly five years ago (actually I said it might not be possible in that timeframe, so I guess I was giving a minimum? I was a naïf then - and many here would likely argue I still am!). But now I think it is still probably 5-10 years. Or more.

But I am a bit more optimistic now; hardware and software are improving fast; 5-10 years may be possible now. Maybe.

It’s really hard though, especially to get broadly useful L2, with our cars driving us around basically everywhere in good conditions. Likely harder even than L4/L5, as I’ve said before.

In any case, it is much more clear to me now it won’t happen for my car. (I had HW 2.5 at the time, and HW3 capabilities weren’t completely clear.)
Wow Alan, I read the post you linked from 4.5 years ago.
You had some pretty low expectations at the time and were dubious about it working anytime soon. Some of the things you wanted have pretty much materialized, and some haven't.
Nice insight!