Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
The next big milestone for FSD is 11. It is a significant upgrade, with fundamental changes to several parts of the FSD stack, including a totally new way to train the perception NN.

From AI day and Lex Fridman interview we have a good sense of what might be included.

- Object permanence both temporal and spatial
- Moving from “bag of points” to objects in NN
- Creating a 3D vector representation of the environment all in NN
- Planner optimization using NN / Monte Carlo Tree Search (MCTS)
- Change from processed images to “photon count” / raw image
- Change from single image perception to surround video
- Merging of city, highway and parking lot stacks a.k.a. Single Stack
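
To make the planner item above a bit more concrete, here is a bare-bones UCT-style MCTS loop on a toy control problem. The state, actions, and reward are entirely made up for illustration and have nothing to do with Tesla's actual planner; it just shows the select/expand/simulate/backpropagate cycle that MCTS refers to:

```python
import math, random

# Toy "driving" problem: pick accelerations to end near a target speed.
# Purely illustrative -- state, actions, and reward are invented here.
ACTIONS = [-1, 0, 1]          # decelerate / hold / accelerate
TARGET, HORIZON = 5, 8

def rollout(speed, depth):
    """Random playout; reward is closeness to TARGET at the horizon."""
    while depth < HORIZON:
        speed += random.choice(ACTIONS)
        depth += 1
    return -abs(speed - TARGET)

class Node:
    def __init__(self, speed, depth):
        self.speed, self.depth = speed, depth
        self.children = {}        # action -> Node
        self.visits, self.value = 0, 0.0

def uct_select(node):
    # Pick the child balancing average reward and exploration (UCT).
    return max(node.children.items(),
               key=lambda kv: kv[1].value / (kv[1].visits + 1e-9)
               + 1.4 * math.sqrt(math.log(node.visits + 1) / (kv[1].visits + 1e-9)))

def mcts(root, iters=2000):
    for _ in range(iters):
        node, path = root, [root]
        # Selection: descend while the node is fully expanded.
        while len(node.children) == len(ACTIONS) and node.depth < HORIZON:
            _, node = uct_select(node)
            path.append(node)
        # Expansion: try one untried action.
        if node.depth < HORIZON:
            a = random.choice([a for a in ACTIONS if a not in node.children])
            child = Node(node.speed + a, node.depth + 1)
            node.children[a] = child
            path.append(child)
            node = child
        # Simulation + backpropagation.
        reward = rollout(node.speed, node.depth)
        for n in path:
            n.visits += 1
            n.value += reward
    # Best first action = most-visited child.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

best = mcts(Node(speed=0, depth=0))
print(best)   # starting below TARGET, accelerating should dominate
```

The "NN" part of "NN / MCTS" would replace the random rollout and the action prior with learned networks, in the AlphaZero style; this sketch only shows the tree-search skeleton.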

Lex Fridman's interview of Elon, starting with the FSD-related topics.


Here is a detailed explanation of Beta 11 in layman's terms by James Douma, from an interview recorded after the Lex podcast.


Here is the AI Day explanation in 4 parts.




Here is a useful blog post asking a few questions to Tesla about AI day. The useful part comes in comparison of Tesla's methods with Waymo and others (detailed papers linked).

 
I'm not an AI expert by any means, but based on my limited understanding, that's actually not encapsulated in #1, which was:



Obviously the simulations don't have enough data to cover all of the interesting edge cases that people are seeing, or else we wouldn't be seeing them. We don't know if they're even close; it could be five orders of magnitude too few.

More importantly, you can never assume that generated simulations created by a GAN (generative adversarial network) will ever become a representative sample of real-world conditions. GANs, for folks who aren't familiar or need refreshing, generate new training data by imitating existing training data. In Tesla's case, this means creating new sequences of input video frames from multiple angles that could plausibly occur in the real world, using a large corpus of existing input video as examples of what the real world looks like.

The problem is, you really can't assume that the underlying training data used to train the GAN is sufficiently diverse (or even within orders of magnitude of being sufficiently diverse). Thus, they potentially will never be able to find huge swaths of edge cases without more training data, because GANs trained with the existing training data will never rule in those types of edge cases as being plausible.
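
A toy illustration of that coverage problem, with made-up numbers and a simple fitted Gaussian standing in for any generative model trained by imitation: a generator that imitates its training distribution assigns essentially zero probability to scenarios far outside it, so sampling from it will practically never surface those edge cases.

```python
import random, statistics

random.seed(42)

# Stand-in "training scenarios": one invented feature, say pedestrian
# distance in meters, drawn from what the fleet happened to record.
observed = [random.gauss(20.0, 3.0) for _ in range(10_000)]

mu = statistics.fmean(observed)
sigma = statistics.stdev(observed)

# "Generator" that imitates the training data, like a drastically
# simplified GAN: it can only sample from the fitted distribution.
def generate():
    return random.gauss(mu, sigma)

# A rare real-world edge case the fleet never recorded: distance ~2 m.
samples = [generate() for _ in range(1_000_000)]
hits = sum(s < 2.0 for s in samples)
print(hits)  # ~0: the generator almost never proposes what it never saw
```

A real GAN is far more expressive than a Gaussian, but the failure mode is the same in kind: it interpolates within the support of its training data rather than inventing plausible scenarios far outside it.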

And at that point, somebody has to tell the fleet to send back mountains of video clips with specific combinations of tags, e.g. pedestrians standing in a bike lane or whatever (yes, this example is a joke, but you understand the point) and add them to the training sets for the GAN. For the problems that they know about, simulation is a great way to iterate and improve that aspect of the model, but at some point, it's like playing Whac-A-Mole. You're never going to find every possible unusual road condition that way, because there are just entirely too many, realistically speaking.

Now here's where it gets interesting. If there are cases where a driving model is just marginally good enough, there's a decent chance it is marginal because there isn't much driving model training data that covers those edge cases. If a model change negatively affects a large number of those marginal cases, and if those happen often enough, the average overall behavior of the driving could get worse even when it massively improves some other problem that occurs less frequently than the sum total of all of those individually rare edge cases.

Worse, because such edge cases are underrepresented in the training data, the simulation results won't necessarily tell you that the average behavior is going to get worse unless they're somehow compensating for that underrepresentation (and if they knew that those edge cases were underrepresented, they presumably wouldn't still be underrepresented in the training data, so that seems unlikely to actually be possible).
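
The arithmetic behind that claim is easy to show with invented numbers: if edge cases are underrepresented in the evaluation set, the simulation-weighted average can improve while the real-world-weighted average gets worse.

```python
# Made-up numbers illustrating the sampling-bias argument above:
# two scenario types, with their true share of real-world driving
# versus their share of the simulation/evaluation set.
real_share = {"common_road": 0.90, "rare_edge": 0.10}
sim_share  = {"common_road": 0.99, "rare_edge": 0.01}   # edge cases underrepresented

# Hypothetical intervention rates per scenario, before/after a model change:
before = {"common_road": 0.020, "rare_edge": 0.050}
after  = {"common_road": 0.015, "rare_edge": 0.120}     # big win common, big loss rare

def avg(rates, weights):
    return sum(rates[k] * weights[k] for k in rates)

print(avg(before, sim_share), avg(after, sim_share))    # simulation says: improved
print(avg(before, real_share), avg(after, real_share))  # real world: got worse
```

With these numbers the simulation-weighted rate falls from 0.0203 to 0.01605 while the real-world rate rises from 0.023 to 0.0255, which is exactly the "looks better in sim, worse on the road" scenario described above.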

Thus, it is entirely possible for a release to seem better on average in simulated driving before the release and still be considerably worse on average in the real world, particularly if the vehicles chosen for the early rollouts are not a representative sample of the real world. :)

And streets in San Francisco, Palo Alto, Mountain View, Fremont, and other similar areas are likely massively overrepresented in both the rollouts (particularly in the early stages involving employees) and in captured data, simply because they are massively overrepresented in terms of the number of cars on the road. So unless the experiment design is quite nonrandom, massively biasing vehicle selection based on geographical location in an effort to balance out the nonrandom geographical distribution of the vehicles themselves, we can probably safely say that neither the data that feeds into the simulations nor the early rollout vehicles are likely to be particularly representative samples of the real world.

So here's what I'm wondering: Why doesn't Tesla allow the MCU to upload a firmware supplement bundle to the FSD computer that adds a few new models that run in shadow mode for comparison purposes? If a fault occurs while running one of those models, ignore the fault, stop running the model, and report the failure. If they kept the models entirely in RAM to minimize flash wear, it seems likely to be mostly harmless.

With that approach, Tesla could silently push bundles out to a large percentage of the fleet when on Wi-Fi on a daily basis, enabling them to get much more data about how each model change improves things or makes them worse. Assuming they have enough people to analyze the incoming telemetry, and to manually tag (or verify AI-based auto-tagging of) video captured whenever the shadow models' driving decisions would diverge too far from the actual prod model's (where feasible), they could potentially iterate on the models more quickly, rather than waiting for a release push and hoping that it actually makes things better.
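
The core of that shadow-mode idea fits in a few lines. This is a hypothetical sketch, not anything Tesla has described: both models see the same inputs, only the prod model's output drives the car, shadow-model faults are swallowed, and frames where the two disagree beyond a threshold get flagged for upload.

```python
from dataclasses import dataclass

# Hypothetical decision format; the field names are invented for illustration.
@dataclass
class Decision:
    steer: float   # normalized steering command
    accel: float   # normalized accel/brake command

def divergence(a: Decision, b: Decision) -> float:
    return max(abs(a.steer - b.steer), abs(a.accel - b.accel))

def run_frame(frame, prod_model, shadow_model, log, threshold=0.15):
    prod = prod_model(frame)
    try:
        shadow = shadow_model(frame)           # faults in shadow are non-fatal
    except Exception:
        log.append(("shadow_fault", frame))
        return prod
    if divergence(prod, shadow) > threshold:
        log.append(("diverged", frame))        # candidate clip for upload/triage
    return prod                                # only prod output drives the car

# Toy usage with stand-in models:
prod_model   = lambda f: Decision(steer=0.0, accel=0.3)
shadow_model = lambda f: Decision(steer=0.5, accel=0.3)
log = []
run_frame("frame_001", prod_model, shadow_model, log)
print(log)  # [('diverged', 'frame_001')]
```

The interesting engineering is in everything this sketch omits: fitting extra models in the FSD computer's compute budget, choosing the divergence metric, and rate-limiting the uploads.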

Alternatively, why doesn't Tesla build NNs that are trained on their simulation data's metadata — things like how often certain combinations of tags occur in close proximity, how often particular tags move along certain vectors, etc. — and run those on every car in the fleet in an effort to identify road features, conditions, behaviors, etc. that are underrepresented in the simulation training data, and then capture more data to cover them? (I'm assuming they don't do this.)

Or both. My vote would be both.

Anyway, it seems to me that simulation is great, and it is absolutely critical as a part of the QA process, but assuming the telemetry is good enough and assuming there's enough extra horsepower to do it, daily updated live A/B experiments at the model level seem like a better way to move fast and (pretend to) break things. 😁
You really wrote a 14 paragraph response. God bless.
 
So there are now FOUR main firmware versions with FSD beta 11.3.6:

2023.12.10
2023.12.11
2023.20.6
2023.20.7

WTH?? Is this Tesla getting ready to deploy FSD to the main branches with a relatively stable FSD version?

If so, are they keeping us OG testers in a bleeding edge branch (i.e. 11.4.x)?

I’d personally much rather be a bit behind on FSD versions but ahead on main feature releases than the other way around. How do I get off 11.4.4 and onto the latest 2023.20 branch with FSD 11.3.6?
 
Amused by what I think was a "lane departure warning" while the car was making a right turn with FSDb (S De Anza Blvd to McClellan Rd). Was the turn being less than 90 degrees confusing it? This was during a drive where the car was pretty confidently executing turns and lane changes.

While turning left from W Maude Ave to the 237 Service Road the car doesn't correctly make the turn (I have to intervene); it appears that it was going to go straight. In earlier FSDb builds it would make the left but would be unsure about positioning. Then there were builds that could execute it. The most recent builds fail. It's another less-than-90-degree turn. I wonder if the car is having trouble localizing (either failing to build a picture from what it sees or not trusting it) on these turns that are not perfectly 90 degrees.

If there are no islands when turning left it really likes to clip the corner. It will adjust to accommodate traffic in the other lane, but I find it still cuts a little close. Even the car "thinks so" sometimes because it ends up swerving on occasion. If I was manually driving, I would have taken it a little wider and would have not needed to make the adjustment.

The car thinks there is a "stop line" as it turns left from West Maude Ave to Borregas Ave in Sunnyvale, CA. The display shows the chevrons leading to a line, but no stop signs or traffic lights are displayed. Am I remembering correctly that Tesla uses OpenStreetMap? Looking at that section, I see there is a segment change here; could that be part of the problem? The issue doesn't occur going southbound on Borregas toward Maude. Hmm, this is also not quite a 90-degree turn.
 
So there are now FOUR main firmware versions with FSD beta 11.3.6:

2023.12.10
2023.12.11
2023.20.6
2023.20.7

WTH?? Is this Tesla getting ready to deploy FSD to the main branches with a relatively stable FSD version?

If so, are they keeping us OG testers in a bleeding edge branch (i.e. 11.4.x)?

I’d personally much rather be a bit behind on FSD versions but ahead on main feature releases than the other way around. How do I get off 11.4.4 and onto the latest 2023.20 branch with FSD 11.3.6?
If you subscribe to FSD, you can cancel your subscription and you will receive an update. If you bought FSD, you are treated as third class and stuck perpetually behind (other than the holiday update). Tesla needs to update ALL FSD Beta cars to a current branch and not penalize the people who are helping FSD Beta the most. [/rant]
 
Worked a night shift last night so had a nice early drive home with minimal traffic. Thought it was an okay time to see how it did with minimal variables.

Like my first post in this thread, its "planner" sucks. There seems to be no rough-draft plan to get where you need to go other than coarse navigation data. Therefore the car is constantly out of position on major arterial roads. It is on freeways too, but there you have more time to force it to merge toward the exit.

FSD cannot cross multiple lanes of traffic to make a turn.

Constantly picks the wrong lane for upcoming turns on well-marked city streets despite little traffic. Requires taking over to get into turn lanes.

Definitely does NOT like narrowing roads of any type. Will consistently stay in the right lane despite the road narrowing and drive all the way to the mandatory merge.

— interesting side case of this is an unmarked pavement road that narrows as I get close to my home. FSD wanted to drive me into a ditch and hit a mailbox rather than center itself on the road as it narrows. There was no opposite direction traffic to avoid.

Definitely not for the faint of heart or someone who is not paying constant attention and knows the local roads
 
Took my first highway drive on 4.4 yesterday to the mountains southeast of town, about 120 miles each way. I only engaged on the highway portion, about 105 miles each way, 70 on I10 and 35 on a 2 lane. The 2 lane has long straight stretches but also a variety of bends, many of them sharp, and dips of various depths. 4.4 performed almost flawlessly for me the entire trip. Zero phantom braking events or even slowdowns while cresting the big dips as I had experienced on all previous versions. It slowed naturally for the sharp bends on the 2 lane and accelerated out of them with ease, probably better than I would have done, holding its place in the lane throughout. Though I was always ready to take over I never had to. On I10 the entire trip was smooth and uneventful. I had the speed set at 80 and when faster cars came up behind it signaled and moved to the right just as I would, not waiting until they rode my bumper. All in all an excellent drive!

As I really only plan to use it on highway trips I must say if it kept performing like this I'd be happy :).
 
If there are no islands when turning left it really likes to clip the corner. It will adjust to accommodate traffic in the other lane, but I find it still cuts a little close. Even the car "thinks so" sometimes because it ends up swerving on occasion. If I was manually driving, I would have taken it a little wider and would have not needed to make the adjustment.

Agreed. FSD has been doing that for too long. It's unsafe and inconsiderate.

Wonder if it's a training generalization, or the path planner is allowing steering control to force smooth arcs with disregard for road markings.
 
Agreed. FSD has been doing that for too long. It's unsafe and inconsiderate.

Wonder if it's a training generalization, or the path planner is allowing steering control to force smooth arcs with disregard for road markings.
Two observations:
IME, FSDb will take a left turn tighter if there’s no car in the lane to the left of the destination lane - not unlike a human driver.

Also, taking a left turn wider is more likely to put you in the path of oncoming traffic.

One consistent problem I’ve observed is if the car is in the right/wide lane when there are 2 left turn lanes it will veer wider than necessary. I’m guessing this is to avoid cutting off the car to its left but if there’s an oncoming car turning left it can’t handle it and ends up stopping in the middle of the intersection.
 
So there are now FOUR main firmware versions with FSD beta 11.3.6:

2023.12.10
2023.12.11
2023.20.6
2023.20.7

WTH?? Is this Tesla getting ready to deploy FSD to the main branches with a relatively stable FSD version?

If so, are they keeping us OG testers in a bleeding edge branch (i.e. 11.4.x)?

I’d personally much rather be a bit behind on FSD versions but ahead on main feature releases than the other way around. How do I get off 11.4.4 and onto the latest 2023.20 branch with FSD 11.3.6?
It will always be the other way around until FSD development slows down and doesn’t take so long between versions. If you wonder why, just think about the development process.
 
Today 4.4 cut in front of oncoming cars to make a left turn when it should have stopped and waited. Since the driver in the other lane must have been already terrified or at least angry I goosed it and turned hard into the turn, using insane mode acceleration to avoid a potential mess. Even if my car was not at risk of being hit the oncoming driver would have been forced to slow down and a rear end collision could have happened in the following phalanx.

I disengage FSDb often for various convenience issues but this was terrifyingly unsafe.
 
Today 4.4 cut in front of oncoming cars to make a left turn when it should have stopped and waited. Since the driver in the other lane must have been already terrified or at least angry I goosed it and turned hard into the turn, using insane mode acceleration to avoid a potential mess. Even if my car was not at risk of being hit the oncoming driver would have been forced to slow down and a rear end collision could have happened in the following phalanx.

I disengage FSDb often for various convenience issues but this was terrifyingly unsafe.
Are you sending feedback for these events? They need to know. I wonder what type of monitoring is happening when feedback is not sent.
 
Low dose ketamine is being used quite often to treat depression. I’m not in psychiatry but from what I understand it’s been quite beneficial for some.
Ketamine has been on shortage for me in the ICU. I like it for many uses, including procedural sedation, but it is tough to get (probably because people are going to clinics and getting dissociated for mental health stuff).
 
Ketamine has been on shortage for me in the ICU. I like it for many uses, including procedural sedation, but it is tough to get (probably because people are going to clinics and getting dissociated for mental health stuff).
Same for us - it's a wonderful narcotic-sparing analgesic adjunct. I use it in the OR all the time and it's frustrating when we can't get it. Ketamine's an old drug that's seen quite a resurgence in recent years. I have no idea what the underlying cause of the shortage is, but the increased use across the board may certainly be part of it. Hopefully the supply stabilizes soon.
 