The next big milestone for FSD is 11. It is a significant upgrade, with fundamental changes to several parts of the FSD stack, including a totally new way to train the perception NN.

From AI Day and the Lex Fridman interview, we have a good sense of what might be included.

- Object permanence both temporal and spatial
- Moving from “bag of points” to objects in NN
- Creating a 3D vector representation of the environment all in NN
- Planner optimization using NN / Monte Carlo Tree Search (MCTS) (see the toy sketch after this list)
- Change from processed images to “photon count” / raw image
- Change from single image perception to surround video
- Merging of city, highway and parking lot stacks a.k.a. Single Stack
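
For a flavor of what "planner optimization using NN / MCTS" could mean in practice, here is a toy sketch in Python. Everything in it (the action set, the transition model, the reward) is invented for illustration; Tesla has not published its planner internals, so treat this as the textbook algorithm, not their implementation.

```python
import math
import random

ACTIONS = ["keep_lane", "lane_left", "lane_right", "brake"]

def step(state, action):
    """Hypothetical transition model over a (lane, speed) state."""
    lane, speed = state
    if action == "lane_left":
        lane -= 1
    elif action == "lane_right":
        lane += 1
    elif action == "brake":
        speed = max(0.0, speed - 5.0)
    # Invented reward: value forward progress, penalize leaving lane 0.
    return (lane, speed), speed / 30.0 - abs(lane)

class Node:
    def __init__(self, state):
        self.state = state
        self.children = {}   # action -> (edge_reward, child Node)
        self.visits = 0
        self.value = 0.0     # sum of returns seen through this node

def ucb_action(node, c=1.4):
    """UCB1 selection: average return plus an exploration bonus."""
    def ucb(action):
        _, child = node.children[action]
        if child.visits == 0:
            return float("inf")
        return (child.value / child.visits
                + c * math.sqrt(math.log(node.visits + 1) / child.visits))
    return max(node.children, key=ucb)

def mcts(root_state, iterations=2000, horizon=5):
    root = Node(root_state)
    for _ in range(iterations):
        node, path, ret = root, [root], 0.0
        for _ in range(horizon):
            untried = [a for a in ACTIONS if a not in node.children]
            if untried:
                action = random.choice(untried)
                next_state, r = step(node.state, action)
                node.children[action] = (r, Node(next_state))
            else:
                action = ucb_action(node)
            r, node = node.children[action]
            ret += r
            path.append(node)
        for n in path:        # backpropagate the rollout return
            n.visits += 1
            n.value += ret
    # Pick the most-visited first action, the usual MCTS final choice.
    return max(root.children, key=lambda a: root.children[a][1].visits)

print(mcts((0, 25.0)))   # e.g. "keep_lane"
```

The appeal of MCTS here is that compute gets focused on the most promising maneuver sequences via the UCB selection rule, rather than exhaustively scoring every candidate trajectory.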

Lex Fridman's interview of Elon, starting with FSD-related topics.


Here is a detailed explanation of Beta 11 in layman's terms by James Douma, from an interview done after the Lex podcast.


Here is the AI Day explanation in 4 parts.




Here is a useful blog post asking Tesla a few questions about AI Day. The useful part is the comparison of Tesla's methods with Waymo's and others' (detailed papers linked).

 
Maybe the voice note option in 11.3 could be a precursor to a hazard reporting system. Eh, probably not.
Could it be something like a cockpit flight/voice recorder, capturing in-cabin audio before, during, and after an event? They are tracking pretty much everything else in some pre/post time frame (with the possible exception of the in-cabin camera), so why not add some audio?
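
Purely as an illustration of that pre/post-event idea (the same trick a cockpit voice recorder uses), here is a minimal sketch with made-up names and durations; nothing here reflects Tesla's actual firmware.

```python
from collections import deque

class EventCapture:
    """Rolling pre/post-event capture, like a cockpit voice recorder."""
    def __init__(self, pre_frames=300, post_frames=300):
        self.pre = deque(maxlen=pre_frames)   # always-on rolling window
        self.post_frames = post_frames
        self.remaining = 0                    # post-event frames still owed
        self.current = None
        self.clips = []                       # finished pre+post captures

    def push(self, frame):
        self.pre.append(frame)
        if self.remaining:
            self.current.append(frame)
            self.remaining -= 1
            if self.remaining == 0:
                self.clips.append(self.current)
                self.current = None

    def trigger(self):
        """On an event, snapshot the pre-roll and keep recording."""
        self.current = list(self.pre)
        self.remaining = self.post_frames

# Usage: feed frames continuously; call trigger() when the event happens.
cap = EventCapture(pre_frames=3, post_frames=2)
for i in range(5):
    cap.push(i)
cap.trigger()          # pre-roll snapshot is [2, 3, 4]
cap.push(5); cap.push(6)
print(cap.clips)       # [[2, 3, 4, 5, 6]]
```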
 
Potholes are a lot like porn: we all know hardcore porn when we see it, but what about the gray areas? A large pothole with no cars around is easy to identify and avoid, but it gets harder as the potholes become smaller, identification comes later, and more jerk is needed to avoid them. How should it handle cars and/or VRUs nearby? At what point is avoiding more dangerous than going through? Lots of subjective behavior that will be hard to fine-tune to satisfy most people.

EDIT: A good analogy is speed bumps. Beta is coded to identify them, but its responses and non-responses to them are a hodgepodge.
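
That "avoiding more dangerous than going through" question is, at bottom, a cost comparison. Here is a toy sketch with invented weights just to make the tradeoff concrete; a real planner would use far richer context than four scalars.

```python
# Toy "swerve vs. absorb" cost comparison; every weight is invented.
def should_avoid(pothole_severity, required_lateral_jerk,
                 adjacent_traffic, vru_nearby):
    impact_cost = pothole_severity             # 0..1: damage / discomfort
    swerve_cost = 0.4 * required_lateral_jerk  # 0..1: passenger jerk
    if adjacent_traffic:
        swerve_cost += 0.5                     # risk of a side conflict
    if vru_nearby:
        swerve_cost += 1.0                     # never trade a VRU for a rim
    return swerve_cost < impact_cost

# Late detection means high required jerk, so ride it out:
print(should_avoid(0.3, 0.9, adjacent_traffic=False, vru_nearby=False))  # False
# Same pothole seen early, gentle maneuver available:
print(should_avoid(0.3, 0.2, adjacent_traffic=False, vru_nearby=False))  # True
```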

Besides the gray areas, humans recall and anticipate the locations of potholes and speed bumps. Funny thing about the negative feedback loop when you feel a hard wheel/suspension impact: emotional responses trigger memory storage. Not so much when FSD hits a pothole; its memory storage is flatline.

In the near term, FSD may not reach the promised land of being comparable to normal human driving safety, let alone X times better. There's so much driving context FSD isn't capable of sensing, and I'm not sure it will be possible even in 10 years or so. Until then we'll likely continue with the clunky, overpriced arcade-video-game version.
 
I agree with what you're saying.

Why does Tesla program their cars to routinely break the law, then? Is that just because at the moment they are L2?


Yup... all the state laws I'm citing that require a vehicle maker to ensure their vehicles obey all traffic laws are specifically discussing L3 and above systems.

I'm not aware of any state laws governing L2 and lower ADAS but I'm open to correction on that.
 
  • Improved longitudinal control response smoothness when following lead vehicles by better modeling the possible effect of lead vehicles’ brake lights on their future speed profiles.
  • Improved decision making at crosswalks by leveraging neural network based ego trajectory estimation in place of approximated kinematic models.
  • Improved reliability and smoothness of merge control, by deprecating legacy merge region tasks in favor of merge topologies derived from vector lanes.
Seems like at least these items are part of moving some of "control" to neural networks instead of legacy C++ heuristic approximations. Broadly these were controlling when and how much braking to apply in certain detected scenarios, so there's potential for even more of these to be converted for other situations.
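
To make the brake-light item concrete, here is a minimal sketch contrasting a pure kinematic extrapolation with one that conditions on brake-light state. The numbers and function names are mine, and the release note implies the real version is a learned speed profile rather than a fixed prior.

```python
# Sketch of the difference between a pure kinematic extrapolation and
# one that conditions on the lead car's brake lights. Values invented.
def predict_speed_kinematic(v0, accel, t):
    """Legacy-style: extrapolate current speed and acceleration."""
    return max(0.0, v0 + accel * t)

def predict_speed_with_brake_lights(v0, accel, t, brake_lights_on):
    """Bias the predicted profile toward harder deceleration when the
    lead vehicle's brake lights are lit, even before accel shows it."""
    expected_decel = -2.5 if brake_lights_on else 0.0   # m/s^2, assumed prior
    blended = min(accel, expected_decel) if brake_lights_on else accel
    return max(0.0, v0 + blended * t)

# Lead car at 20 m/s, measured accel ~0, but brake lights just lit:
print(predict_speed_kinematic(20.0, 0.0, 2.0))                # 20.0
print(predict_speed_with_brake_lights(20.0, 0.0, 2.0, True))  # 15.0
```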

It'll be interesting to see if people notice control improvements more than what has been listed so far or we'll need to wait for future 11.x. In particular, I'll be checking if 11.3 avoids braking for an intersection under a bridge.
Brake lights are a good way to anticipate lead-vehicle behavior, as are brake lights on vehicles ahead of the lead vehicle or a distant traffic light turning yellow. It's odd the team only just started implementing inputs from lead-vehicle brake lights.

Hopefully they have an adult-supervision sanity routine to monitor the new control NNs and minimize the number of NN edge cases that seem impossible to solve and train for.
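
One way such a sanity routine could look, purely hypothetically: clamp the NN's commanded acceleration to physically plausible bounds and fall back to a conservative heuristic on nonsense output.

```python
# Hypothetical "adult supervision" wrapper around a control NN's output:
# clamp commanded acceleration to plausible bounds, fall back on NaN.
MAX_DECEL = 9.0   # m/s^2, roughly the tire friction limit on dry asphalt
MAX_ACCEL = 4.0   # m/s^2

def sanity_check(nn_accel_cmd, fallback_cmd):
    if nn_accel_cmd != nn_accel_cmd:     # NaN never equals itself
        return fallback_cmd
    return max(-MAX_DECEL, min(MAX_ACCEL, nn_accel_cmd))

print(sanity_check(float("nan"), -1.0))  # -1.0: fallback used
print(sanity_check(-25.0, -1.0))         # -9.0: clamped to plausible braking
```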
 
- Improved lane changes, including: earlier detection and handling for simultaneous lane changes, better gap selection when approaching deadlines, better integration between speed-based and nav-based lane change decisions, and more differentiation between the FSD driving profiles with respect to speed-based lane changes.

This should be a great help in smartening up the misbehavior that causes the majority of my interventions and lane-change overrides. I've been wanting a toggle that would essentially disable speed-based lane changes, but this will mostly be accomplished if the Chill setting starts actually doing its job.

What's not clearly referenced here are the mysterious phantom lane change threats that inappropriately activate the blinker, confusing everyone, and then disappear after three or four ticks. I think these have more to do with confusion in identifying turn vs. through lanes, so some of the other lane-related release notes will hopefully address those.
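
For a concrete (and entirely invented) reading of "better gap selection when approaching deadlines": score candidate gaps by size, but reject gaps the car can't reach before the lane change must be complete.

```python
# Toy reading of "gap selection when approaching deadlines": invented
# scoring, with distances in metres measured from the ego vehicle.
def score_gap(gap_length_m, gap_center_ahead_m, deadline_ahead_m):
    if gap_center_ahead_m > deadline_ahead_m:
        return float("-inf")          # unreachable before the lane ends
    urgency = gap_center_ahead_m / max(deadline_ahead_m, 1.0)
    return gap_length_m * (1.0 - 0.5 * urgency)  # big, early gaps win

gaps = [(12.0, 40.0), (20.0, 180.0), (8.0, 10.0)]  # (length, center ahead)
deadline = 150.0   # metres until the lane change must be complete
print(max(gaps, key=lambda g: score_gap(*g, deadline)))
# (12.0, 40.0): the 20 m gap is past the deadline, so it's rejected
```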

I'll be following the reports of the early-access influencer users with great interest.
Turn vs. through lane confusion is my big problem with FSD beta. I've found that intersections where the lanes show correctly on the vector Google-map view work fine, but those that are not marked are frequently misidentified visually. I wish Tesla would allow us to "decorate" their maps with additional detail showing correct lanes, driveway locations, etc., to let the car learn from experience rather than repeating the same mistake every day.
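
Purely hypothetically, a user-supplied map "decoration" might be as simple as a record like the one below; no such Tesla interface exists today, and every field name is made up.

```python
# Hypothetical owner-supplied lane annotation; field names invented.
lane_override = {
    "intersection_id": "osm:node/123456789",   # made-up map identifier
    "approach_heading_deg": 270,               # which approach this covers
    "lanes": [                                 # left-to-right lane roles
        {"index": 0, "allowed": ["left"]},
        {"index": 1, "allowed": ["through"]},
        {"index": 2, "allowed": ["through", "right"]},
    ],
    "source": "owner_annotation",
}
```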

I liked FSD beta quite a lot, but this lane problem required intervention frequently in my neighborhood and drove my wife crazy as a passenger. I will resubscribe after a few more iterations. I'm optimistic they can solve this but will wait until they do.
 
EDIT: also this suggests there might be a new fence visualization for denoting where the car will stop?
Why would there be a stop visualization for highway driving? Even if there's traffic, it may start moving before you stop at the line, so it just doesn't seem very helpful. What would be nice is a fence-like line for setting the distance preference: changing the distance would make the red line move closer and further, so you could visualize how far back you'd like to stay from the car in front of you.
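
That follow-distance fence would really just be time-headway arithmetic: the line sits at (headway setting x current speed) metres ahead. A sketch with illustrative setting values, not Tesla's actual mapping:

```python
# Illustrative only: mapping a headway setting to where the "fence"
# line would be drawn. The setting values are hypothetical.
def fence_distance_m(headway_setting_s, speed_mps):
    # Distance covered in `headway_setting_s` seconds at current speed.
    return headway_setting_s * speed_mps

for setting in (1.0, 1.5, 2.0, 3.0):
    print(setting, fence_distance_m(setting, 30.0))  # 30 m/s is ~108 km/h
# 1.0 -> 30.0 m, 1.5 -> 45.0 m, 2.0 -> 60.0 m, 3.0 -> 90.0 m
```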
 
I suspect it’d be for city driving mostly
 
So every time Beta starts to slow, it will draw a red line on the screen? I wonder if this is an NHTSA thing or just something Tesla added. I guess it's not a bad idea, but if the braking parameters change, like when a VRU steps out, will the red line move closer as it brakes harder?

Also, when coming to a red light or stop sign, will it draw the line on the stop line?

Damn, hope Cook and Chris (DirtyTesla) get it soon, like tonight.
 
The recall release doesn't mention any accidents related to rolling stops, only a "risk."
Right, but the recall was actually far more hedged than just "risk".

The recall did not actually cite a risk. Instead, it cited a potential, possible increase in risk of a collision:
In specific and rare circumstances,
with FSD Beta engaged, and
if the car makes certain driving maneuvers and
the driver doesn't intervene,
it could
potentially infringe
and increase the risk.

I think we could call this collection of hedging terms "certain linguistic maneuvers which dramatically increase the risk of meaning nothing." Further, the recall's description of the defect itself starts by stating that in this (as in all Level 2 systems) the driver is responsible and "must intervene as needed" to maintain safety. Had there been a statistically significant number of collisions in those specific and rare circumstances, they'd have said so. As far as we know, the actual number of such collisions is zero, so the risk increase, while logical, is very small and only hypothetical.

Here again is the NHTSA's Description of the Safety Risk:

In the specific and rare circumstances described above when a Tesla vehicle is operating with a software version of FSD Beta as described below and with FSD Beta engaged, certain driving maneuvers could potentially infringe upon local traffic laws or customs, which could increase the risk of a collision if the driver does not intervene.
I think FSD should not do stupid stuff, and the suggested improvements really will make it safer. However, this "recall" is slowing down the ongoing improvement process.

I do worry a bit that if FSD does eventually get substantially better, it will tend to lull drivers into complacency, so FSD will need to be really, really good. On the other hand, without FSD we are killing around 40,000 people a year on US roads, so there is plenty of low hanging safety fruit to harvest along the way.
 
I wonder if this is an employee-only feature that won't be in the consumer release notes. No way would they remove the camera-data feedback and give 400k people the ability to record "analog" messages that MUST be listened to, then "decoded", tagged, and labeled by humans. That would require tens of thousands of human employees.

Maybe they have found the camera record button is not that useful for understanding what actually happened. So, instead of getting 300k x 5 camera clips per day that they can't understand, they can analyze the voice messages (which I'm guessing will also trigger a camera recording). Together, these can make out what happened much better.

I've always wanted something like this, so I'm glad to see it. The other option would be a web interface where we can see each of our interventions and add text annotations. Not many are going to use that, though...
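
A hypothetical shape for such an intervention record, pairing the voice note with the clips it triggers; this is not a real Tesla schema, and every field name is made up.

```python
# Hypothetical record pairing a voice note with triggered camera clips.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InterventionReport:
    timestamp_utc: str
    trigger: str                                       # e.g. "voice_note"
    clip_ids: List[str] = field(default_factory=list)  # triggered clips
    audio_id: Optional[str] = None                     # the voice note itself
    owner_text: str = ""                               # later web annotation

report = InterventionReport(
    timestamp_utc="2023-02-25T18:04:00Z",
    trigger="voice_note",
    clip_ids=["front", "left_repeater"],
    audio_id="note_0001",
    owner_text="Phantom blinker at the fork again.",
)
print(report.trigger, report.clip_ids)
```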
 
Tesla themselves cite at least 18 warranty claims resulting from the issues in the recall.

What do you suppose those claims were if nothing was damaged by hitting something?
I think that just means people who were disgruntled about FSD performance and decided to claim it as a warranty issue ("I didn't get what I paid for"), which is a topic of constant heated debate.

I assume Tesla is required to report those claims and cannot brush them aside just because they disagree that it's a warranty issue.

It doesn't mean there was collision damage, as that is not covered by warranty anyway.
 

If the claim isn't actually covered by warranty, it wouldn't be a warranty claim. There are pretty specific accounting rules on that.

If a system on the vehicle fails under warranty, leading to damage, that repair is absolutely a warranty claim.



A warranty claim is not necessarily an accident; anything else is speculation.

Then what else COULD it be? I'm not asking you to tell us what it was; I'm asking what other possibilities even exist, given the rules around warranty-claim accounting and the specific things the recall covers. It's entirely possible I'm not thinking of a possibility, and I'm totally open to hearing what else would fit the info we DO have.