The next big milestone for FSD is v11. It is a significant upgrade, with fundamental changes to several parts of the FSD stack, including a totally new way to train the perception NN.

From AI Day and the Lex Fridman interview, we have a good sense of what might be included.

- Object permanence both temporal and spatial
- Moving from “bag of points” to objects in NN
- Creating a 3D vector representation of the environment all in NN
- Planner optimization using NN / Monte Carlo Tree Search (MCTS) (see the sketch after this list)
- Change from processed images to “photon count” / raw image
- Change from single image perception to surround video
- Merging of city, highway and parking lot stacks a.k.a. Single Stack
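
Since the planner item above is the most algorithmically concrete one, here is a minimal, purely illustrative MCTS sketch in Python. It is a toy lane-offset problem I made up, not Tesla's planner; it only shows the select / expand / simulate / backpropagate loop the term MCTS refers to.

```python
# Toy illustration (not Tesla's code): MCTS over a tiny discretized planning
# problem -- pick lane-offset adjustments that keep the car near lane center
# while avoiding a hypothetical obstacle a few steps ahead.
import math, random

ACTIONS = [-1, 0, +1]          # steer left / keep / steer right (lane-offset units)
OBSTACLE = {3: 1}              # hypothetical: at step 3, offset +1 is blocked
HORIZON = 6

def step(offset, action):
    return max(-2, min(2, offset + action))

def reward(step_i, offset):
    if OBSTACLE.get(step_i) == offset:
        return -10.0           # collision penalty
    return -abs(offset)        # prefer staying near lane center

class Node:
    def __init__(self, offset, depth):
        self.offset, self.depth = offset, depth
        self.children = {}     # action -> Node
        self.visits, self.value = 0, 0.0

def rollout(offset, depth):
    total = 0.0
    for i in range(depth, HORIZON):
        offset = step(offset, random.choice(ACTIONS))
        total += reward(i, offset)
    return total

def mcts(root, iters=2000):
    for _ in range(iters):
        node, path = root, [root]
        # Selection: descend by UCB1 while the node is fully expanded
        while len(node.children) == len(ACTIONS) and node.depth < HORIZON:
            node = max(node.children.values(),
                       key=lambda c: c.value / c.visits +
                       1.4 * math.sqrt(math.log(node.visits) / c.visits))
            path.append(node)
        # Expansion: add one untried action
        if node.depth < HORIZON:
            a = random.choice([a for a in ACTIONS if a not in node.children])
            child = Node(step(node.offset, a), node.depth + 1)
            node.children[a] = child
            path.append(child)
            node = child
        # Simulation + backpropagation
        ret = reward(node.depth - 1, node.offset) + rollout(node.offset, node.depth)
        for n in path:
            n.visits += 1
            n.value += ret
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("first action chosen:", mcts(Node(offset=0, depth=0)))
```

The real planner would search over continuous trajectories scored by learned networks; the toy rewards here just stand in for that.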

Lex Fridman interview of Elon, starting with FSD-related topics.


Here is a detailed explanation of Beta 11 in "layman's language" by James Douma, in an interview done after the Lex podcast.


Here is the AI Day explanation in 4 parts.




Here is a useful blog post asking a few questions to Tesla about AI day. The useful part comes in comparison of Tesla's methods with Waymo and others (detailed papers linked).

 
Um. The recall notice on NHTSA's web site, previously posted, specifically said that all Teslas leaving the production lines since, I think, the first or second week in December Have The Fix.

So, no need to stop shipping, I take it.
Vehicles that were produced before the 7th and had been shipped to delivery centers, but not yet delivered, had to be held to get the update. (I think they have all had the update and been released at this point, but some people had their delivery delayed by up to a week because of the recall.)
 
You may well be right about this with your argument about perils of reducing ODD - but remember we actually have no data which would allow us to investigate the possibility of it helping lower accident rates in the cases you mention. Would be nice to have that!

Unfortunately Tesla has never attempted to provide this. (It’s actually quite difficult and subject to criticisms about methodology so I don’t really blame them for just providing their raw data (which doesn’t provide any info relevant to this topic unfortunately).)

Just important to remember that the demonstrably lower accident rates when using AP do not imply lower accident rates with AP in comparable situations. It’s even possible that using AP increases accident rates (though I kind of doubt it). The data showing lower accident rates with AP unfortunately does not allow us to conclude one way or another.

(Similarly, it’s been pointed out that collision rates with animals are certainly higher when headlights are on (ignore DRL here) - but of course that doesn’t mean that you should drive around with your headlights off!)

Anyway I guess this is kind of OT.
In a thread about the WaPo article, I posted a back of the envelope calculation based on the frequency of inattention accidents, annual death rates, fraction of cars which are Teslas, and the 8 deaths WaPo found over the last 8 years.


The recall justification parallels the WaPo article. Tesla itself responded to that on TwiXter, citing stats which are adjacent but not precisely on point:

 
I posted a back of the envelope calculation
Not amenable to back-of-the-envelope calculation. Your calculation doesn’t take into account the situations where AP might be used and whether they are relatively lower (or higher) risk for inattention. Periods where AP is used are not a random sample of driving and are subject to a lot of bias of various sorts.

It’s tricky. Even Missy Cummings was criticized (not just by Tesla folks either) for her methodology when she looked at this general topic. It’s very hard.

Please note I am not arguing that Autosteer Highways has a detrimental impact on safety. I’m arguing that we don’t know and we don’t have any data that can help determine that, either. Even if accidents never occurred when AP was used, we wouldn’t know (it just makes it more likely that it is improving safety in that case - but still not definitive).
 
I’m arguing that we don’t know and we don’t have any data that can help determine that, either.
The problem with this is that we'll never know - since we'll never have perfect data.

It's like vaccine trials - you have to go with the knowledge that very large samples tend to balance out uncontrolled items.
 
Looks like Tesla has been installing older releases after 12/8 according to Teslafi.
Tesla doesn't install updates on vehicles that are listed in TeslaFi. They deliver the update, and then it is up to the owner to schedule and install it. Those updates were likely delivered before the recall, and the owner is just now getting around to installing it. (I don't think the "pending installation" counts have gone up on any of the old versions.)
 
Not amenable to back-of-the-envelope calculation. Your calculation doesn’t take into account the situations where AP might be used and whether they are relatively lower (or higher) risk for inattention. Periods where AP is used are not a random sample of driving and are subject to a lot of bias of various sorts.

It’s tricky. Even Missy Cummings was criticized (not just by Tesla folks either) for her methodology when she looked at this general topic. It’s very hard.

Please note I am not arguing that Autosteer Highways has a detrimental impact on safety. I’m arguing that we don’t know and we don’t have any data that can help determine that, either. Even if accidents never occurred when AP was used, we wouldn’t know (it just makes it more likely that it is improving safety in that case - but still not definitive).
but what you're essentially saying is that since we'll never have perfect data we can't look at it at all.

Every analysis has limitations. The good analysts acknowledge those limitations and try to take them into account.
 
but what you're essentially saying is that since we'll never have perfect data we can't look at it at all.
I’m definitely not saying that.
Every analysis has limitations. The good analysts acknowledge those limitations and try to take them into account.
I definitely would like to see the analysis. We’d have to have data from Tesla on times of day, location, vehicle age, etc. associated with each use of FSD (and whether it is Autosteer or Autosteer City Streets) and then try to match with other vehicles to try to get rid of some of the biases.

It’s tough but it is possible to do some rough analysis I think. I’m not a data analysis guy so I have no idea but I can understand how biases arise and what the basic issues are with trying to draw conclusions from the data (summary) we have.
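
To make the matching idea concrete, here is a hypothetical sketch of a stratified comparison, assuming per-mile records with context attributes existed (publicly they don't). All column names and numbers are invented for illustration.

```python
# Hypothetical sketch: compare AP vs. manual crash rates within each context
# stratum (road type, time of day) instead of pooling all miles together.
# Every figure below is made up.
import pandas as pd

trips = pd.DataFrame({
    "road_type":   ["highway"] * 2 + ["city"] * 2,
    "time_of_day": ["day", "day", "night", "night"],
    "ap_engaged":  [True, False, True, False],
    "miles":       [1.2e6, 3.4e6, 0.2e6, 1.1e6],
    "crashes":     [2, 12, 1, 9],
})

agg = trips.groupby(["road_type", "time_of_day", "ap_engaged"]).sum(numeric_only=True)
agg["crashes_per_M_miles"] = 1e6 * agg["crashes"] / agg["miles"]

# Side-by-side AP vs. manual rate within each like-for-like stratum
print(agg["crashes_per_M_miles"].unstack("ap_engaged"))
```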
The problem with this is that we'll never know - since we'll never have perfect data.
There’s no reason to desire perfect data but we clearly need a lot of attributes attached to the data that Tesla has (we don’t even have that data I would point out).
In vaccine trials they are careful to try to get rid of biases!
It’s also typical during actual approvals to have the raw data made available.

There’s absolutely no reason the data needs to be perfect! Can be quite a lot of uncertainty.

I’m not arguing we don’t have perfect data - I’m arguing we don’t have any data.

It’s not like a huge effort needs to be expended to produce this. The data just need to be published and then people can work with it. Tesla’s data is likely the highest quality available anywhere involving details of automotive accidents. The problem will be with the data from other vehicles that one would need to compare to!

If I were forced to guess I would say Autosteer likely improves safety on average. But that is a pure guess based on my experience with FSD v11 (look, back on topic 😂 ).
 
In vaccine trials they are careful to try to get rid of biases!
But we can always point out that they haven't equalized on a zillion things.

I’m not arguing we don’t have perfect data - I’m arguing we don’t have any data.
Actually they have given some data - apparently quite a lot - to NHTSA. NHTSA also publishes data that is given to it on a regular basis by all OEMs. So, we do have "some" data. All that Tesla / NHTSA can do is to compare billions of miles driven on AP and without AP. They can - if they have the info - compare based on geo or zip code etc. My guess is the differences will all be within margins of error ...

As to whether Tesla should be more transparent - duh ! (separate topic - Tesla is reluctant to release more data because repeatedly over the last decade media has used any Tesla information to produce Tesla bashing headlines to get clicks).

BTW, here's a story from a long time back. I was talking to the data scientists working in Bing. I was asking how they make sure, in the A/B testing, that there are no biases. They said large samples take care of that; they just make sure it's all random.
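
For what the Bing folks meant, here is a toy numpy check (my own illustration, not their code): a purely random split balances an uncontrolled trait across the two groups once the sample is large, which is exactly the guarantee observational AP vs. non-AP miles lack.

```python
# Toy check of the "large random samples balance things out" claim.
import numpy as np

rng = np.random.default_rng(0)
risk_appetite = rng.normal(size=1_000_000)   # some trait nobody measured
in_group_a = rng.random(1_000_000) < 0.5     # purely random A/B split

# Both group means come out ~0: randomization balanced the trait without
# anyone controlling for it. Self-selected AP miles get no such guarantee.
print(risk_appetite[in_group_a].mean(), risk_appetite[~in_group_a].mean())
```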
 
Not amenable to back-of-the-envelope calculation. Your calculation doesn’t take into account the situations where AP might be used and whether they are relatively lower (or higher) risk for inattention. Periods where AP is used are not a random sample of driving and are subject to a lot of bias of various sorts.

It’s tricky. Even Missy Cummings was criticized (not just by Tesla folks either) for her methodology when she looked at this general topic. It’s very hard.

Please note I am not arguing that Autosteer Highways has a detrimental impact on safety. I’m arguing that we don’t know and we don’t have any data that can help determine that, either. Even if accidents never occurred when AP was used, we wouldn’t know (it just makes it more likely that it is improving safety in that case - but still not definitive).
My calculation explicitly assumed only 10% of Tesla driving is on AP. A swag, for sure, but far below my personal use. Also, I used the low end of the 20% to 50% range of accidents being caused by inattention. These conservative estimates point to WaPo's AP-engaged fatality rate being 1/8 of the national average. Increasing either of the above swags proportionally raises the expected fatalities while leaving the actual observed number the same.
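
For readers who want to retrace that arithmetic, here is the shape of it with round placeholder inputs of my own (the national fatality count and Tesla fleet share below are illustrative assumptions, not the poster's actual figures):

```python
# Illustrative restatement of the back-of-the-envelope logic; every input is
# a round placeholder, not the poster's actual figure.
us_deaths_per_year   = 40_000   # rough US annual traffic fatalities (assumed)
tesla_fleet_share    = 0.01     # assumed share of US vehicles that are Teslas
ap_share_of_driving  = 0.10     # poster's swag: 10% of Tesla miles on AP
inattention_fraction = 0.20     # low end of the 20-50% inattention range

expected_ap_inattention_deaths_per_year = (
    us_deaths_per_year * tesla_fleet_share *
    ap_share_of_driving * inattention_fraction)      # = 8 with these placeholders

observed_per_year = 8 / 8        # WaPo: 8 deaths over roughly 8 years
print(observed_per_year / expected_ap_inattention_deaths_per_year)  # ~1/8
```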

The unstated assumption behind NHTSA's recall is that autopilot's inadequate nagging increases the likelihood that a driver will not pay attention. I think maybe that is true, and I'm OK with counting strikes in AP and more visible nags. But Tesla's data and my guesstimation both suggest that having autopilot engaged reduces fatalities far, far more, probably because drivers do get distracted, and AP does not. But only if it is engaged, of course, an option which this "safety" recall is tragically supposed to inhibit.

One other point. While the recall appears to have minimal impact on FSD, there are many circumstances where FSD handles driving much better than standard AP does. So porting some of FSD's capability into standard Autopilot would help reduce accidents. I'm thinking about stopping for signs and lights as examples where AP could stop even when the driver is ignoring his situation. I believe some of the WaPo accidents involved running stop signs. So, using some FSD tech may well be part of the eventual solution.
 
My calculation explicitly assumed only 10% of Tesla driving is on AP. A swag, for sure, but far below my personal use. Also, I used the low end of the 20% to 50% range of accidents being caused by inattention. These conservative estimates point to WaPo's AP-engaged fatality rate being 1/8 of the national average. Increasing either of the above swags proportionally raises the expected fatalities while leaving the actual observed number the same.

The unstated assumption behind NHTSA's recall is that autopilot's inadequate nagging increases the likelihood that a driver will not pay attention. I think maybe that is true, and I'm OK with counting strikes in AP and more visible nags. But Tesla's data and my guesstimation both suggest that having autopilot engaged reduces fatalities far, far more, probably because drivers do get distracted, and AP does not. But only if it is engaged, of course, an option which this "safety" recall is tragically supposed to inhibit.

One other point. While the recall appears to have minimal impact on FSD, there are many circumstances where FSD handles driving much better than standard AP does. So porting some of FSD's capability into standard Autopilot would help reduce accidents. I'm thinking about stopping for signs and lights as examples where AP could stop even when the driver is ignoring his situation. I believe some of the WaPo accidents involved running stop signs. So, using some FSD tech may well be part of the eventual solution.
This may well be a great example of disparate expectations of technology vs humans. We're ok with imperfect humans making mistakes that kill people but we're not ok with imperfect technology killing people even if it's better than humans. Another flaw in many people's reasoning is that they're far more accepting of a bad outcome resulting from a lack of action than they are if it results from action. Vaccines are a classic example of this. People see a vaccine reaction that occurs 1-2x in a million doses and use that as an excuse not to prevent a disease that has a 1 in 1,000 mortality rate.
 
My calculation explicitly assumed only 10% of Tesla driving is on AP
Sure, but what if people mostly engage Autosteer in the 10% of driving where they feel it is most dangerous, crowded, and high risk, or when they are feeling sleepy? Then your estimates using the data provided would likely far overestimate the risk of using Autosteer. Maybe it is 100x safer than regular driving?
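
A toy simulation of that confounding, with made-up rates, shows how it can flip the raw comparison: here AP genuinely halves risk mile-for-mile, yet its raw crash rate comes out higher because it is engaged mostly on the hard miles.

```python
# Toy simulation of the confounding described above; all rates are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000                                       # miles, one row per mile
high_risk = rng.random(n) < 0.10                    # 10% of miles are "hard" miles
ap_on = rng.random(n) < np.where(high_risk, 0.8, 0.2)   # AP mostly on hard miles

base_rate = np.where(high_risk, 500e-6, 20e-6)      # cartoon crash risk per mile
true_ap_effect = 0.5                                # AP halves risk in this toy world
crash = rng.random(n) < base_rate * np.where(ap_on, true_ap_effect, 1.0)

print("raw AP crash rate:    ", crash[ap_on].mean())
print("raw manual crash rate:", crash[~ap_on].mean())
# The raw comparison makes AP look roughly 2-3x worse than manual, even though
# mile for mile in the same conditions it halves the risk here.
```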

Anyway no need to discuss this further, I think you understand the point here.

My guess is the differences will all be within margins of error ...
I mean, yes. Quite possible. But it is a guess. Which is my point. We really don’t need to be guessing so much, or looking at the raw bottom line comparisons. Can be much more data driven here. Otherwise we’re just guessing, and ignoring a trove of high quality Tesla telemetry. And that seems like a shame.
 
I'm thinking about stopping for signs and lights as examples where AP could stop even when the driver is ignoring his situation. I believe some of the WaPo accidents involved running stop signs. So, using some FSD tech may well be part of the eventual solution.
Yeah, that is a slippery slope of how much of FSD you give away for free. Everything you give away reduces the value of actually buying FSD.

I guess you could make it a very unpleasant implementation, in that it doesn't intervene until the last second and locks up the brakes to stop, such that people wouldn't want to rely on it. But that introduces safety risks for getting rear ended.

It will be interesting to see what their implementation is for the recall. Do they just make it require hands on the wheel more when getting near traffic controls, or does it start actively notifying you that you are approaching a stop sign or traffic light? (That would make things safer, and demo that FSD knows where it would need to take action and might encourage more people to buy/subscribe to it.)
 
Yeah, that is a slippery slope of how much of FSD you give away for free. Everything you give away reduces the value of actually buying FSD.

I guess you could make it a very unpleasant implementation, in that it doesn't intervene until the last second and locks up the brakes to stop, such that people wouldn't want to rely on it. But that introduces safety risks for getting rear ended.

It will be interesting to see what their implementation is for the recall. Do they just make it require hands on the wheel more when getting near traffic controls, or does it start actively notifying you that you are approaching a stop sign or traffic light? (That would make things safer, and demo that FSD knows where it would need to take action and might encourage more people to buy/subscribe to it.)
We had FSD beta before the testing program started. At that point FSD provided stopping at signs and lights, but not yet turns and such on city streets. Enhanced AP provided auto lane changes, parking and summon, if memory serves. Using autopilot off of freeways limited the speed to 5 over the limit, I think. When it stopped at a light or stop sign, we had to tap the accelerator to get going. I think it just continued straight, so turns had to be manual, but I'm not sure. This stopping thing was not particularly useful, but it was safer than the alternative of not stopping, episodes of which made it into the WaPo story and the NHTSA files.

So, if stopping at stops was borrowed from FSD and added to enhanced autopilot or even basic autopilot, it would prevent those awful "blew through the stop" autopilot fatalities which got WaPo, Buttigieg, and NHTSA in such a tizzy. Of course FSD is way overpriced till Robotaxi becomes real and legal. But refiguring the pricing and features should be easy and satisfy the "safety" regulators and decrease accidents and fatalities. Just my thought here.
 
We had FSD beta before the testing program started. At that point FSD provided stopping at signs and lights, but not yet turns and such on city streets. Enhanced AP provided auto lane changes, parking and summon, if memory serves. Using autopilot off of freeways limited the speed to 5 over the limit, I think. When it stopped at a light or stop sign, we had to tap the accelerator to get going. I think it just continued straight, so turns had to be manual, but I'm not sure. This stopping thing was not particularly useful, but it was safer than the alternative of not stopping, episodes of which made it into the WaPo story and the NHTSA files.

So, if stopping at stops was borrowed from FSD and added to enhanced autopilot or even basic autopilot, it would prevent those awful "blew through the stop" autopilot fatalities which got WaPo, Buttigieg, and NHTSA in such a tizzy. Of course FSD is way overpriced till Robotaxi becomes real and legal. But refiguring the pricing and features should be easy and satisfy the "safety" regulators and decrease accidents and fatalities. Just my thought here.
That's pretty much what I remember, too. TACC had a beta 'stop at traffic signal' feature that would require you to click the stalk or tap the accelerator regardless of the light color.

I keep saying this but I still find the logical inconsistency baffling - 'regular' cruise control has been around forever and adaptive cruise has been around for at least 10 years and neither of these would stop for any sort of signal (or at all in the case of 'dumb' cruise,) yet there was no huge outcry that these were dangerous. Now, when there's a system that is better than those but still requires driver attention people are complaining that it's killing people. I don't get it.
 
That's pretty much what I remember, too. TACC had a beta 'stop at traffic signal' feature that would require you to click the stalk or tap the accelerator regardless of the light color.

I keep saying this but I still find the logical inconsistency baffling - 'regular' cruise control has been around forever and adaptive cruise has been around for at least 10 years and neither of these would stop for any sort of signal (or at all in the case of 'dumb' cruise,) yet there was no huge outcry that these were dangerous. Now, when there's a system that is better than those but still requires driver attention people are complaining that it's killing people. I don't get it.
Yeah, I remembered the same thing about the "stop at signal". After using FSDb for two years now, I can see that some behaviors are trained well and some badly. On routes that work all the time, I tend to be less attentive/alert toward what it is doing. On routes that have failed before, I tend to be super alert. There are cases where FSD disengaged but TACC remained active, and sometimes I didn't realize it at first until I noticed it was still going but the visualization was different; that's the case I think can be dangerous, because it will not stop at the lights.
 
...There are cases where FSD disengaged but TACC remained active, and sometimes I didn't realize it at first until I noticed it was still going but the visualization was different; that's the case I think can be dangerous, because it will not stop at the lights.
If you have FSD with Traffic Light and Stop Sign Control, then TACC will stop at signals and stop signs. It just won't steer for you.

If you don't have FSD, then neither TACC nor Autosteer are meant to respond to stop signals in normal driving - however I believe that it will throw a panic alert and apply emergency braking if you have a lapse and you blow through the stop. In the latter emergency case, you can force it to keep going if you apply the accelerator pedal.

At least, the above is how I think it works. I haven't driven without active FSD features for a long time now.
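
Restating that understanding as a tiny, purely hypothetical decision function (this is just the behavior described above in code form, not Tesla firmware logic; the function and its inputs are my own invention):

```python
# Purely hypothetical restatement of the longitudinal behavior described above.
def longitudinal_response(has_stop_control: bool, stop_ahead: bool,
                          driver_overriding: bool) -> str:
    if not stop_ahead:
        return "maintain set speed / follow distance"
    if has_stop_control:          # FSD "Traffic Light and Stop Sign Control" owned
        return "plan a normal stop at the signal or sign (no steering change)"
    if driver_overriding:         # accelerator pressed: driver takes responsibility
        return "continue through"
    return "no planned stop; panic alert + emergency braking only if a collision looks imminent"

print(longitudinal_response(has_stop_control=True, stop_ahead=True, driver_overriding=False))
```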
 
This may well be a great example of disparate expectations of technology vs humans.
I love conspiracy theories, but I don't believe them. It is the sociology of them that intrigues me.

It looks to me like there is a cottage industry of bashing Tesla/Musk. It may be a combination of short sellers, frightened competitors, and misinformation clickbait mongers (I'd include WaPo in that category, sadly.) Google actually sends a slice of advertiser $$$ to sites which generate clicks, and "Tesla crashed, caught fire" sure generates a lot of clicks, even if details are similar to James Dean, long before Tesla - one student driver, the other speeding. (Populist politicians also have long taken advantage of the power of what we now call clickbait, but Google and social media opened the game to anyone willing to tell a lie in exchange for $$.)

In this environment, the irrationality you describe may be the flavor of bait which rules the attraction.

The fact is that every time we get in a car we take a risk. How soothing it seems to be able to blame a type of car we don't own for the danger... Is this what psychologists call displacement?
 
Looks like 2023.44.30 holiday update with recall includes FSD Beta 11.4.9. Not sure if that means 2023.27.x / 11.4.5+ will be joining those on main releases that have been on 11.4.4?


Compared to 11.4.8 notes, there's an extra entry:
  • Introduced Automatic Emergency Braking on general obstacles detected by Occupancy Network.
Although that entry was previously added in 11.4.5, which main releases haven't gotten yet.
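
As a guess at what that release-note entry means in practice, here is a hypothetical sketch: trigger AEB for any sufficiently occupied cell along the predicted path, whether or not an object classifier recognized what is there. The function, grid layout, and threshold are my own assumptions, not Tesla's code.

```python
# Hypothetical sketch of "AEB on general obstacles detected by Occupancy Network":
# brake for any likely-occupied cell the ego path crosses, even unclassified ones.
import numpy as np

def should_emergency_brake(occupancy: np.ndarray,          # (X, Y) occupancy probabilities
                           path_cells: list[tuple[int, int]],
                           threshold: float = 0.7) -> bool:
    """True if any cell on the predicted ego path is likely occupied."""
    return any(occupancy[x, y] >= threshold for x, y in path_cells)

occ = np.zeros((100, 100))
occ[42, 50] = 0.9                                   # unclassified obstacle ahead
path = [(x, 50) for x in range(30, 60)]             # straight-ahead path cells
print(should_emergency_brake(occ, path))            # True
```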

The notes there include the recall/remedy under "Autopilot Suspension" mentioning "one week" as well as "Over-the-Air (OTA) Recall" for #23V-838.