He’s an optimist!
If you assume that necessary disengagements drop by 50% every couple weeks then unsupervised FSD will be ready this year.
Speaking of this year: I believe this is the FIRST year since about 2019 that Elon hasn't made the "...by the end of the year" proclamation. Wonder if there is any significance in that? 🤔 🤪
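For a sense of scale, here is a toy Python sketch of that 50%-every-couple-weeks extrapolation. Every number in it is assumed for illustration (the starting miles-per-disengagement figure especially is made up); the point is just how much work the assumption is doing.

```python
# Toy extrapolation of "disengagements drop by 50% every couple weeks".
# The starting value is an assumption for illustration, not real FSD data.

miles_per_disengagement = 30     # assumed starting point
halvings_per_year = 52 // 2      # "every couple weeks" ~ 26 halvings of the disengagement
                                 # rate, i.e. 26 doublings of miles between disengagements

end_of_year = miles_per_disengagement * 2 ** halvings_per_year
print(f"Implied miles per necessary disengagement after a year: {end_of_year:,}")
# ~2 billion miles between necessary disengagements -- the optimism lives entirely
# in the assumed improvement rate, not in anything observed so far.
```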
 
I don't mind being a beta tester of things, but when I frame the situation to myself as
"You tried to kill me last time with the old software. Will you kill me this time, or did you learn your lesson?"
and then am just happy when it doesn't kill me...
Could you imagine if everyone used FSD when it's pushed out for that free month?
Every non-Tesla driver will think all Tesla drivers are either an A$$ or drive like their grandma, and there will be even more Tesla hate and keyed cars.

The problem with saying you have to know its limitations is that you first have to find its limitations, which is the same as saying you have to almost have an accident.
 
1. Even if it's actually only half as good as the average human (so twice as dangerous), it will still crash only once every 125,000 miles...
Why choose an arbitrary MTBF of 125,000 miles for your analysis? I would think (and I don't make these decisions) that it would never be touted as safer until it can offer a much higher MTBF. So it seems this is kind of moot?

2. "FSD" as a supervised driving aid has a paradox that to me is a deal killer. The assumption is that it will not be perfect, but it will help more than it hurts, and we just need to be prepared to take over if it makes a mistake.
How do we, as the monitoring driver, determine in that split second whether slamming on the brakes is either
1) a legit action to prevent a crash from a threat we didn't see (exactly the value we are supposed to get from it)
or

2) a mistake FSD is making that I need to override so I don't cause a crash? I have to have better-than-FSD situational awareness at all times, which negates any value FSD could logically provide.
But the MTBF number in any given context (L5, L2, whatever) is still the number, and whatever may or may not have been going on in the human's consciousness at the time of an action (that may or may not end up part of a statistic) seems kinda irrelevant...

I submit that FSD needs to be better than the best human driver ever to even be a good ADAS system.
I think that's already been empirically disproven, no? At least anecdotally, I feel safer already when I'm partnering with FSD, although it isn't necessarily as much fun a lot of the time.
 
The SAE was more or less pressured by those makers into the J3400 standard, making CCS dead in North America.
My understanding is that the Tesla proprietary standard was dropped by Tesla (though still supported on the Supercharger network), and CCS is the protocol going forward (the nice compact Tesla connector was preserved and will become commonplace in future, which is great).

This is sad because it means my vehicle needs a hardware update to work with a NACS-only charger - unlike the vehicles with CCS1 charge ports, which just need a passthrough adapter to work with NACS!

The Tesla standard is dead, CCS lives. Tesla has had to retrofit all their stations to support CCS (while still being backwards compatible with older Teslas)!!! 😢 Presumably other NACS providers will NOT be so kind to older Teslas. 😢 So much sadness.

CCS is dead, long live CCS?

Anyway this imminent situation of FSD taking over the world as a “fait accompli” is hopefully not analogous to what happened to the Tesla proprietary standard.
 
Interesting video from Chuck on V12.3.1. In a scenario where pulling up to the V11-projected creep line would actually cause more occlusion than it would solve, V12 seemed to pick the point to which it creeped based on the context of the scene. He's speculating that this kind of occlusion-aware navigation is emergent behavior from the network.

 
1. Even if it's actually only half as good as the average human (so twice as dangerous), it will still crash only once every 125,000 miles. At that point, to the end user, it will be indistinguishable from a perfect driver. Threads like this will cease to exist. Most of us could wear out 2 different Teslas and never crash, and the only evidence that it was still 2x as dangerous would be buried in boring safety statistics published by alphabet-soup regulators. Day to day it will feel exactly the same as perfection, yet it will be killing twice as many people as AVERAGE drivers.
Accident != fatality. You can maintain occurrence while reducing severity.
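For concreteness, a minimal sketch of the arithmetic behind the quoted point 1. The 250,000-mile average-human baseline is only what the quote implies ("half as good" giving 125,000 miles), and the lifetime mileage is an assumption, not data.

```python
# Back-of-the-envelope version of the quoted point 1. The 250,000-mile human
# baseline is the figure implied by the quote ("half as good" -> 125,000 miles);
# the lifetime mileage is an assumption for illustration.

human_miles_per_crash = 250_000                    # implied average-human rate
fsd_miles_per_crash = human_miles_per_crash / 2    # "half as good" -> 125,000 miles
lifetime_miles = 200_000                           # assumed: roughly two worn-out Teslas

expected_crashes_human = lifetime_miles / human_miles_per_crash   # 0.8
expected_crashes_fsd = lifetime_miles / fsd_miles_per_crash       # 1.6

print(f"Expected crashes over {lifetime_miles:,} miles:")
print(f"  average human:   {expected_crashes_human:.1f}")
print(f"  2x-worse system: {expected_crashes_fsd:.1f}")
# To any one owner both numbers look like "maybe a crash or two, ever"; the 2x
# gap only shows up in fleet-level statistics.
```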
 
Interesting video from Chuck on V12.3.1. In a scenario where pulling up to the V11-projected creep line would actually cause more occlusion than it would solve, V12 seemed to pick the point to which it creeped based on the context of the scene. He's speculating that this kind of occlusion-aware navigation is emergent behavior from the network.
I suspect that the Creep Line was more relevant to V11. I don't think it is as much of a "real thing" in V12 as it was in V11; it's now more just leftover eye candy. V11 seemed to make its decisions based on pure logic, and V12 is more about what a human would do.
 
Accident != fatality. You can maintain occurrence while reducing severity.
And no end user could tell the difference.

Presumably accident rates improve at some rate and fatalities improve at some other, similar rate. Replace miles/accident with miles per fatality in my point 1 above and the argument is identical. We can only know if it's improved by looking at compiled statistics, not by how the car drove today, or this month, or this year.
 
At least anecdotally, I feel safer already when I'm partnering with FSD, although it isn't necessarily as much fun a lot of the time.
This is exactly my point. You will feel anecdotally safer, when in fact you are not safer. You can feel rainbows farting out your ears and it has the same relationship to how safe FSD is. We lack the ability to judge the safety without comparing billions of miles of safety data.
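A rough sketch of why "billions of miles" is the right order of magnitude, assuming crashes arrive as a Poisson process and reusing the same assumed 250,000-mile baseline as above; none of these figures are measured FSD numbers.

```python
# Rough order-of-magnitude check on "billions of miles of safety data",
# treating crashes as a Poisson process. Baseline rates are assumptions for
# illustration, not measured FSD numbers.

human_miles_per_crash = 250_000   # assumed crash-level baseline (same figure as above)
relative_precision = 0.10         # want the rate pinned down to within +/-10%
z = 1.96                          # ~95% confidence

# For a Poisson count N, the relative standard error is 1/sqrt(N), so hitting
# +/-10% at 95% confidence needs roughly (z / 0.10)**2 observed events.
needed_crashes = (z / relative_precision) ** 2          # ~384 events
needed_miles = needed_crashes * human_miles_per_crash   # ~96 million miles

print(f"Events needed: {needed_crashes:.0f}")
print(f"Miles needed:  {needed_miles / 1e6:.0f} million")
# ~100 million miles just to measure the crash rate of one system to +/-10%;
# rarer outcomes like fatalities (on the order of one per 100 million miles in
# US data) push the same calculation into the tens of billions of miles.
```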
 
And no end user could tell the difference.

Presumably accident rates improve at some rate and fatalities improve at some other, similar rate. Replace miles/accident with miles per fatality in my point 1 above and the argument is identical. We can only know if it's improved by looking at compiled statistics, not by how the car drove today, or this month, or this year.
I disagree. A fatal crash requires a higher level of driving failure than just a crash. As in, the type that results from inattention or a series of poor choices.
It requires completely failing to notice either a stationary or a moving object and hitting it at a high differential speed.

Not speeding alone cuts the risk down, as does not drinking. FSD is good at both of those.
https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/813473
 
Why choose an arbitrary MTBF of 125,000 miles for your analysis? I would think (and I don't make these decisions) that it would never be touted as safer until it can offer a much higher MTBF. So it seems this is kind of moot?
Because if the data is not released by Tesla in a transparent, verifiable manner, we have to take their word for it. I won't take anyone's word for anything. I'll look at peer-reviewed studies by independent 3rd parties, and then check them myself.

Without the data it will feel perfect, but it could still be more dangerous than my 90-year-old mom.
 
I disagree. A fatal crash requires a higher level of driving failure than just a crash. As in, the type that results from inattention or a series of poor choices.
It requires completely failing to notice either a stationary or a moving object and hitting it at a high differential speed.

Not speeding alone cuts the risk down, as does not drinking. FSD is good at both of those.
https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/813473
That's meaningless without data showing FSD is safer. And we don't have it, so cannot judge it.
 
My understanding is that the Tesla proprietary standard was dropped by Tesla (though still supported on the Supercharger network), and CCS is the protocol going forward (the nice compact Tesla connector was preserved and will become commonplace in future, which is great).

This is sad because it means my vehicle needs a hardware update to work with a NACS-only charger - unlike the vehicles with CCS1 charge ports, which just need a passthrough adapter to work with NACS!

The Tesla standard is dead, CCS lives. Tesla has had to retrofit all their stations to support CCS (while still being backwards compatible with older Teslas)!!! 😢
Note they didn't retrofit all stations. V2s remain incompatible, and not all V3s were retrofitted. The V3 retrofit appears to be for plug-and-charge. It's not entirely clear whether V3s weren't already CCS-compatible in some way (for example, the Magic Dock uses the app for third-party authorization).
Presumably other NACS providers will NOT be so kind to older Teslas. 😢 So much sadness.

CCS is dead, long live CCS?

Anyway this imminent situation of FSD taking over the world as a “fait accompli” is hopefully not analogous to what happened to the Tesla proprietary standard.
That's a pretty negative spin. The alternative was the only third-party NACS support being 50 kW CHAdeMO-based adapters. That's a far worse situation than using the CCS protocol, which they had to do anyway for cars to use CCS adapters. Plus, Tesla was already using the CCS protocol in Europe. The vast majority of Teslas on the road today support CCS, and this saves them from having to get a CCS adapter. Plus, to get manufacturers to switch, there has to be some support for existing cars.
 
2) a mistake FSD is making that I need to override so I don't cause a crash? I have to have better-than-FSD situational awareness at all times, which negates any value FSD could logically provide.
You just succinctly put into words why I don't use FSDb on anything but divided, limited-access highways. Since where I live there are only two of those, only one of which is part of my 'at home' driving, I seldom use FSDb.
 
This is exactly my point. You will feel anecdotally safer, when in fact you are not safer. You can feel rainbows farting out your ears and it has the same relationship to how safe FSD is. We lack the ability to judge the safety without comparing billions of miles of safety data.
But I was under the impression that the data in fact exists, that there are many fewer collisions or incidents or however they measure it when it's supervised FSD vs. human only. Hasn't Tesla been saying that already for years?
[EDIT: Maybe it was w/r/t TACC or AP or something pre-"FSD" but still.]
 
That's meaningless without data showing FSD is safer. And we don't have it, so cannot judge it.
So then what is the "fatal flaw"?
There are 2 really big issues with FSD approaching human levels of driving. I think they are both fatal flaws.

Day to day it will feel exactly the same as perfection, yet it will be killing twice as many people as AVERAGE drivers.
You are assuming it improves all accident rates equally when the data shows disproportionate links to certain behaviors and certain drivers.
 
Note they didn't retrofit all stations.
I know.
That's a pretty negative spin.
Yeah. To be clear, as long as my car works at 3rd-party NACS DC chargers with the Tesla connector without needing a charge port update, I am not complaining, and I would agree Tesla won by adopting the CCS protocol. Otherwise, I would have to say my car has been left in the past because it uses the obsolete Tesla-proprietary standard. It's not clear to me what will happen (it would make no sense not to support older Teslas, since Tesla does). I would hate to plan a trip using a remote NACS charger as the only option and not be able to use it!


But I was under the impression that the data in fact exists, that there are many fewer collisions or incidents or however they measure it when it's supervised FSD vs. human only. Hasn't Tesla been saying that already for years?
No, there is zero data. Nothing has been published. It’s very frustrating! (Tesla does continue on occasion to publish statistics with obvious massive selection bias.)
 
Do people generally leave a message just because it asked, instead of ignoring it and just parking right away? I figured no message makes it lower priority or implies nothing went wrong, so spending time to leave a message probably means even more time spent processing it. There are cases where 11.x and 12.x do something wrong close to the destination, e.g., turning too early, so it's useful to be able to provide that feedback instead of the old behavior of assuming the trip was completed correctly.
I leave a message when I can. Sometimes I'm busy and don't get to it before the prompt disappears. I'm expecting there is auto-labeling happening, so I'm not making extra work for a human.
 