
FSD is a fraud

I may have missed something, but my MY has had two recalls since I purchased it, both from NHTSA investigations. One was for seat belt warning chimes and the other for rolling stops (in FSD Beta).
NHTSA is only allowed to "regulate" AFTER something happens. We treat cars in the USA very differently than other safety critical products.

For aircraft and medical, the regulator is involved during the design. They audit the product, process, and adherence to laws. If the product doesn't align with existing regulations, work has to be done to figure out how it will be proven to be safe. All of this must happen before it can be offered for sale. The regulator then continues to work with the product throughout the lifecycle.

NHTSA is not allowed to audit products before release. They don't even have to be told they exist. They wait until the public is put at sufficient risk before they can act. Then they must negotiate with the manufacturer to determine a resolution process, which at times gets very drawn out and contentious. During that time, the product remains on the market.

This is how Tesla can ship "full self driving" which supposedly follows road laws while also having specifically coded it to run stop signs. They can try whatever they want and see if NHTSA eventually stops it, and there is no real cost to acting poorly. Meanwhile, in aviation and medical, you can't start earning revenue until the regulator approves the product (although Tesla hacked even that with their pre-sales of FSD).

It's very odd that the US rules came up this way, but it is what it is. Still, it's helpful for people to understand this difference. Maybe you believe this is sufficient regulation for vehicles and effective at keeping the public safe, and that it's OK to wait for cars to fail in consumers' hands before fixing them. I'd be very surprised if people felt the same way about medicine and airplanes. It might be interesting to think about why that is, and whether the modern world of self driving and OTA updates to a million cars overnight is making the risk profile more like medical than legacy automotive.
 
Or was the inherent caution exhibited by someone facing great financial, legal, and physical risk enough for them to intervene before anyone was hurt?
You said it had never happened to you. Just pointing out that it does happen.
Plus, are you arguing that a system which fails but is successfully overridden because the user is at great risk is just as safe as one that didn't fail at all because it was more thoroughly tested and developed before release?
 
...There have been recent discussions on these forums about the number of accidents on Tesla AP/NoA/FSD in relation to the number of cars on the road, miles driven, etc., on the most recent June 2022 NHTSA report. One interesting thing found in that report is that Tesla is one of the few companies that gives very complete data, due to the telemetry data being kept on the vehicle and sent to Tesla remotely. The report showed that there were other companies that relied almost solely on user reports, or the telemetry wasn't available after the incident.

To a casual reader, this sort of implies that Tesla provided the most complete data, which is certainly not true. If you include the robotaxi companies (not the auto manufacturers, which probably are less complete), they have far more complete reporting than Tesla, as far as I can tell. That is one of the reasons the numbers you calculated end up with such a high percentage of Waymo/Cruise vehicles involved in accidents: given the way it was calculated, I'd expect that number to pass 100% eventually (and quite quickly) for the robotaxi fleets, while climbing past 100% considerably more slowly for Tesla, for obvious reasons.
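To see why that percentage behaves this way, here is a rough arithmetic sketch (every fleet size and report count below is made up for illustration, not taken from the NHTSA data): dividing cumulative incident reports by fleet size counts a vehicle every time it appears in a report, so a small fleet with near-complete reporting crosses 100% quickly, while a huge fleet with less complete reporting climbs far more slowly.

```python
# Hypothetical illustration only: cumulative reports divided by fleet size
# can exceed 100%, since the same vehicle may appear in many reports.

def pct_of_fleet(reports_per_month: float, fleet_size: int, months: int) -> float:
    """Cumulative incident reports expressed as a percentage of fleet size."""
    return 100.0 * reports_per_month * months / fleet_size

# Small robotaxi-style fleet with near-complete telemetry reporting (made-up numbers)
print(pct_of_fleet(reports_per_month=30, fleet_size=700, months=24))

# Huge consumer fleet with far less complete reporting (made-up numbers)
print(pct_of_fleet(reports_per_month=25, fleet_size=800_000, months=24))
```

With these invented inputs, the small fleet passes 100% of its own size within two years while the large fleet stays at a tiny fraction of a percent, which is roughly the asymmetry being described.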

There were two different reports - one for ADAS and one for ADS. The report I was referencing was the ADAS report, in which Tesla had more data than most other companies. ADS on the other hand, you are totally correct.


Tesla has by far the most redacted information, so I wouldn't call it very complete. Notably the "Narrative" is redacted for every ADAS report, so we lose much of the context for what happened. Not very open and forthcoming.

Companies who redacted the narrative:
APTIV
Ford (most reports)
Lucid
Subaru
Tesla

The companies not redacting the narrative:
BMW
Ford (some reports)
General Motors
Honda
Hyundai
Porsche
Toyota
Volkswagen
 
NHTSA is only allowed to "regulate" AFTER something happens. We treat cars in the USA very differently than other safety critical products.
This is a very interesting point. I've never thought about why the NHTSA isn't included during and before the launch of each vehicle. That would make sense!

However, and I may be wrong about this, don't medical companies use animal and human trials before a drug is approved (based on the results of those trials)? Also, didn't most aircraft in the past (before simulators) require human test pilots to work out the kinks? If this still happens today in the medical field and in aviation, doesn't that mean FSD Beta is doing what's "normal"?

They started with a very select group of internal employees on the first Beta versions, then expanded to a few select customers (under NDA), to a few thousand paying customers (based on safety score), then to more people (based on safety score). This sounds like a very purposeful and well thought out plan. However, they are not past "human trials" yet because they haven't released this to just anyone that has a Tesla and pays for the software (why it's still a Beta).

If no incidents, why not continue to expand? If it's proving to be helpful to add more data/drivers, why not continue to expand? NHTSA has all crash data related to Teslas, or what's required by law to collect/share anyway. They can "open an investigation" like they do at any time to try and stop the use of the software.

I'll stop talking now :)
 
If no incidents, why not continue to expand? If it's proving to be helpful to add more data/drivers, why not continue to expand? NHTSA has all crash data related to Teslas, or what's required by law to collect/share anyway. They can "open an investigation" like they do at any time to try and stop the use of the software.
Here’s an idea. Maybe the NHTSA can just step in and stop Tesla in one big hammer down.

FSD gone…. Just like that.
 
NHTSA is only allowed to "regulate" AFTER something happens. We treat cars in the USA very differently than other safety critical products.
It's acceptable because government agencies simply don't have enough personnel anyway (it's the same with the FAA: they do not have dedicated personnel to evaluate everything, which is why Boeing was able to self-certify).

The other problem is that the government is generally clueless about automated driving of all sorts. Trying to introduce all sorts of regulations while the players are trying to innovate would just put up roadblocks that make development much slower (while not necessarily helping safety). You can see this in a lot of inane laws in Europe that banned certain features even in EAP.

Currently Tesla is testing FSD as an L2 feature. Testing of L2 features does not need any notification of the public nor any special training, and it never has. Tesla is doing enough driver monitoring to ensure it qualifies as L2. Heck, even L3+ features don't need notification of the public, even in the stricter jurisdictions (they just need a license for testing, and the public has zero warning or say in that).
 
If AP is so safe, the very last thing you should do to an "unsafe driver" is disable autopilot.
This is a very good point, and Tesla's choices here can be debated. Arguably, if a driver is not paying attention, that is when the system should REALLY increase its vigilance and start doing its job. Whether it should do that for FSDb is another question of course (probably not! Just fail back to baseline AP). But for your AP-specific point it's a good one. I'm sure they have thought about it, but some of the decisions seem mysterious.

Without FSDb I’ve been in lane-keep-assist jail (not AP jail) a few times because to safely pass you need to get close to the center line to see around vehicles. It’s ridiculous that I occasionally lose lane-keep assist in that scenario (I would have to signal every time I peeked out to work around this). They’re trying to prevent abuse, of course, but I think there are better approaches to improve safety. It’s bad when non-abusive behavior (when I am clearly actively providing steering input and not using AP or TACC!) results in safety feature loss which requires going to park to solve. Hopefully they can come up with a better way when their features and monitoring improve.
 
This is a very good point, and Tesla's choices here can be debated. Arguably, if a driver is not paying attention, that is when the system should REALLY increase its vigilance and start doing its job. Whether it should do that for FSDb is another question of course (probably not! Just fail back to baseline AP). But for your AP-specific point it's a good one. I'm sure they have thought about it, but some of the decisions seem mysterious.
Seems reasonable: if you exceed speed restrictions, it should alarm with the red wheel and give you a few seconds to let off the throttle and return to speed. If you don't, then it disengages, as you are obviously not paying attention. Though how does it handle someone having a medical emergency, slumped down with pressure on the accelerator?
 
Tesla has by far the most redacted information, so I wouldn't call it very complete. Notably the "Narrative" is redacted for every ADAS report, so we lose much of the context for what happened. Not very open and forthcoming.

I found the report to be unusable for both Tesla and Honda.

Tesla had vehicles uploading the accident data automatically for most of the accidents, but we had zero insight into what the customers actually experienced.

Honda only had user reports, and I'm highly suspicious of the truthfulness of a lot of them.
 
I more or less have been here for the last 2-3 years. You see some progress, but the more I have it the more I can't actually imagine it becoming a legit product (Full Self Driving). More like significant assisted driving where you have to still put a lot of input in.

I still tend to think the current space is just about the worst situation to be in. The car does enough to act like it's driving but still requires you to engage and provide input, albeit often secondarily or when prompted to do so, which I honestly think is the worst. As I've said before to my car: if you can't make the decision yourself, then you shouldn't be allowed to make any decisions in the first place.
I think the current space is far from the worst. The worst will be when it drives without error for a month and then catches you off guard. With a once-a-month failure rate, there are going to be many people who go an entire year without an intervention; how attentive will they be?
Progress does seem painfully slow. I thought Tesla would be there by now.
 
However, there is far from proof that Tesla is gathering any useful data from these drives, particularly when FSD is engaged. Show me a report from Tesla explaining how much faster they are getting to actual self driving because people are using FSD on the road right now? Show me an analysis that FSD is developing much faster now than it did before FSD went "public." And the idea Beta testers work "with" Tesla? LOL. Have you ever tried to discuss any kind of bug you found with Tesla? It's a one way black hole.
If, as you claim, they aren't using any of the data, then why are they bothering with the beta at all? By your logic, they have risked the wrath of regulators and endangered the non-consenting public... to do nothing useful. Overall, you seem to be angry at Tesla, but I'm not clear why, if in fact the entirety of the FSD beta program is as fabricated as you seem to claim (though without any evidence, which is a concern).
 
However, and I may be wrong about this, don't medical companies use test animals and human trials before it's approved (based on the results of animal and human trials)? Also, didn't most aircraft in the past (not since simulators) require test pilots (humans) in order to work out the kinks? If this still happens today in the medical field and in avionics, doesn't that mean FSD beta is doing what's "normal"?
Yes, of course medical and aviation test before release. However, those are internal, limited tests that do not expose the public. You test fly aircraft only with trained test pilots, and you do it in unpopulated areas. You test medicine on informed test subjects, and you monitor them closely. Your test plans are well developed and engineered long before you run the test.

You don't get to sell the product to the public until these tests prove that the product is done, and acceptably safe for ANYONE that wants to buy it. The test does not continue with your paying customers. In fact, the FAA has a baseline that they assume you go bankrupt right after you sell the product, so it must be safe for long term use even without continued manufacturer support.

Very different than Tesla who is just throwing the product at the public, calling it "beta" and asking "so, how is it working for you?". The very "Beta" moniker tells you all you need to know- the FAA and FDA do not allow you to sell beta products to the public.

Arguably if a driver is not paying attention that is when the system should REALLY increase its vigilance and start doing its job. Whether it should do that for FSDb is another question of course (probably not! -just fail back to baseline AP). But for your AP specific point it’s a good one. I’m sure they have thought about it but some of the decisions seem mysterious.
Like I said, staying on is the most rational decision if you know your system is safer than a human. The fact that they disable it completely for inattention tells me that they know AP and "FSD" are WAY less safe than a human, and thus need extraordinary vigilance. Letting someone use it while not paying attention is actually less safe than not having it at all. But yeah, driving the car with nobody paying attention is right around the corner....

If, as you claim, they aren't using any of the data, then why are they bothering with the beta at all? By your logic, they have risked the wrath of regulators, and endangered the non-consenting public .. to do nothing useful.
Like I said, marketing. A huge part of Tesla's value is based on the idea that they are leaders in autonomy. It's been over 2,000 days now since FSD was introduced. Tesla NEEDS to show progress. They need to stay in investors' eyes as progressing. They need to avoid getting sued for charging thousands of dollars for something they never delivered despite it "coming this year". That makes the other risks worth it.

Overall, you seem to be angry at Tesla, but I'm not clear why, if in fact the entirety of the FSD beta program is as fabricated as you seem to claim
I did not claim all of FSD is fabricated. My anger at Tesla is for pre-selling FSD long before it was a product. Tesla sold me a car as having "all hardware for FSD," yet they want $1,000 to upgrade the hardware in it to do what FSD cars can do. They charged others up to $12,000 with a fuzzy definition of the product and pressure to buy now because it would be done soon and the price would be much higher. They offer no refunds even when they don't deliver. That's the "scam" part the OP originally brought up, not that there is no FSD development being done.

Have you ever been involved in a medical device audit? I suspect you would be somewhat surprised and shocked.
My wife manages a regulatory compliance team at a medical device manufacturer, and I have had a long career involving FAA certification.
 
My anger at Tesla is for pre-selling FSD long before it was a product. Tesla sold me a car as having "all hardware for FSD" yet they want $1,000 to upgrade the hardware in it to do what FSD cars can do.
I'm fairly convinced we will see another wave of sensor suite & compute upgrades in order to see any dramatic FSD improvements.
If the supply chain weren't hell right now, they'd probably be doing it...
 
I'm fairly convinced we will see another wave of sensor suite & compute upgrades in order to see any dramatic FSD improvements.
We already know they are working on HW4. Everyone is excited because they assume it means big progress, and a free upgrade to their car.

Wait until they realize that they aren't owed an upgrade. Since 2019, all Tesla has advertised is city streets autosteer, which FSD Beta meets. Development stops on HW3, just like it has on HW2/2.5, even where the HW is capable. Development moves to HW4 cars, which get better, and someday Tesla starts selling something named differently. Only to HW4 cars. On top of "FSD." Do they sell a HW3 to HW4 upgrade? Who knows....

I bet their "you knew exactly what you were buying into, how could you be so stupid to think HW2 would get to FSD, of course they want to charge for HW upgrades, of course FSD is just L2 FOR NOW" attitude will change then.
 
In which case you are aware that compliance is primarily an audit of processes rather than any oversight of actual functionality.
Not my wife's experience at all in diagnostic and treatment devices, which require significant trials to prove that the devices do not cause harm, and do what they say with sufficient accuracy to be useful.

But even with that, the audit of processes is an audit of things like "Did you do a safety review?" "On what standards did you base that review?" "Does your product break the law?" "Did you inform your testers they were being tested upon?" "You changed your hardware; are you sure the product doesn't shock people now?" "Did you do a Functional Hazard Analysis?" Yeah, it's "just" auditing processes, but those processes are what prove the system is safe to release to the public.

One thing the FAA and FDA absolutely require- User manuals. How is your user supposed to safely use your product without a manual that describes the capability, limitations, user interface, and way to use your product?

Now, show me Tesla's manual for FSD... A product so dangerous that it can only be handed to the safest of drivers, yet those drivers are not given any kind of manual at all, just a disclaimer. But remember:

[attached image]


It's weird that they have a whole manual for the car, and basic AP, but not FSD. I guess that's because manuals are legally required, but they are just stupid regulatory requirements that slow companies down, not the kind of thing you do when safety is first.
 
We already know they are working on HW4. Everyone is excited because they assume it means big progress, and a free upgrade to their car.
Well, the good thing about all this discussion is we will know soonish (now or years away) if this is all correct. Unlike theoretical physics where you may live 100 years and never "know" if your hypothesis was correct :).

I'm just excited for progress, regardless. I do feel for people that were sold something they feel didn't match expectations. That's never a good thing. I'm used to this type of thing as a gamer... can't tell you the last time I believed what a developer said until I get it in my hands (Cyberpunk2077)...haha.
 