Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
FSD is a fraud

If Waymo test drivers were caught doing what some FSD Beta users do they would be fired immediately.
It blows my mind how many FSD Beta testers publish videos of themselves breaking the law on YouTube. If they get in a collision later, those videos can be used as evidence against them in a civil or criminal trial...
The Fifth Amendment does not protect you if you choose to incriminate yourself!
How do they compare to the idiots already on the road in their uninsured, unsafe vehicles?
 
That's funny. First you state that Tesla 'needs' to disclose their trade secrets. Now you say that they don't. Make up your mind.
No, I said that IF Tesla wants us to take seriously that FSD "beta" is about them collecting data, they should prove it. They are welcome to not prove it, and without that data, I am welcome to be skeptical that this is actually a productive testing exercise, not marketing.

BTW, can you actually get FSD in France? If not, is it because your regulators don't allow it yet because Tesla can't prove it's safe enough?

My opinion (perhaps random too) is that autonomous cars will end up significantly safer than human drivers, not because they are super-smart but simply because they won't do all the stupid things humans do that cause most accidents (drinking, texting, falling asleep, etc.). So is that ultimate goal worth some level of risk today?
Yes, we all want autonomous cars to reduce deaths. But the balance there is do we want a single company to decide how much risk to take on the way there with no public auditing? Do we want them to be able to do it without producing any public data on the risk/reward of this public operation?

Tesla has already indicated that FSD is a risk. They require a "safety score" (their name) in order to get "FSD." Hence, we know for a fact that they are putting the public at some increased risk.
What we don't know: How did they evaluate the acceptable risk? How does their tool identify low risk testers? What benefits is Tesla getting from the risk they are taking? What benefits is society getting? Are there alternatives to the risk?

This is exactly the role regulators take all over the USA. The FAA, FDA, EPA, etc., all do this all the time. They audit that what a private company is doing is a reasonable risk/reward trade. Yet when it comes to Tesla and FSD, it seems many people want to allow them to make all decisions internally and opaquely, and believe that public FSD beta testing must be some magic tipping point on the way to autonomous cars that are safer than humans, but we have zero proof of that, and Tesla isn't even trying to argue that.
 
You can buy FSD in France, but there is no beta program... so far.
 
No idea. Not sure about France, but it is illegal to drive uninsured or unsafe vehicles here...
Obviously it's illegal to be illegal on the roads here, but those drivers constitute a greater risk than FSD; in fact the beta program ‘might’ make the roads safer, and I look forward to the introduction of beta in France.
 
You can buy FSD in France, but there is no beta program... so far.
Which, according to Elon, is because the regulators won't allow it:

Of course, he said that about robotaxis in 2020 too...

in fact the beta program ‘might’ make the roads safer
Can you please explain how one might even argue that current FSD is safer than a human driver, and why Tesla requires a high safety score to use it if it's safer? If it really made things safer, why isn't Tesla advertising this? Why aren't they giving it to everyone who has paid, which is the ethical thing to do for safety-enhancing technologies?
 
Tesla already removed one feature last week: the car can no longer take you to an exit ramp, because of European law. Yet regulators are creating new laws to enable the use of Mercedes' L3 system; I suppose the power of the German automakers in Europe is paramount.
Since the Beta program is probably the only active large scale monitoring system of driving habits on US roads, and is about to go international, I say it can only be a force for good...even if a minority can game the system
 
Can you please explain how one might even argue that current FSD is safer than a human driver, and why Tesla requires a high safety score to use it if it's safer? If it really made things safer, why isn't Tesla advertising this? Why aren't they giving it to everyone who has paid, which is the ethical thing to do for safety-enhancing technologies?
Because the only plausible mechanism by which FSD Beta could make driving safer is to put people in a hyper vigilant state of fear that it will steer into oncoming traffic.:p
They could also be concerned about the perception that it is unsafe though Elon has said he doesn't care about that.

It will be interesting to see if it's ever approved in Europe (Elon has said that it does require approval there). Tesla should make a 130 km/h L3 highway system, since there is a regulatory framework for that.
 
Since the Beta program is probably the only active large scale monitoring system of driving habits on US roads
You are blissfully unaware of the large number of insurance companies in the USA that do collect driver data to set insurance rates and just how mature this process is. This is nothing new in the USA, it has been going on for a decade, and even if every Tesla joined this, it would not come close to doubling the amount of driver monitoring. A lot of stuff Tesla does is nowhere near as novel as people think it is.

It's also illegal in California because it is of dubious value and can easily end up being biased. It's also not tied to any statistical change in overall road safety in places it is used.
I say it can only be a force for good...even if a minority can game the system
Disagree, because people gaming it may do things like avoid braking for yellow lights to not get a score reduction. But even if it's a force for good, you have to balance that against the risk FSD causes.
 
Why aren't they giving it to everyone who has paid, which is the ethical thing to do for safety-enhancing technologies?
Darn, you’ve just made it clear to me why my ulterior motive for buying FSD (I was hoping for better background safety features a few years down the road to prevent me from crashing inadvertently) is a pipe dream. (However, I have to say I have derived significant entertainment value from the beta, though I have not been able to monetize it with social media!)

Tesla won’t develop better active background safety features in a timely manner (they’ll just follow the industry), because it reduces the value add of FSD. In the end people just want to not get in accidents (and some users are hoping FSD can help with that, not just drive itself) and if they can get an awesome background safety package (which ethically would be included for free and in fact I think Elon has implied that they will be), why bother with adding FSD? Those users would already be getting most of what they want!


Darn. (This is aside from the question of whether such safety feature additions are even possible with current hardware without crippling false positives. Also aside from the question of whether an extremely capable background safety package would lead to risk compensation; I tend to think it would not if implemented correctly.)
 
There isn't a single company, that I'm aware of, deciding how much risk to take. There are multiple companies testing various SAE levels on public roads, all under the regulations of various agencies. They are required to report incidents to NHTSA, and there are reports from NHTSA on that data. There have been recent discussions on these forums about the number of accidents on Tesla AP/NoA/FSD in relation to the number of cars on the road, miles driven, etc., based on the most recent June 2022 NHTSA report. One interesting thing found in that report is that Tesla is one of the few companies that gives very complete data, due to the telemetry data being kept on the vehicle and sent to Tesla remotely. The report showed that other companies relied almost solely on user reports, or the telemetry wasn't available after the incident.

How did they evaluate the acceptable risk?
Like other companies testing various SAE levels, the primary goal (prime directive if you will) is safety. Don't hit things and follow the traffic laws. They start in simulations, then move to test tracks, then move to limited locations with select testers (employees), then when they have all the data they can collect from controlled locations, they move the test to uncontrolled locations (either with safety drivers for higher SAE levels, or public testers for lower SAE levels). They collect data either with our knowledge (pressing a report button) or without it (telemetry is automatically uploaded to Tesla). If an increase in incidents occurs after an update, the rollout of the software halts, and in some cases it is rolled back. We've seen this with previous updates.

How does their tool identify low risk testers?
Much discussion has been had regarding the Safety Score program. Some think it's an excellent tool, and others think it's a charade. I think the answer lies somewhere in the middle (like most things). The Safety Score program primes people to pay more attention to their driving habits. People become hyper-aware of the metrics being monitored, such as following distance and aggressive stopping/turning. The most common phrase you hear is "driving like a grandma". Does it really gauge good vs. bad drivers? I can't answer that, except to say that it forces people to watch their driving more than they normally would. They have to do so for some number of miles over some amount of time. In some cases, it may change habits that some people had. It also weeds out some drivers - I've read many on the forums who have shown disgust with the program and opted out so they could drive normally. So, the Safety Score has had some effect. On the other side, I also believe it's there for regulatory scrutiny. There must be some CYA for any company testing hardware/software in public. Regulations currently do not require a safety driver for L2 testing, but I'm sure there must be some gatekeeping that has to occur to satisfy watchful eyes.
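For what it's worth, the kind of aggregation a score like this performs can be sketched in a few lines. This is purely illustrative: the weights and formula below are made up (Tesla's actual calculation is not fully published), and only the factor names echo the publicly described Safety Score inputs.

```python
# Illustrative only: a toy "safety score" aggregator, NOT Tesla's actual
# formula. Factor names mirror publicly described Safety Score inputs;
# the weights are invented for this sketch.

def toy_safety_score(hard_braking_rate, aggressive_turning_rate,
                     unsafe_following_rate, forced_disengagements):
    """Return a 0-100 score; higher means fewer risky events per 1,000 miles."""
    # Hypothetical weights: each risky behavior subtracts from a perfect 100.
    penalty = (25 * hard_braking_rate
               + 20 * aggressive_turning_rate
               + 35 * unsafe_following_rate
               + 50 * forced_disengagements)
    return round(max(0.0, 100.0 - penalty), 2)

# A cautious driver vs. an aggressive one (event rates per 1,000 miles):
print(toy_safety_score(0.1, 0.2, 0.05, 0))   # 91.75
print(toy_safety_score(1.0, 1.5, 0.8, 0.2))  # 7.0
```

The point is just that any such score collapses several monitored behaviors into one number, so two quite different driving styles can land on the same score - which is why people can "game" the individual metrics.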

What benefits is Tesla getting from the risk they are taking?
All but the most fervent naysayers will admit that the FSD Beta program has improved over time. Regressions do occur, but are usually ironed out in a future update, and new issues emerge. Then they are ironed out. Phantom Braking is getting better for many people, and confidence on turns is increasing. The benefit Tesla gets is data. They get to see how their code and neural nets are working in the real world. And they make adjustments and tweaks to the software/hardware based on the data they've received. Their end goal is to have a product that drives safer than a human, with minimal human interaction. And, obviously, charge a price for this feature.

What benefits is society getting?
Cars that drive safer than a human. Fewer accidents. Fewer deaths. And possibly less traffic.

Are there alternatives to the risk?
There are always alternatives. We could stay with the horse and buggy and never get the next technological advancement. Or we can slow down progress and take our time getting there - perhaps we'll have FSD in 50 or so years. But mankind is on a road of discovery, and I don't think it's going to slow down.
 
Yes, we have those things in Europe, and the apps that go with them... but the insurance company or the apps can't control the car.
 
Fraud in this case must be in the eye of the beholder (or the expectation of the beholder, or a lawyer of the beholder). Meanwhile, I guess I have one of the only cars that actually works right: my FSDb sessions are nearly perfect (better than my expectation, actually). I don't flaunt them on YouTube (they'd be boring) - easily Level 3 per the definition. So for me, I got my money's worth (I paid $4k after having EAP for a couple of years; the salesman said "don't buy it yet, it doesn't really do anything" circa 2019), so far, with hopefully more to come.
It is a very loosely worded "beta" (more of an alpha-plus), and we've all known the term FSD is a misnomer (as was Autopilot). What I think should happen is that sales and the website should have an "are you really sure?" multi-entry button that warns the buyer what they are getting, again and again. Therefore it is buyer beware, but it is certainly not fraud. Oversold to the masses as a panacea? Almost. In the meantime, I am enjoying it driving for me immensely.
 
One interesting thing found in that report is that Tesla is one of the few companies that gives very complete data, due to the telemetry data being kept on the vehicle and sent to Tesla remotely.
To a casual reader, this sort of implies that Tesla provided the most complete data, which is certainly not true. If you include robotaxi companies (not auto manufacturers - sure, those are probably not as complete), they have much more complete reporting than Tesla, as far as I can tell. That is one of the reasons the numbers you calculated end up with such a high percentage of Waymo/Cruise vehicles involved in accidents. In general I'd expect that number (the way it was calculated) to eventually exceed 100%, and quite quickly, while exceeding 100% considerably more slowly for Tesla, for obvious reasons.
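The "over 100%" point is just arithmetic: the report counts are cumulative, so a small fleet that keeps filing incident reports will eventually have more reported involvements than vehicles (the same car can appear in multiple reports). A toy sketch with entirely made-up numbers:

```python
# Toy arithmetic (all numbers made up): why cumulative "vehicles involved
# in reported accidents / fleet size" can exceed 100% for a small fleet.
fleet_size = 300         # hypothetical small robotaxi fleet
reports_per_month = 15   # hypothetical steady rate of incident reports

cumulative = 0
for month in range(24):  # two years of reporting
    cumulative += reports_per_month

pct = 100 * cumulative / fleet_size
print(f"{cumulative} reports / {fleet_size} vehicles = {pct:.0f}%")
# prints "360 reports / 300 vehicles = 120%"
```

A huge consumer fleet like Tesla's crosses that 100% line far more slowly, simply because the denominator is orders of magnitude larger.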
 
Funnily enough, the Tesla salesman told me not to buy FSD...I guess that dispels the theory that Tesla ‘marketing’ is defrauding people
 
There isn't a single company, that I'm aware of, deciding how much risk to take.
Sure there is. Tesla released FSD without any regulatory oversight, and continues to do so. Reporting crashes is not regulatory oversight on the calculation of risk. It's a way to determine after the fact if the company allowed too much risk on the road to decide if future oversight is needed.

How did they evaluate the acceptable risk?
What you described here is how they MANAGE risk, not how they evaluated when the system had sufficiently low risk to be released to public roads with public drivers. Was their internal threshold 1 fatality in 1,000 miles or in 10M miles? How did their testing acquire data to prove this before release?

the primary goal (prime directive if you will) is safety. Don't hit things and follow the traffic laws.
Except speed limits and rolling through stop signs in Tesla's case, which they purposefully allowed.

How does their tool identify low risk testers?
Again, your whole description talked about what it does. What safety-focused regulators care about is whether it actually works: the hard numbers, the statistics. The things we don't get to see - and, just like you said, we have zero idea if they make any difference.

All but the most fervent naysayers will admit that the FSD Beta program has improved over time.
With zero proof these improvements came because of data collected from public FSD users in the real world. Of course Tesla is developing FSD and making it better. But what percentage of those improvements were based on data taken from risking the uninformed public?

What benefits is society getting?
Cars that drive safer than a human. Fewer accidents. Fewer deaths. And possibly less traffic.
You mean we MIGHT get these cars. And they MIGHT come from Tesla. But they might also come from a company that never tested on the public road. We absolutely do not have them now, and the current cars are higher risk than ones without the technology.

Or we can slow down progress and take our time getting there - perhaps we'll have FSD in 50 or so years. But mankind is on a road of discovery, and I don't think it's going to slow down.
The FAA is "slowing down" all sorts of development in aviation. The FDA slows down drugs all the time. Are you in support of getting rid of this and just letting companies do what they want with no oversight? What if Tesla suddenly says they are at L4 and no driver is needed in the car? Are you OK with them just dropping 1,000 of those cars on the road in your city without even telling you, or do you maybe hope there is actually someone, somewhere, "slowing them down" to make sure they are actually ready before one runs over your kid on the way home from school?
 
Can you please explain how one might even argue that current FSD is safer than a human driver, and why Tesla requires a high safety score to use it if it's safer? If it really made things safer, why isn't Tesla advertising this? Why aren't they giving it to everyone who has paid, which is the ethical thing to do for safety-enhancing technologies?
You seem to be conflating FSD Beta with the final FSD product (released from beta). Of course the goal for FSD (including Autosteer on City Streets) is to be safer than a human driver. "Current FSD", by which I assume you mean FSD Beta (Autosteer on City Streets), is in a testing phase. Beta testers are working with Tesla to help fine-tune and expand the features of the software so that it will be safer than a human driver. The Beta requires constant vigilance while driving - many have commented on how stressful using Beta is - to correct problems that occur during the drive and report those problems to Tesla so the software can be adjusted. Other aspects of FSD Capabilities (including EAP) are also in beta, but don't have active public beta testers; Tesla is working on those internally and through passive public testers (collecting telemetry from cars as they use the features). There have been many articles published about how much telemetry data Tesla collects from their cars.

If your argument is going to be that FSD Beta is not currently safer than a human, and therefore shouldn't be allowed on the road, I'll refer you back to other comments about how technology has to eventually leave simulations and closed test tracks, and be put on real roads in order to progress.
 
To a casual reader, this sort of implies that Tesla provided the most complete data, which is certainly not true. If you include robotaxi companies (not auto manufacturers - sure, those are probably not as complete), they have much more complete reporting than Tesla, as far as I can tell. That is one of the reasons the numbers you calculated end up with such a high percentage of Waymo/Cruise vehicles involved in accidents. In general I'd expect that number (the way it was calculated) to eventually exceed 100%, and quite quickly, while exceeding 100% considerably more slowly for Tesla, for obvious reasons.
There were two different reports - one for ADAS and one for ADS. The report I was referencing was the ADAS report, in which Tesla had more data than most other companies. On ADS, on the other hand, you are totally correct.
 
Yes, we have those things in Europe, and the apps that go with them... but the insurance company or the apps can't control the car.
Are you suggesting Tesla is going to start "controlling the car" based on driver monitoring? They don't do this in the USA either. What are they going to do, not allow you to drive anymore if they think you are not sufficiently safe? Limit your top speed to 10 under the speed limit? Only let you drive from 1:30pm to 2:47pm?
 
Have you not heard of Tesla jail? I only meant that Tesla's monitoring is far more accurate.