
FSD is a fraud

and it would require NDA to get more specific, which I will not.

...

Wish those who don't know would just keep quiet. Bad info is worse than no info.
I don't disagree with any particular point of yours, but I find this logic unnecessarily gatekeepy. Nobody has full knowledge of the inner workings of every self driving tech leader. Dismissing everybody else in a public discussion for not having non-public information isn't helpful.
 
I feel criticized and corrected, but I don't think I said anything wrong. They might have alien tech, I don't care. I made it very clear what I care about: I care that my car gets better over time. Does ME deliver that? I don't think so - not yet. So when shopping for a car, because L4/L5 does not exist, if I buy a ME car, I'll have to buy another one in the future if I want an AV. With a Tesla, it will get better over time. Maybe to the point of autonomy. If Tesla fails, I'll have to buy another car if I want it. So no loss. If they succeed, yay me. If autonomy isn't reached in my lifetime, at least I have a car that farts.
Can't argue with being pragmatic when using a problem-solver's mindset. While you might have made the decision with your eyes wide open, others might have had their eyes wide shut. All that glitters is not gold. In any case, I'm not going to state caveat emptor and leave the conversation there, so I will quote Carl Sagan:

"I have a foreboding of an America in my children's or grandchildren's time — when the United States is a service and information economy; when nearly all the key manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness."
 
I never knew the concept of caveat emptor, but dang it, I use my time to make my money. Of course there are things out of our control, namely Google vs Apple in the mobile industry; I'm not fond of either, but I do have a preference. Some things are a necessary evil, I get it, but even then I still try to get the best out of my decisions. If I'm being honest, you threw me for a loop here. In a good way. One of those messages I'd need time to digest and process.
 
ME is a behind-the-scenes company. If their module gets an OTA update via the host (the car), it's the same as what you are used to via Tesla's OTA updates.

And anything this complex will have OTA updates. Eventually all cars will have OTA updates, since it makes so much sense to be able to push things to customers without a 'truck roll', so to speak ;)

It also gives them the ability to 'pull' (read) info from the car remotely, and they all LOVE data mining. This is the future for all cars. And governments will want part of that data stream, so that's yet another reason we'll see more OTA updates from vendors, not fewer.

What ME does for you is help bring the whole field further along; it's going to take a humanity-driven set of data and algorithms to make this really work. The more collaborators, even if they're competitors, the better. People change jobs all the time and the info 'dissipates' to the various companies. In a few years, the shared info will find its way into 'software chips', and later parts will be converted to silicon and given a purer hardware assist (like all maturing technologies). At that point, the proprietary offerings will start to EOL themselves, and vendors will want to use the more standard, faster-to-market industry chips with proven safety levels. When the chips become somewhat commodity, those still stuck with their proprietary stacks will have a bigger catch-up job to do. It's one of the costs of being first: you won't be 'the ISO standard', so the real-world standard will be vendor neutral, and all vendors who want to claim conformance will have to meet the specs. That's how standards bodies work, and it works reasonably well, in fact.

Eventually, the tech behind 'self driving' will be the names of vendors, likely ECU vendors, and these will simply be compute and control modules integrated by 'dumb' vendors like Ford, Chevy, etc. They don't need to know a lot about what goes on inside the ECU; they are given an interface spec and some sample reference code, and they take it from there.

We WANT self-driving to be like this, eventually. It's a huge mess if everyone does their own thing and keeps 'testing' it on live roads. The sooner we get past the alpha/beta phase, for everyone, the better.
 
It needs to at least be able to handle all the situations you would encounter in a city in even a bare-minimum capacity, but it's obviously not there yet (unprotected left turns are an obvious one, as pointed out; I was just discussing in another thread that the car still can't read digital speed limit signs, as another example). That doesn't mean it's dangerous, however. If it were dangerous, we would have heard about an accident already.
Why does it need to handle all of them? Highway AP couldn't handle more than going straight for years, and Tesla released that. It still can't handle some common situations, like digital speed limit signs or lane splits. Tesla's willingness to ship Highway AP in a far-from-complete state strongly contradicts the idea that city streets somehow needs to be complete before more than 71 customers can get it.

It's also very, very far away from being complete in "all the situations you would encounter in a city." It has made no progress in the areas you mentioned in the last 10 months. So it appears you are someone who believes we are years away from a broader release, not just a few months like a lot of other people?

The average accident, by Tesla's own numbers, occurs every 2.05M miles. There are 71 public beta testers that have had v9 for one week. That would be ~28,000 miles on AP each before you expect an accident. In such a small sample, it does not need to have caused an accident to be unacceptably unsafe.
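To make that expectation math explicit, here's a quick sketch; the ~400 miles/week pace is an assumption (the same figure used later in this thread), not a reported number:

```python
# Back-of-the-envelope check of the accident expectation above.
MILES_PER_ACCIDENT = 2.05e6   # Tesla's own fleet-wide figure, as cited in this thread
TESTERS = 71                  # public v9 beta testers

# Miles each tester must log before one accident is expected across the group:
print(f"~{MILES_PER_ACCIDENT / TESTERS:,.0f} miles per tester")  # ~28,873

# At an assumed ~400 miles/week each, week one yields a tiny expected count:
weekly_miles = TESTERS * 400
print(f"expected accidents in week one: {weekly_miles / MILES_PER_ACCIDENT:.3f}")  # ~0.014
```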

Plus, the thing is that you say it can't handle an unprotected left. It can. It sees it, and it does it. No video shows it being completely unable to make the left. It just can't do it without hitting another car. That's the definition of having the feature but it being unacceptably unsafe! If they released it today, it would try these lefts and hurt people, not just decline them and fail to navigate to a destination. Your very definition of it "not doing something" is based on it not doing it reliably enough to avoid hurting people.
 
I think, for claimed self-driving capability versus the actual level of automation, it's the notion of truthlikeness (or, in philosophy, verisimilitude).
 
"Fraud" requires malicious intent. Neither "irrational exuberance" nor even incompetence rise to the level of "fraud".

The first sentence is not true. Fraud means knowingly stating falsehoods. I think it's fair to say that Tesla routinely commits fraud. I think Tesla knows that it cannot deliver the FSD feature set that it promised by "later this year" at a level that would be acceptable to consumers, so FSD is probably fraud. I also think it was fraud for Tesla to say that the Model S/X refresh would be delivered in February/March; there is no way they could have reasonably believed that.
 
I'm disturbed you believe "knowingly stating falsehoods" is not malicious.
 
I don't think that's true in strict legal vernacular. In any case, I also don't think one would win in civil court (the State would need to bring criminal charges, and I doubt they would). One would first have to prove intent and that they incurred damages (actual loss). There is a bit of buyer beware (caveat emptor) here, and it would come down to what was contractually and verbally stated, and what was acceptable at the time of purchase (was anything hidden; were promises made?). If Tesla knowingly made exaggerated claims in order to deceive (that would need to be proven), one might have a case, but what would they win? And what would it cost if they lose (one could be on the hook for costs beyond actual loss, i.e. legal fees, lost use, etc.)? However, most, if not all, buyers probably will have signed away their standing to sue and must contractually enter into arbitration.
 
The purchase contract specifically states (at least, it did a few years ago) that oral or even written representations by a Tesla employee not authorized to make them are not binding on the company.
 
V9 has made significant progress in the areas mentioned (unprotected lefts and reading signs); I'll respond to the point about left turns below, and Elon recently tweeted they are working on supporting digital signs.

You are being hugely disingenuous by leaving out the other 2,000 testers (a 29x factor), even if they may be employees, and by only counting v9 (FSD Beta has been out since last year; the expansion to 2,000 happened in March). So just from the March 13 confirmation of 2,000 testers, that's 18 weeks: 18 weeks × 400 mi/week × 2,000 testers = 14.4M miles. We should have had at least 7 accidents in that time period, even without counting earlier testing.
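A minimal sketch of that estimate, reusing the same assumed 400 mi/week per tester:

```python
# Sanity check of the 14.4M-mile estimate above.
WEEKS = 18                    # since the March 13 confirmation of 2,000 testers
TESTERS = 2000
MI_PER_WEEK = 400             # assumed per-tester pace (same figure used above)
MILES_PER_ACCIDENT = 2.05e6   # Tesla's fleet-wide figure

total_miles = WEEKS * TESTERS * MI_PER_WEEK
print(f"{total_miles / 1e6:.1f}M miles")                              # 14.4M
print(f"expected accidents: {total_miles / MILES_PER_ACCIDENT:.1f}")  # ~7.0
```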

In V8.2 it was extremely hesitant even with a large gap in traffic and went too late (it was like this at least two times in the video). If it crashed in an attempt at a left turn, it hasn't completed the left turn. It only successfully completed 3/8 tries. If it can't even finish the turn more than half the time, I don't consider that ready for general public consumption, as it'll be an annoyance, not a useful feature (regardless of safety).
In V9.0 it has much more confidence and goes for the big gaps: 4-5/8 tries this time. Still not good enough (just barely at half or over), but it's an improvement.
 
Why do you assume that no crashes have occurred in the 2,000-car population? For all you know, the Tesla employee agreement says that if an accident occurs while using "FSD", you don't tell anyone. It doesn't legally need to be disclosed, since Tesla is adamant this is an L2 system. How would it make it to the public that it happened?

400 miles a week using City Streets Autosteer is an insane amount. Average city speeds are under 25 MPH, so this is 16 hours a week with CSA engaged. Highway AP doesn't count, manual driving doesn't count. Don't these people have jobs at Tesla to be at?

Finally, testing under v8 is irrelevant for v9, if it's true that it's a fundamental, complete rewrite. No data from v8 applies to v9 if everything is new. Even at 2,000 people at 400 miles a week that's only 800k miles.
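Run the same numbers on v9 alone and the expected accident count drops well under one, so a quiet week tells us almost nothing; a minimal sketch, on the same assumptions:

```python
# v9-only mileage, keeping the same (generous) per-tester assumption.
TESTERS = 2000
MI_PER_WEEK = 400             # assumed pace, as above
WEEKS_ON_V9 = 1               # v9 had been out about a week at this point
MILES_PER_ACCIDENT = 2.05e6

v9_miles = TESTERS * MI_PER_WEEK * WEEKS_ON_V9
print(f"{v9_miles:,} miles")                                             # 800,000
print(f"expected accidents on v9: {v9_miles / MILES_PER_ACCIDENT:.2f}")  # ~0.39
```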

Yes, better. However, not safe. The reason it's not working is not that it can't turn left; it's that it can hurt you when it does. I still don't see any argument that the system is safe enough to release but merely missing features, such that releasing it would be pointless for Tesla. Again, if it is acceptably safe in the current release, why is Tesla not releasing it? It's good enough that Tesla is happy to have hundreds of videos of it posted, and is happy for it to be marketed. If it's good enough for that, why is it not good enough for the Tesla owners who paid $10K for it to get it?
 
A Tesla can't even fart without someone making a YouTube video of it
 
Show me a list of every Tesla that has had an accident in the last month.

There are 1.5M Teslas in the world. Tesla's own numbers say an accident every 2.05M miles. At 12K miles a year each, that's roughly 50M miles daily. Statistically, about 25 Teslas a day get into an accident serious enough to deploy a passive restraint. 99.9% of these are not newsworthy. It's only newsworthy if it happens when a Tesla fails, instead of a human. This is one of the reasons that Tesla is fighting so hard to stay out of any kind of reporting requirements. As long as they can blame the human, it's not newsworthy, because humans fail all the time.

If @stopcrazypp is right, and v8 has done 14M miles without an accident, then this is a remarkable achievement that Tesla should be publicizing. CSA would already be 7X as safe as a human driving, and 3X as safe as current AP, in a much more demanding and statistically dangerous environment. Tesla would in fact face ethical problems in NOT releasing it, given how it would save lives, and how safety is their top priority.

Or, you know, it has nowhere near 14M miles on it, or it does have a lot of crashes.
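For anyone checking the math above, a rough sketch (the AP-engaged rate at the end is implied from the 3X claim, not a figure from this thread):

```python
# Rough fleet-wide accident arithmetic from the post above.
FLEET = 1.5e6                  # Teslas on the road
MILES_PER_YEAR = 12_000        # assumed per-car annual mileage
MILES_PER_ACCIDENT = 2.05e6    # Tesla's fleet-wide figure

daily_miles = FLEET * MILES_PER_YEAR / 365
print(f"~{daily_miles / 1e6:.0f}M miles/day")                      # ~49M
print(f"~{daily_miles / MILES_PER_ACCIDENT:.0f} accidents/day")    # ~24

# The "7X as safe" claim: 14M crash-free CSA miles vs. the 2.05M baseline.
print(f"{14e6 / MILES_PER_ACCIDENT:.1f}x the fleet baseline")      # ~6.8x
# "3X as safe as current AP" would imply AP logs about 14M/3 ≈ 4.7M miles
# per accident, roughly the ballpark of Tesla's published AP-engaged numbers.
```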
 
While not every accident will be disclosed, judging by how commonly the media reports AP accidents, if 7 accidents had happened with FSD Beta, I find it extremely unlikely that not even a single one would have been reported.
You can bring the 400-miles-a-week assumption down, but I'm just using the same figures you did.
That v8 data is irrelevant is not true, from my understanding. Karpathy's talk says they run regression tests (and add new ones all the time), and each release must at minimum not regress on their test suite before it is pushed out to the beta testers. They aren't starting from scratch.
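To make "must at minimum not regress" concrete, here is a toy sketch of such a release gate. The scenario names and pass/fail results are entirely made up and say nothing about Tesla's actual tooling:

```python
# Toy release gate: the new build may not fail any scenario the old build passed.
# Scenario names and results below are hypothetical.
old_build = {"unprotected_left_3way": True, "lane_split": True,
             "digital_speed_sign": False, "roundabout_yield": True}
new_build = {"unprotected_left_3way": True, "lane_split": True,
             "digital_speed_sign": True, "roundabout_yield": False}

def regressions(old: dict, new: dict) -> list:
    """Scenarios the old build passed that the new build now fails."""
    return [s for s, passed in old.items() if passed and not new.get(s, False)]

failed = regressions(old_build, new_build)
if failed:
    print("block the push:", failed)   # -> ['roundabout_yield']
else:
    print("no regressions; eligible for beta push")
```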
I already pointed it out: if it needs the driver to take over half the time or more, why would it make sense for Tesla to release it, regardless of safety? For it to be a useful feature, it needs to provide some benefit. Tesla perhaps doesn't need to address every corner case first, but it needs at least basic competency in fairly common tasks (like unprotected left turns) to be a useful feature.
 
The issue is that this is completely different from Tesla's other features. Smart Summon is clearly a party trick that fails more than half the time, yet Tesla released it and continues to advertise it. Auto Lane Change is worthless for many people on any kind of dense highway. When Tesla first released AP, it couldn't even do the speed limit.

Why is CSA so different from the other features they have released, especially a feature that is so important to the company in terms of marketing?

No, you added 400 miles per week of Autosteer, which is nuts. That's 20,800 miles of driving on CSA a year, when the average car in the USA does 12,500 total.

How would the media know it happened when CSA beta was engaged? There are 25 Tesla accidents a day in the USA. These employees would be bound by NDA to not tell anyone, and it's impossible from the outside to know CSA was engaged.

I know they aren't starting from scratch; I'm just poking fun at those who say v9 is a complete rewrite.
However, there is no way traditional regression testing works here. If their test suite can show that the removal of radar didn't cause any regressions, then it's an amazing simulation. It recreates video and radar so accurately that it makes you wonder why they need real-world tests at all. Simulation that Tesla has traditionally argued they don't need or want. And simulation so accurate it could be used to train the NN directly, not just test for regressions.

It also means their sim can create left-turn scenarios and see that v9 works as well as v8. But it still fails 50% of the time. But they still decided to release... But they have a sim so accurate that it could have been used to train the system...

Regression tests are great for very controlled scenarios, or code with simple stimulus/response. There is no way Tesla has sufficient coverage of autopilot behaviors to claim it is as safe as before.

Everything still points to the idea that Tesla knows this isn't safe enough (or has no idea at all, which is the same thing) to be in a broad set of people's hands, not that it is plenty safe, just not useful. Usefulness has not been a strong deciding factor for the company that adds fart noises to its cars ;)
 
Replace 'Alpha Centauri' with 'Heaven' from any religious narrative. See the problem with this? Intent matters. A sincere belief in a claim may be at cross-purposes with the data, may not be testable, may violate the laws of logic, but fraud it is not. Intent matters.

Fraudsters employ intentional deception to manipulate one or more subjects into a belief that benefits the fraudster's motives. This could be the faith healer, con man, 'psychic', quack doctor practicing some kind of sham medicine, or someone selling two tickets to paradise in the afterlife.

The person who sincerely believes in something false or unavailable to peer-review or independent verification is not a fraudster—even if they're factually wrong. Their intent may be good and even altruistic from their sincere perspective. The fraudster knows they're defrauding people. The sincere believer is not the same as the 'faith healer' who uses myriad parlor tricks to fool audiences into believing their 'miracles' for an obvious profit motive. One may share their sincere belief (not fraud) but the other is committing knowing deception for personal gain...fraud.

Of course, this is a far cry from FSD, which is the named 'goal' of an iterative, convergent technological evolution that will lead to FSD at some point in the future—but it will happen. Safe and reasonable self-driving is possible now in most conditions barring edge cases, but without fixing those edge cases Tesla cannot really release FSD, nor would it be safe without quick and decisive intervention where necessary. This is why beta testing is so important. We are simply at a point in this evolution where FSD is not yet fully realized... like the early days of rocketry, before rockets were reliable enough to trust with precious human cargo. Edison tried rat hair as a light bulb filament, supposedly, long before he settled on a workable one. :) In other words, iterative, evidence-based technology cannot be compared to fraud simply because it's not fully realized at the moment.

That said, I think fraud requires knowing deception and an intent to defraud.
Well said!
 
In most cases fraud is committed not through the claims that are made but through what is hidden.

I will argue you are correct.

Fraud is one of those terms that gets suggested anytime a buyer feels a product does not meet their expectations. In this case FSD has been misrepresented to some degree (was it ever claimed to be Level 5?)—but that's not necessarily fraud. One thing that matters is what's 'written' (e.g. the contract agreement, disclaimer-of-reliance provisions).

However, as a form of deception, fraud is often about what is NOT disclosed, not what's been embellished. To determine whether fraud has been committed, materiality must be established. Part of this is determining the defendant's intent and how the plaintiff acted in reliance on that information. For both the plaintiff and defendant this is often referred to as one's state of mind, which is part of intent. I think in the case of FSD, disproving fraud may be much simpler, and it comes right out of the skepticism written into some of the responses to this thread.

There is always the assumed risk of early adoption. Also, the technology itself is one that requires iteration and more data (time) to develop to higher levels of fidelity. One's knowledge, or lack thereof given its availability, will be questioned. Given the press, what one knows about AI in general and its current level of use in everyday life, and the current fidelity of autonomous systems (level of trust), should all be part of one's decision and one's "state of mind". One cannot simply claim ignorance and use gullibility as a defense. It's not just what one knows but what one should have known. If one is willing to put trust in what at this point can reasonably be assumed to be a novel technology, one must take reasonable steps to understand what they are buying (i.e. due diligence). It is not fraud to sell tickets for a trip to Alpha Centauri, since man has only made it to the moon.

I'm not saying I entirely agree with this. It's why I posted the Sagan quote (I'll repost it below). But software companies do this all the time and we've just accepted it. It would be difficult to now claim we don't know that software doesn't always work as intended and needs to be patched. One may say FSD is so far below one's level of expectation that it is totally nonfunctional. Nevertheless, one's expectation alone does not rise to the level of fraud. It might entitle one to a refund. However, there are those pesky disclaimer-of-reliance provisions, arbitration clauses, and class-action waivers.

"I have a foreboding of an America in my children's or grandchildren's time — when the United States is a service and information economy; when nearly all the key manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness." ~ Carl Sagan

PS: One must consider FSD as an innovation; it is both revolutionary and evolutionary. Fully autonomous driving will be a fundamental shift in approach. It does not exist yet. If one has traveled on any road or highway, they know this. If not ... ?