
FSD is a fraud

Well, the good thing about all this discussion is we will know soonish (now or years away) if this is all correct
Ahh, "Soonish" - the most applicable word to apply to FSD ever. It simultaneously sounds like it means something while having no content at all.
I'm just excited for progress, regardless.
I was too when I got my car in 2016 when FSD was first sold. Still waiting for something that looks anything like what was advertised. I still can't believe they are fighting through city streets autosteer instead of actually doing a useful L3 highway system.
 
Even if Tesla is somehow using disengagement data to train some NN, it seems like there's a lot of garbage being generated. People often don't disengage even when the car is driving erratically, and many people use FSD Beta only where it "works well", creating the "bias" that you're concerned about.
To an extent that is an argument to continually expand the beta test pool (or even rotate it), since as you note the testers gradually learn the car's idiosyncrasies and mitigate them in advance, thus biasing the data. However, with a large enough pool you will get a cross-section of people who do let the car do crazy stuff. And in fact yes, larger data sets are important for training .. to get really good results you need huge datasets, not only for training but also for testing.

The disengagements are less about gathering data to directly inform the training set, more about testing the trained NNs in as varied an environment as possible.
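
To make the selection-bias point concrete, here's a toy sketch (synthetic numbers, nothing to do with Tesla's real data): if testers only engage the system where it already works well, the disengagement rate you see in the logs can look much better than the true rate across all road types.

```python
import random

# Illustrative only: made-up rates, not real FSD Beta statistics.
TRUE_RATE = {"easy": 0.002, "hard": 0.05}   # problems per mile by road type
ROAD_MIX  = {"easy": 0.5, "hard": 0.5}      # how driving actually splits

def observed_rate(engage_prob, miles=200_000, seed=0):
    """Disengagement rate seen in logs when testers choose where to engage."""
    rng = random.Random(seed)
    engaged = problems = 0
    for _ in range(miles):
        road = "easy" if rng.random() < ROAD_MIX["easy"] else "hard"
        if rng.random() < engage_prob[road]:     # does the tester turn it on here?
            engaged += 1
            if rng.random() < TRUE_RATE[road]:
                problems += 1
    return problems / engaged

# Testers who engage everywhere vs. testers who avoid hard roads 90% of the time
print(observed_rate({"easy": 1.0, "hard": 1.0}))   # ~0.026, the true blended rate
print(observed_rate({"easy": 1.0, "hard": 0.1}))   # ~0.006, looks ~4x better than reality
```

Which is exactly why a bigger, more varied pool of testers matters: it washes out the habits of any one group.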
 
But even with that- the audit of processes is the audit of things like "did you do a safety review?" "To what standards did you base that review?" "Does your product break the law?" "Did you inform your testers they were being tested upon?" "You changed your hardware, are you sure the product doesn't shock people now?" "Did you do a Functional Hazard Analysis?". Yeah, it's just auditing "processes" but those processes are what prove the system is safe to release on the public.
How do you know Tesla is not doing things like this? You keep asserting Tesla is being irresponsible or unethical, but you have provided no real data to back that up. One very clear datum would be the car acting dangerously and causing major accidents, and that hasn't happened. So what is the basis for your claims?
 
How do you know Tesla is not doing things like this? You keep asserting Tesla is being irresponsible or unethical, but you have provided no real data to back that up. One very clear datum would be the car acting dangerously and causing major accidents, and that hasn't happened. So what is the basis for your claims?
I’m no NHTSA specialist, BUT…I’m guessing attempting to turn directly into the path OF AN ONCOMING TRAIN qualifies as “acting dangerously”.
 
I’m no NHTSA specialist, BUT…I’m guessing attempting to turn directly into the path OF AN ONCOMING TRAIN qualifies as “acting dangerously”.
You're right .. I saw a human do that only last week and get killed. CLEARLY we should stop ALL HUMAN DRIVERS AT ONCE. I hope you enjoy riding a horse to work. Oh ... wait .. horses can bolt! ... Maybe a bicycle?
 
Relevant
There are open questions about how many problems "more data" really solves in this space
[image: "need more input" meme]
 
Ahh, "Soonish" - the most applicable word to apply to FSD ever. It simultaneously sounds like it means something while having no content at all.

I was too when I got my car in 2016 when FSD was first sold. Still waiting for something that looks anything like what was advertised. I still can't believe they are fighting through city streets autosteer instead of actually doing a useful L3 highway system.
Are you really trying to argue about the word soonish?? Haha. I did put "now or years away" in parentheses, which quantifies (gives content to) my usage :).

Also, if it was easy (and profitable) to accomplish this, other manufacturers would have done so already at a consumer level, not commercial. Elon admitted it was tougher than he thought multiple times. Again, we'll see soonish :)
 
Wait a minute. I thought Full Self Driving was going to be better than the best human drivers, not equal to the worst human drivers.
Nope. You misunderstood.

The goal has been stated as "better than the average human driver by the end of the year [2022]" (this time - the goal has been a moving target 🤷‍♂️). That means better than the worst and worse than the best humans :).

Most people think they are above average drivers, and that may be the case most of the time. Those slight moments where they are tired, texting, talking, makeuping, eating, drinking, yelling (at children), grabbing (for something in the floorboard), daydreaming, spilling coffeeing, radioing, and speeding (#1 cause of accidents/deaths) are when FSD will ALWAYS be better.

I think people are still conflating robotaxi (L5) with "better than the average human...". I'm optimistic but still think we're 2+ years from full robotaxi with Tesla vehicles. Better than the average human by 2022... I'm cautiously optimistic :)
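
Side note on "most people think they are above average drivers": with crash risk that can literally be true, because a small minority of very bad drivers drags the mean up. A toy calculation with made-up numbers:

```python
import random

# Made-up numbers: 90% of drivers are low-risk, 10% are high-risk.
# The risky minority pulls the mean up, so most drivers really are
# "better than average" (i.e., below the mean crash rate).
random.seed(1)
rates = [random.gauss(0.02, 0.005) if random.random() < 0.9
         else random.gauss(0.30, 0.05)
         for _ in range(100_000)]              # crashes per 10k miles, say

mean_rate = sum(rates) / len(rates)
share_better = sum(r < mean_rate for r in rates) / len(rates)
print(f"mean crash rate: {mean_rate:.3f}")                  # ~0.048
print(f"drivers better than the mean: {share_better:.0%}")  # ~90%
```

So the bar is the mean, not the median; most individual drivers are on the good side of it most of the time, which is exactly why those inattentive moments listed above are where the comparison gets interesting.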
 
Are you really trying to argue about the word soonish??
No, but I am going to joke about it given how applicable to Tesla's autonomous driving it is.
Elon admitted it was tougher than he thought multiple times.
Conveniently, after he had taken literally hundreds of millions of dollars from consumers for it while offering no refunds for his mistake. What a great guy for "admitting" his "mistakes"!
 
To an extent that is an argument to continually expand the beta test pool (or even rotate it), since as you note the testers gradually learn the car's idiosyncrasies and mitigate them in advance, thus biasing the data. However, with a large enough pool you will get a cross-section of people who do let the car do crazy stuff. And in fact yes, larger data sets are important for training .. to get really good results you need huge datasets, not only for training but also for testing.

The disengagements are less about gathering data to directly inform the training set, more about testing the trained NNs in as varied an environment as possible.
But they're generating so much disengagement data that there's no way they have the staff to analyze it. Even if they were able to analyze every disengagement, how would that help?
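
Presumably any analysis at that scale would have to be automated. Pure speculation on my part (nothing Tesla has actually described), but you could imagine a crude triage that scores each disengagement and only surfaces the worst few for human review or retraining:

```python
from dataclasses import dataclass

# Hypothetical event record and scoring rules -- a sketch of automated triage,
# not a description of Tesla's actual pipeline.
@dataclass
class Disengagement:
    speed_mph: float            # vehicle speed at takeover
    steering_delta_deg: float   # how hard the driver had to correct
    nearest_object_m: float     # closest obstacle at the time
    location: str

def severity(e: Disengagement) -> float:
    """Crude score: fast, forceful corrections near obstacles rank highest."""
    score = e.speed_mph / 10 + abs(e.steering_delta_deg) / 15
    if e.nearest_object_m < 5:
        score += 10
    return score

def triage(events, keep_fraction=0.01):
    """Keep only the top slice of events for human review / the training set."""
    ranked = sorted(events, key=severity, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]

events = [
    Disengagement(25, 2, 40, "suburban street"),
    Disengagement(45, 30, 3, "rail crossing"),   # hard yank near an obstacle
    Disengagement(60, 1, 100, "highway"),
]
print(triage(events))   # -> just the rail-crossing event
```

Whether anything like that actually exists, or whether most of this data just goes to waste, is the question.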
 
Dojo and something starting with GiGa
I've read that Dojo is in production. They expected to have it online by this year, as it was announced mid-2021. With the current massive global supply delays, especially with silicon chips, I'd expect the project to be delayed. There are major car manufacturers that are stopping production of vehicles because they can't get processors in enough quantity.
 
Speed limits are the one law FSD is specifically programmed to ignore (now that the NHTSA made it obey stop signs).
To be fair, since there is so much harping on Tesla and FSD, it's important to note that all cars with cruise control, adaptive cruise control, and ADAS systems can exceed speed limits at the driver's request. Even GM's new Super Cruise and Hyundai's new Ioniq 5 assisted driving can go over the speed limit. I don't believe there is any regulation that stops car companies from letting their ADAS systems exceed speed limits.

The reason I say this is that some people might be coming to these forums and finding information that incorrectly implies that Tesla is doing something that other manufacturers are not also doing.
 
Nope. You misunderstood.

The goal has been stated as "better than the average human driver by the end of the year [2022]" (this time - the goal has been a moving target 🤷‍♂️). That means better than the worst and worse than the best humans :).

Most people think they are above average drivers, and that may be the case most of the time. Those slight moments where they are tired, texting, talking, makeuping, eating, drinking, yelling (at children), grabbing (for something in the floorboard), daydreaming, spilling coffeeing, radioing, and speeding (#1 cause of accidents/deaths) are when FSD will ALWAYS be better.

I think people are still conflating robotaxi (L5) with "better than the average human...". I'm optimistic but still think we're 2+ years from full robotaxi with Tesla vehicles. Better than the average human by 2022... I'm cautiously optimistic :)

I’m afraid you are never going to convince me that it is A-OK for an autonomous automobile to occasionally get hit by a train. ;)
 
I don't believe there is any regulation that stops car companies from letting their ADAS systems exceed speed limits.
That's because there are no regulations at all, and because L2 systems are not autonomous at all. Just like you said, "at the driver's request." Autonomous systems have no drivers.

My point here was that when someone says "speeding (#1 cause of accidents/deaths) are when FSD will ALWAYS be better" - well, then what we have now is nowhere near FSD, because it does the one thing that supposedly is the highest cause of deaths. Which means it's not ALWAYS better, which means it's putting the public at unknown risk.

Anyone wishing for the L4 future should realize that as we move to actual L4, the car isn't going to be able to speed. Because real L4 means Tesla is liable for accidents and violations, not the "occupant", which means cops could pull over a Tesla going 1 MPH over and just collect money from Tesla all day long. Accident attorneys can sue Tesla every time an accident occurs and the car was going 1 MPH over. I mean, it is the #1 cause of accidents and deaths, right? How could Tesla ever allow their FULL SELF DRIVING system to do something so dangerous?
 