FSD is a fraud

This is a poor analogy. In actuality, there are many experimental aircraft flying around that are not approved by the FAA. Any licensed pilot could build one and fly it without FAA type-certification. No 'safety score' is required - only a valid FAA pilot's license suitable for that style of aircraft. Once constructed, the aircraft must be registered and inspected by a licensed mechanic, but does not need to be proven out by a special test pilot.

Yeah, I know all of this, given I have built and operate experimental aircraft and work in that industry.
You know what the FAA doesn't allow you to do with an experimental? Fly people or property for hire. They require big experimental signs on the aircraft so nobody who gets in it is unaware of the risk they are taking. The FAA tightly limits what can be done with the aircraft in order to limit exposure.
You're also wrong - no licensed mechanic inspection is ever required.
Yes, the FAA doesn't really consider people on the ground. The reality is that aircraft crashes rarely hurt people on the ground. But when cars crash, other users of the road are the most common victims.

So the FAA preventing you from using experimental aircraft to put the paying public at risk is directly analogous to limiting testing to approved drivers when driving out in public.
 
First off, training an NN with qualified drivers in closed environments means it will only work in those closed environments, which is pointless. Ultimately, you have to give an NN real training data, which means real driving on real streets with uninformed drivers.
Thanks. I never said in closed environments. I said professional drivers. Like lots of other companies are doing.
If the product is not safe enough to give to all licensed drivers, then it should only be given to trained professionals for testing until it is ready for release.

And yes, you do test this way with medical devices. Sure, you do various tests way before they are used on actual sick patients, but ultimately all medical devices are tested on sick human patients. That's pretty much what Tesla have done .. internal testing, simulated testing etc .. and then a wider beta release.
Every single person you test new medicines on gives consent and is informed. This is like patients getting radioiodine - they are actively told to stay away from other people because they can put them at risk, and those other people are neither consenting nor informed. Give me an example in medicine where we allow something where, if the test on a consenting person goes wrong, someone non-consenting also gets hurt.

And, in actuality, how is this "unethical" testing going? Well, we've had tens of thousands of testers using FSD beta for 9-10 months now .. where are all these terrible accidents that doom-sayers were predicting? Sure, perhaps some (many?) have been avoided by the driver taking over, but that's how an L2 system is designed to work.
Oh, because it has worked out, it's ethical? That's not how it works.
Also, the other argument here is that they should just release it to everyone that paid. If it's working so well and is so low risk, why are they still limiting who gets it?
 
A driving license implies that you understand the responsibilities of controlling the car...basically, unless the wheels fall off, everything is the driver’s fault.
Only Tesla (as far as I’m aware) has a continued policy of monitoring a driver’s competency....perhaps other manufacturers will catch on?
 
A driving license implies that you understand the responsibilities of controlling the car...basically, unless the wheels fall off, everything is the driver’s fault.
Only Tesla (as far as I’m aware) has a continued policy of monitoring a driver’s competency....perhaps other manufacturers will catch on?
The argument is that controlling a car is different from monitoring an automation system that is controlling a car.
 
Every single person you test new medicines on gives consent and is informed. This is like patients getting radioiodine - they are actively told to stay away from other people because they can put them at risk, and those other people are neither consenting nor informed. Give me an example in medicine where we allow something where, if the test on a consenting person goes wrong, someone non-consenting also gets hurt.
Covid vaccines, perhaps?
 
Thanks. I never said in closed environments. I said professional drivers. Like lots of other companies are doing.
My point remains, you will end up with a biased NN. And how do you know they did NOT already do closed testing with professional drivers? Do you have access to the internal testing schedule within Tesla for the past few years? In fact, it's pretty clear from the few things that leaked out that they were testing FSD beta for a considerable time in-house before they opened the testing to anyone.

So really what you are saying is you think FSD beta in its current form isn't ready for this more open phase of testing. Upon what do you base this assertion? Certainly not the safety of the car, since, as I've already noted, the car is doing rather well (or, more precisely, the combination of car and beta test driver).
 
we've had tens of thousands of testers using FSD beta for 9-10 months now .. where are all these terrible accidents that doom-sayers were predicting? Sure, perhaps some (many?) have been avoided by the driver taking over, but that's how an L2 system is designed to work.

I was one of those who thought things would be worse than they have proven to be (so far). I will say that I am indeed surprised by how few accidents have occurred so far during the Beta program.

However, I would posit that the lack of accidents is more a function of how poorly the FSD Beta performs than anything else. Most (biased) online polls here seem to show a disengagement rate of around one every two to five miles. Even at only 30 mph, that is an intervention every four to ten minutes. If your car is going to hit something or steer into traffic or whatnot every four minutes, you will pay very close attention.
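A quick back-of-envelope check of that arithmetic (a minimal sketch; the two-to-five-mile disengagement figures come from the informal polls mentioned above, not any official source):

```python
# Time between interventions = miles per disengagement / speed, in minutes.
def minutes_between_interventions(miles_per_disengagement: float, mph: float) -> float:
    return miles_per_disengagement / mph * 60

for miles in (2, 5):
    mins = minutes_between_interventions(miles, 30)
    print(f"1 disengagement per {miles} mi at 30 mph = one every {mins:.0f} min")
# Prints one every 4 min and one every 10 min, matching the estimate above.
```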

I think the danger will come when FSD Beta becomes more competent. If you only have one intervention every 30 to 60 minutes it becomes much more difficult to keep up that level of concentration.
 
However, I would posit that the lack of accidents is more a function of how poorly the FSD Beta performs than anything else. Most (biased) online polls here seem to show a disengagement rate of around one every two to five miles. Even at only 30 mph, that is an intervention every four to ten minutes. If your car is going to hit something or steer into traffic or whatnot every four minutes, you will pay very close attention.

I think the danger will come when FSD Beta becomes more competent. If you only have one intervention every 30 to 60 minutes it becomes much more difficult to keep up that level of concentration.
Yep, agree totally .. there is a "danger valley" between when the car is so unpredictable that drivers pay attention (where we are now) and when the car is safe enough that the driver can be less attentive. That middle ground worries me .. when the driver thinks they can start relaxing and then POW, the car does something bad.
 
Between the hundreds of people laid off in the Autopilot department 2 weeks ago/that office shutting down, and now this..

Not sure the progress will INCREASE at a fast pace anytime soon...
 
As a qualified driver you are allowed to teach a learner driver (without a dual-controlled car). Hands-off driving is implied in a license.
That's quite a stretch. Self-driving cars are nothing like humans. You're not even training them, you're testing them.
My point remains, you will end up with a biased NN. And how do you know they did NOT already do closed testing with professional drivers? Do you have access to the internal testing schedule within Tesla for the past few years? In fact, it's pretty clear from the few things that leaked out that they were testing FSD beta for a considerable time in-house before they opened the testing to anyone.

So really what you are saying is you think FSD beta in its current form isn't ready for this more open phase of testing. Upon what do you base this assertion? Certainly not the safety of the car, since, as I've already noted, the car is doing rather well (or, more precisely, the combination of car and beta test driver).
Has anyone ever been able to explain why having such a large test fleet is beneficial? Obviously it helps you find bugs faster, but it's not like they're searching for the last few bugs. You can collect all the data to train the perception neural nets (the only neural nets that Tesla has confirmed they're using) without FSD Beta enabled. Has the rate of progress increased since they expanded from 2k testers to 100k testers? Even if Tesla is somehow using disengagement data to train some NN, it seems like there's a lot of garbage being generated. People often don't disengage even when the car is driving erratically, and many people use FSD Beta only where it "works well", creating the "bias" that you're concerned about.
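To make that bias concern concrete, here's a toy simulation (every number here is invented for illustration, not from Tesla): if testers mostly engage the beta where it already works well, the fleet's data under-samples exactly the hard scenarios.

```python
import random

random.seed(0)

SCENARIOS = ["easy_highway", "suburban", "dense_urban"]
TRUE_MIX = [0.3, 0.4, 0.3]            # share of real-world driving (made up)
ENGAGE_RATE = {"easy_highway": 0.9,   # how often a tester turns FSD Beta on
               "suburban": 0.6,       # in each scenario (also made up)
               "dense_urban": 0.2}

collected = {s: 0 for s in SCENARIOS}
for _ in range(100_000):
    s = random.choices(SCENARIOS, weights=TRUE_MIX)[0]
    if random.random() < ENGAGE_RATE[s]:
        collected[s] += 1

total = sum(collected.values())
for s in SCENARIOS:
    print(f"{s}: {collected[s] / total:.0%} of collected data")
# dense_urban ends up ~10% of the collected data despite being 30% of real driving.
```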

I see people asserting that the number of testers is a huge advantage but I haven't ever seen Tesla say that.
 
That's quite a stretch. Self-driving cars are nothing like humans. You're not even training them, you're testing them.

Has anyone ever been able to explain why having such a large test fleet is beneficial? Obviously it helps you find bugs faster, but it's not like they're searching for the last few bugs. You can collect all the data to train the perception neural nets (the only neural nets that Tesla has confirmed they're using) without FSD Beta enabled. Has the rate of progress increased since they expanded from 2k testers to 100k testers? Even if Tesla is somehow using disengagement data to train some NN, it seems like there's a lot of garbage being generated. People often don't disengage even when the car is driving erratically, and many people use FSD Beta only where it "works well", creating the "bias" that you're concerned about.

I see people asserting that the number of testers is a huge advantage but I haven't ever seen Tesla say that.
You can't test human drivers without qualifications; you are supervising them.
 
I think the danger will come when FSD Beta becomes more competent. If you only have one intervention every 30 to 60 minutes it becomes much more difficult to keep up that level of concentration.
I know, right? Had the same problem with one of our kids navigating life, all smooth, steady as she goes, start relaxing, and then bam, right off the road into the postmodern metaphysical abyss.......
 
That's quite a stretch. Self-driving cars are nothing like humans. You're not even training them, you're testing them.

Has anyone ever been able to explain why having such a large test fleet is beneficial? Obviously it helps you find bugs faster, but it's not like they're searching for the last few bugs. You can collect all the data to train the perception neural nets (the only neural nets that Tesla has confirmed they're using) without FSD Beta enabled. Has the rate of progress increased since they expanded from 2k testers to 100k testers? Even if Tesla is somehow using disengagement data to train some NN, it seems like there's a lot of garbage being generated. People often don't disengage even when the car is driving erratically, and many people use FSD Beta only where it "works well", creating the "bias" that you're concerned about.

I see people asserting that the number of testers is a huge advantage but I haven't ever seen Tesla say that.
Right, and it's not like the defects we are seeing videos of are bizarre edge cases you find after the millionth mile...

I am more and more convinced the car is sensor suite & compute limited, beyond just the NN learning rate.

I find it funny that Tesla is moving in the direction of fewer sensors (1 -> 0 radar) while some competitors now have 5 radars, a bunch of cameras, ultrasonics.. not to mention some trying LIDAR.

It just doesn't seem like a recipe for Tesla success so much as a cost cut move.

Luxury (and other) cars have had front & rear cross-traffic alerts, blind-spot warnings, etc. using multiple radars for years, while with Tesla we are supposed to trust that they can divine all this purely from cameras.. when observed reality in the field shows many gaps.
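For what it's worth, the cross-traffic alert mentioned here is conceptually simple once a radar (or vision) track exists. A rough illustrative sketch, with made-up thresholds and not any manufacturer's actual logic:

```python
from dataclasses import dataclass

@dataclass
class Track:
    lateral_offset_m: float  # distance of the target from our projected path
    closing_speed_ms: float  # speed toward our path; > 0 means approaching

def cross_traffic_alert(track: Track, horizon_s: float = 2.5) -> bool:
    """Warn if the target will reach our path within the warning horizon."""
    if track.closing_speed_ms <= 0:
        return False  # stationary or moving away: no alert
    return track.lateral_offset_m / track.closing_speed_ms < horizon_s

# A car 10 m away closing at 6 m/s reaches our path in ~1.7 s -> alert.
print(cross_traffic_alert(Track(10.0, 6.0)))  # True
```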
 
Like many have said - people are not testing or training FSD by using it.

Having FSD released is based on marketing. Tesla is so behind on their FSD promises that they HAVE to have something out in public. It's a crappy L2 system with tons of holes that don't require millions of real-world miles to identify. Tesla knows exactly where it fails and where it's limited. The reason it's "released" into "limited beta" is that it allows Tesla to say they are "shipping" it.

If Tesla really wants their argument to be that they need all these people driving it to make it better, they need to start exposing how they are collecting data from these cars, how that data is analyzed, and why it cannot be collected passively from cars that are not running FSD closed-loop and increasing risk.
 
Only Tesla (as far as I’m aware) has a continued policy of monitoring a driver’s competency....perhaps other manufacturers will catch on?
Tesla's definition of "competency" is, weirdly, tied to how closely you drive the way Autopilot would.
But oddly they don't check if you are speeding. Or if you are driving in the wrong lane. Or running red lights/stop signs. All things they can easily measure, but don't.
Tesla's monitoring has nothing to do with driver competency, and they only apply this measurement to FSD access, not "in general." So what do you want other manufacturers to do? Do you want your car not to start if the manufacturer decides, with their unaudited algorithms, that you are not competent? Or just not allow you to turn on the heated seats as a punishment?
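For example, flagging speeding from data the car already has (GPS speed plus map speed limits) would be straightforward. A minimal, purely hypothetical sketch - not Tesla's actual Safety Score, which, per Tesla's published description, weighs factors like hard braking and forced Autopilot disengagements instead:

```python
from typing import Iterable, Tuple

def speeding_fraction(samples: Iterable[Tuple[float, float]],
                      tolerance_mph: float = 5.0) -> float:
    """Fraction of (speed, posted limit) samples that exceed the limit
    by more than tolerance_mph. Purely illustrative."""
    samples = list(samples)
    if not samples:
        return 0.0
    over = sum(1 for speed, limit in samples if speed > limit + tolerance_mph)
    return over / len(samples)

# Made-up telemetry: (measured speed, posted limit) in mph.
telemetry = [(34, 30), (42, 30), (28, 30), (44, 45), (51, 45)]
print(f"{speeding_fraction(telemetry):.0%} of samples speeding")  # 40%
```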