NHTSA Close to forcing recall of FSD after more crashes

What you're describing would 100% result in the technology being totally disabled, and likely some other legal action being taken against Tesla; that's not what the NHTSA wants here.

It should be noted that this original investigation into Autopilot crashing into first responders was likely a reaction to Tesla being sued by those first responders -- there are a few of these lawsuits in the system right now from police officers and others seeking compensation for damages/injuries sustained in crashes where drivers were not paying attention and Autopilot was active.

It's not "punishment" for Tesla; this is legitimately the safety regulator's job: to put a leash on the idiots and reasonably protect the broader public.
One of my maxims in IT is: "I can't fix stupid". I appreciate your use of the word "reasonable". Until we get to L4 and L5, there will be accidents caused by stupid drivers.
 
I don't have FSD, and I don't even use the basic AP that much because of over-attentiveness - I tend to exert too much control on the steering and AP cancels. So I'm not really fully confident in the technology's ability to take full control.

Taking that into account, I wonder why people don't disable AP as soon as they see flashing emergency lights ahead. I would do that, and slow down, until I got close enough to see what's going on. Also, it's always possible someone might be signalling me to stop or detour, or warning me about something ahead. I can't expect AP to figure all this out. So couldn't FSD detect emergency lights ahead, signal the driver, etc.?

But we also need to compare this against the accidents caused by rubberneckers and the like who crash without any automation at all. It's pretty hard to legislate driver stupidity.

However, on the other side, I think Tesla is at fault for their (Elon's) marketing of the abilities, making it sound like it has more capabilities than it really does. Just the nomenclature, like "Full Self-Driving" (which it isn't) and "Autopilot," has fooled some fools. With lawyers controlling the language used, many people think the warnings are just CYA things to ignore, like so many other warnings. Perhaps the NHTSA should force Tesla to do an all-out campaign to educate people and try to undo their past overly-optimistic suggestions. Or maybe the DMV should have a special test for drivers of Teslas and the like, similar to motorcycle licenses.
 
  • Like
Reactions: drtimhill and Dewg
…However, on the other side, I think Tesla is at fault for their (Elon's) marketing of the abilities, making it sound like it has more capabilities than it really does. Just the nomenclature, like "Full Self-Driving" (which it isn't) and "Autopilot," has fooled some..
And herein is a big portion of the problem…

If the actual tweets and statements by Elon and Tesla matched what is worded in the fine-print warnings that come on the screen when you switch on the FSD capability/functionality in the car? I think there would be fewer issues. But as it stands, between the name “FULL SELF DRIVING” and the “it’s gonna drive itself from NYC to LA in 2017” and “robotaxi will be fully functional in 2021” type of statements? It’s easy to understand the issues facing Tesla today with AP/FSD..

Stop over promising and under delivering…start there perhaps.
 
But that's not the issue .. the contract isn't "pay attention WHEN AP aborts", it's "pay attention ALL THE TIME".
That's what I don't really get - why did the drivers leave AP on in an emergency situation in the first place?

To me, having AP just abort seems risky. Even if it aborts 5 or more seconds before a crash, is that enough time for the driver to realize what's happening and take over - considering the apparent lack of attention up to that point? If the driver is truly zoned out, aborting AP might be more dangerous than keeping control and trying to avoid or minimize the problem. I think it should start warnings at the first sign of trouble, and get more aggressive with warnings if the driver doesn't take control. At that point, the safest thing might be to slow down a lot, because the driver could be asleep or worse.
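To make that escalation idea concrete, here's a rough sketch in Python. Everything in it -- the thresholds, the state names, the overall structure -- is made up for illustration; it is not anything Tesla actually ships.

```python
# Purely illustrative sketch of the warn-then-slow-down escalation described above.
# All thresholds and action names are hypothetical.

def escalate(seconds_ignored: float, driver_responded: bool) -> str:
    """Pick an action based on how long the driver has ignored the alert."""
    if driver_responded:
        return "resume_normal_operation"
    if seconds_ignored < 2:
        return "visual_warning"          # first sign of trouble: flash a message
    if seconds_ignored < 5:
        return "audible_warning"         # no response yet: loud chime
    if seconds_ignored < 8:
        return "reduce_speed_gradually"  # still nothing: start slowing the car
    return "stop_with_hazards"           # assume the driver is asleep or worse

if __name__ == "__main__":
    for t in (1, 3, 6, 10):
        print(t, escalate(t, driver_responded=False))
```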
 
  • Like
Reactions: Phlier and DeepFrz
So, is it the technology's fault, or the driver's fault? If the technology has a flaw that's causing accidents without the driver's ability to avoid it, then I'm behind the NHTSA investigation and a recall. For example: AP is engaged and heading directly at something, like a crashed vehicle or emergency vehicle, and the driver attempts to take over to avoid it but AP refuses to disengage... That's a serious problem.

If this is just stupid humans, not paying attention, then how is it different than any stupid human driving? Countless videos of idiot humans texting and driving, plowing into other cars. Hell, I personally got rear ended years ago at a red light by an idiot claiming he dropped jelly beans and bent over to pick them up and didn't see the red light or me. Why should Tesla be punished for those idiots?
There are two issues here:

First is that AP was running into stopped vehicles on the road. This should not happen, of course. AP (TACC) is supposed to react to other vehicles. If it does not provide more than one second of corrective action when approaching a stopped car, then it needs work.

Second is the issue of inattentive drivers, who should have taken control long before the car crashed. The issue is not that there are drivers who will do dumb things, like watch videos on their phones while driving. The issue is whether the car has suitable safety measures to ensure attentiveness. Modern cars need safety measures that offset the tendency of drivers to pay less attention to the road when using features like lane keeping.

My suspicion is that Tesla will ultimately need to require head/eye tracking with all ADAS functions, not just FSD beta. That should be fairly straightforward. The problem is going to be how to deal with those models that have no interior camera.
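As a thought experiment, the gating could be as simple as the sketch below (Python, minimal on purpose). The names, the attention score, and the threshold are all my own assumptions, not any real Tesla interface.

```python
# Rough sketch of gating ADAS engagement on camera-based attention monitoring.
# Names, thresholds, and signals are hypothetical, purely to illustrate the idea.

from dataclasses import dataclass

@dataclass
class DriverState:
    has_cabin_camera: bool
    attention_score: float       # 0.0 (eyes off road) .. 1.0 (eyes on road), from head/eye tracking
    wheel_torque_detected: bool  # fallback signal for cars without an interior camera

def adas_may_stay_engaged(state: DriverState, min_attention: float = 0.6) -> bool:
    """Allow the driver-assist feature to remain active only if attentiveness is confirmed."""
    if state.has_cabin_camera:
        return state.attention_score >= min_attention
    # Older models without a camera would have to fall back on steering-wheel torque nags.
    return state.wheel_torque_detected

print(adas_may_stay_engaged(DriverState(True, 0.3, True)))   # False: camera says eyes off road
print(adas_may_stay_engaged(DriverState(False, 0.0, True)))  # True: torque fallback only
```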
 
The other thing. I don’t care what any manufacturer says…unless it’s explicit in writing that they will accept full responsibility for any collision (caused by their system)? Then I’m not fully trusting it. Makes no sense to fully trust it. Elon can talk all the mess he wants, but until they are confident they can accept responsibility? I’m going to be holding myself accountable for anything that happens when I’m behind the wheel regardless of what mode it’s in.

Hell, the latest version of FSD beta just ran up on a curb and damaged Black Tesla’s rim on his channel. In broad daylight.

This technology is NOWHERE close to offering a “robotaxi” service UNLESS it’s in a TIGHT very limited geofenced place. And it’s NOWHERE REMOTELY close to driving across the country with no one in the car. And NO ONE is going to tell me that Elon believed otherwise.
 
  • Like
Reactions: BrerBear
The NHTSA mission statement:
Our mission is to save lives, prevent injuries, and reduce economic costs due to road traffic crashes, through education, research, safety standards, and enforcement.
Note that they don't make an exception for "stupid" drivers. The reason for this is that the vast majority of crashes are caused by "stupid" drivers.
 
AP (and FSD beta) have always been L2. Even if you say it’s just a legal detail, it’s a detail intended to separate L2 from L3 systems. It hasn’t been advertised as anything else. AP is the term used for the current capabilities, not FSD. As far as I can tell (and many have questioned this), I don’t see Tesla ever selling a current feature/system as FSD, only AP. The future promise package of “FSD Capability” includes more current features, but they’ve never been called FSD. FSD beta is the closest thing we currently have to FSD, but even that’s not for sale. It has only been called a “limited early access beta”, but that’s not even what this PE (and now EA) is about.

This leads me again to the deeper root issue: we as a society don’t have the language, and therefore we don’t have the understanding, for what exists between a car with “cruise control + lane keep assist” and “Full Self Driving”. The most advanced feature suite currently included in AP for Tesla owners is much more advanced than “cruise control + lane keep assist”, but there’s a large gap of abilities between that and fully self driving. Heck, there’s a large gap of what can exist between L3 & L4! We need commonly understood language for this in-between phase, and we just don’t have it. Until then, we will have misunderstandings and lawsuits.
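For anyone following along, here's a quick reference for the SAE levels everyone keeps citing. These are my own one-line paraphrases, not the standard's exact wording.

```python
# Paraphrased summary of the SAE J3016 automation levels referenced in this thread.

SAE_LEVELS = {
    0: "No automation: warnings at most; the human does all the driving",
    1: "Driver assistance: steering OR speed support, driver does the rest",
    2: "Partial automation: steering AND speed support, driver must supervise constantly",
    3: "Conditional automation: system drives, but the human must take over when asked",
    4: "High automation: no human fallback needed, but only within a limited domain",
    5: "Full automation: drives anywhere a human could, no driver required",
}

for level, summary in SAE_LEVELS.items():
    print(f"L{level}: {summary}")
```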
 
First is that AP was running into stopped vehicles on the road. This should not happen, of course. AP (TACC) is supposed to react to other vehicles. If it does not provide more than one second of corrective action when approaching a stopped car, then it needs work.

Second is the issue of inattentive drivers, who should have taken control long before the car crashed. The issue is not that there are drivers who will do dumb things, like watch videos on their phones while driving. The issue is whether the car has suitable safety measures to ensure attentiveness. Modern cars need safety measures that offset the tendency of drivers to pay less attention to the road when using features like lane keeping.

My suspicion is that Tesla will ultimately need to require head/eye tracking with all ADAS functions, not just FSD beta. That should be fairly straightforward. The problem is going to be how to deal with those models that have no interior camera.
Generally I agree, but I'm not sure why it's the car's responsibility to make sure the driver is paying attention. We've had plain-old cruise control for decades, and no one was saying the car should ensure the driver was alert at all times.

The problem here is that it's easy to conflate driver assist with driver substitution, and also capability with responsibility. Sure, we have the technology to make sure drivers are paying attention to the road .. that's a useful and valuable capability to assist in improving road safety. But does it shift the responsibility to pay attention from the driver to the car? Can the driver say "the car was SUPPOSED to remind me when something bad was about to happen"? And even if he/she does say that, what does the law say?

What if we developed a device to detect drunk drivers? Can a driver who was pulled over for causing an accident while drunk claim it was the car's fault for letting him drive impaired? Can a burglar who steals stuff from your unlocked car claim it was your fault for leaving the car unlocked? Or the car maker's fault for not supplying strong enough locks?

My feeling is the answer to these questions is all the same .. it's the perpetrator's responsibility, not the technology's for allowing it to happen. So why are things like AP/FSD being treated differently? Partly, no doubt, because of some of the publicity and hype around features like AP/FSD, leading drivers to over-estimate the capabilities of the technology. But that's just another excuse .. the manual is VERY clear on what the car can and cannot do .. it even EXPLICITLY says that the car will not brake for parked or immobile vehicles. If you buy something that's potentially lethal (and every car is that), don't read about how to use it safely, and end up killing someone, you can't plead ignorance as an excuse. You acted irresponsibly, plain and simple. After all, that's why we have driving tests in the first place!

The NHTSA is charting new territory here, and very murky waters. I'm sure they know it, and I'm sure auto-makers are watching very carefully. Sure, they might think Tesla put their head in a noose, but they all have similar self-driving aspirations, and are being very careful about how they may be impacted by anything coming from the feds.
 
  • Like
Reactions: Phlier
So, is it the technology's fault, or the driver's fault? If the technology has a flaw that's causing accidents without the driver's ability to avoid it, then I'm behind the NHTSA investigation and a recall. For example: AP is engaged and heading directly at something, like a crashed vehicle or emergency vehicle, and the driver attempts to take over to avoid it but AP refuses to disengage... That's a serious problem.

If this is just stupid humans, not paying attention, then how is it different than any stupid human driving? Countless videos of idiot humans texting and driving, plowing into other cars. Hell, I personally got rear ended years ago at a red light by an idiot claiming he dropped jelly beans and bent over to pick them up and didn't see the red light or me. Why should Tesla be punished for those idiots?
This is a false dichotomy; there's plenty of room between those two positions. Even very smart people do stupid things from time to time. What's more, inattentiveness isn't necessarily stupid; it can be caused by fatigue, distractions, etc. Blaming the "stupid human" has historically been used by opponents of safety features that have saved countless lives over the years. Just in the automotive realm, automakers fought tooth and nail against improvements like air bags and even seat belts, insisting that accidents were caused by "stupid humans," and that safe drivers (which everybody thinks means them) don't die in car crashes. Slowly but surely, though, the introduction of these safety features, as well as laws requiring their use, have dramatically improved automotive safety over the decades.

Furthermore, and very importantly, the people at risk from "stupid humans" doing stupid things with Autopilot aren't just the stupid people themselves -- it's other people. If I'm stopped at the side of the road because my car has broken down and a "stupid human" is driving a Tesla with Autopilot active, but is not paying sufficient attention, and if that Tesla slams into me or my car, then I'm injured, and possibly killed, because of somebody else's error. Government regulation to minimize such events is perfectly warranted, just as it is in other realms -- we require doctors to have medical degrees before they're permitted to operate on patients or prescribe medicines; we have electrical codes establishing safe practices in home wiring to prevent homes from burning down because of bad wiring; and so on. Autopilot-level driver assistance systems are still relatively new and we have little hard safety data on them, so regulations have lagged the development of the technology. It's time that the relevant government agencies at least begin to look into drafting regulations.
 
  • Disagree
  • Like
Reactions: S4WRXTTCS and EVNow
The NHTSA should force Tesla to add dumb cruise control and then we will be able to compare the collision rate of dumb cruise control vs. Autopilot.
I guess Tesla already has the collision rate of TACC vs. TACC and Autosteer. That would be interesting to know.

[attached chart]
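To illustrate why that comparison only means something once it's normalized by exposure, here's a back-of-the-envelope Python sketch with entirely made-up numbers (Tesla hasn't published this breakdown):

```python
# Raw crash counts mean little until normalized by miles driven in each mode.
# All figures below are hypothetical, for illustration only.

modes = {
    # mode: (crashes, miles driven in that mode)
    "dumb cruise control": (120, 400_000_000),
    "TACC only":           (90,  500_000_000),
    "TACC + Autosteer":    (70,  600_000_000),
}

for mode, (crashes, miles) in modes.items():
    rate = crashes / (miles / 1_000_000)  # crashes per million miles
    print(f"{mode}: {rate:.2f} crashes per million miles")
```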
 
I support a COVID ban. This chart needs to be normalized to the number of people susceptible to COVID (everyone), the number of guns in circulation, and the number of cars with Autopilot. :p
Seriously though I’m just curious what a rigorous study will show about the rate and severity of Autopilot crashes. What if it shows that it improves safety overall but not for emergency responders?
A lot of people seem to want to treat Autopilot like we do guns.
 
  • Like
Reactions: SilverString
This is a false dichotomy; there's plenty of room between those two positions. Even very smart people do stupid things from time to time. What's more, inattentiveness isn't necessarily stupid; it can be caused by fatigue, distractions, etc. Blaming the "stupid human" has historically been used by opponents of safety features that have saved countless lives over the years. Just in the automotive realm, automakers fought tooth and nail against improvements like air bags and even seat belts, insisting that accidents were caused by "stupid humans," and that safe drivers (which everybody thinks means them) don't die in car crashes. Slowly but surely, though, the introduction of these safety features, as well as laws requiring their use, have dramatically improved automotive safety over the decades.

Furthermore, and very importantly, the people at risk from "stupid humans" doing stupid things with Autopilot aren't just the stupid people themselves -- it's other people. If I'm stopped at the side of the road because my car has broken down and a "stupid human" is driving a Tesla with Autopilot active, but is not paying sufficient attention, and if that Tesla slams into me or my car, then I'm injured, and possibly killed, because of somebody else's error. Government regulation to minimize such events is perfectly warranted, just as it is in other realms -- we require doctors to have medical degrees before they're permitted to operate on patients or prescribe medicines; we have electrical codes establishing safe practices in home wiring to prevent homes from burning down because of bad wiring; and so on. Autopilot-level driver assistance systems are still relatively new and we have little hard safety data on them, so regulations have lagged the development of the technology. It's time that the relevant government agencies at least begin to look into drafting regulations.
You're correct that the technology is new, and will ultimately save more lives, but it needs to evolve organically. If the technology is unsafe, then let's regulate it. If the technology is new and evolving, but people are abusing it, then we make reasonable changes to safeguard people, such as cameras and wheel nags. But if people still bypass safety features, and again become complacent or distracted, then the technology is in danger of being curtailed, putting more people at risk.

I just don't want to see progress halted because overreactive media and hyper-litigious people who need somewhere to go with their grief lash out at what they perceive is dangerous technology.

I'm all for safety, but we've gotten a little carried away at the expense of reason and personal responsibility. For example, having to put "Caution - Contents Hot" on a cup of coffee...