NHTSA Close to forcing recall of FSD after more crashes

I'm all for safety, but we're getting a little carried away, at the expense of reason and personal responsibility. For example, having to put "Caution - Contents Hot" on a cup of coffee...
The McDonald's "hot coffee" lawsuit is much maligned, but also much misunderstood....

 
Generally I agree, but I'm not sure why it's the car's responsibility to make sure the driver is paying attention. We've had plain-old cruise control for decades, and no one was saying the car should ensure the driver was alert at all times.
Because if you fell asleep, you still ran off the road and/or hit something before the car slowed down very much.
 
The McDonald's "hot coffee" lawsuit is much maligned, but also much misunderstood....

I didn't mention McDonald's. Most restaurants near me have the warning on their cups. It's as if society is absolving itself of common sense. We're taught by our parents that knives are sharp and stoves are hot. And we're taught how to drive by adults and handle large, dangerous machines.

But instead of taking responsibility and accepting a potential manslaughter charge because someone was on their phone, or turning around to deal with an unruly passenger in the back seat, taking their eyes off the road, they try to obfuscate by claiming the technology was to blame. I get it. I wouldn't want to be arrested and charged either. Easy to say "but autopilot was engaged, it wasn't my fault".
 
The hell?!? 🤦

Cue the safety warnings in cars about not having sex while in them.
Geico appealed.

Geico lost the appeal.
 
So, is it the technology's fault, or the driver's fault? If the technology has a flaw that causes accidents the driver can't avoid, then I'm behind the NHTSA investigation and a recall. For example: AP is engaged and heading directly at something, like a crashed vehicle or emergency vehicle, and the driver attempts to take over to avoid it but AP refuses to disengage... That's a serious problem.

If this is just stupid humans not paying attention, then how is it different from any stupid human driving? Countless videos of idiot humans texting and driving, plowing into other cars. Hell, I personally got rear-ended years ago at a red light by an idiot claiming he dropped jelly beans and bent over to pick them up and didn't see the red light or me. Why should Tesla be punished for those idiots?
There's only a certain amount of blame you can put on the operators before it starts to become a design flaw, if many operators are making the same mistakes.

Let's take another example: RADAR detectors. The first RADAR detectors were not RADAR detectors at all, but merely RF energy detectors. If they detected any energy in the RF band at certain frequencies, they'd alert. But some RADAR frequencies are not exclusive to police RADAR, and are used by door openers and, later, blind spot monitoring systems, especially in K band. So a lot of RADAR detectors started giving drivers a ton of false alarms. When you have a situation where 499 out of 500 K band alerts don't have anything to do with police RADAR, it's hard to blame the driver for ignoring the 500th. The solution is to make a smarter detector that can differentiate between police RADAR and blind spot monitoring systems and door openers.

Likewise, if there's a nuclear power plant with an alarm that keeps giving false alarms 5 times a day, can you really blame the plant operators if they ignore it for the 5000th time and the plant has an incident? I'd blame the designers for not giving them an alarm that only goes off when there's actually something that needs attention. The way our brains work, if we're not engaged with an activity we start to tune out and think about other stuff, and if we're constantly being pestered with false alarms, we start to ignore them.
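
To put rough numbers on that alarm-fatigue point, here's a minimal back-of-the-envelope sketch in Python. The 1-in-500 ratio is just the hypothetical figure from the paragraph above, and the alerts-per-week figure is a made-up assumption, not measured data:

```python
# Hypothetical numbers, only to illustrate the base-rate problem with
# noisy alerts; nothing here is measured radar-detector data.
total_k_band_alerts = 500   # alerts a driver sees over some period
real_police_alerts = 1      # how many of those were actual police RADAR

# Chance that any single alert is worth reacting to.
p_real = real_police_alerts / total_k_band_alerts
print(f"P(alert is real) = {p_real:.1%}")        # 0.2%

# Chance an entire week of alerts is pure noise, assuming
# (hypothetically) 10 alerts per day over 5 commuting days.
alerts_per_week = 10 * 5
p_all_false = (1 - p_real) ** alerts_per_week
print(f"P(a whole week of alerts is all noise) = {p_all_false:.0%}")  # ~90%
```

With numbers like that, ignoring the alert becomes the statistically "rational" habit, which is exactly the problem the designer, not the operator, has to fix.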
 
I didn't mention McDonald's. Most restaurants near me have the warning on their cups. It's as if society is absolving itself of common sense. We're taught by our parents that knives are sharp and stoves are hot. And we're taught how to drive by adults and handle large, dangerous machines.
You did mention "Caution - contents hot" notices on coffee cups, which are probably a result of the McDonald's lawsuit, in a CYA attempt.
But instead of taking responsibility and accepting a potential manslaughter charge because someone was on their phone, or turning around to deal with an unruly passenger in the back seat, taking their eyes off the road, they try to obfuscate by claiming the technology was to blame. I get it. I wouldn't want to be arrested and charged either. Easy to say "but autopilot was engaged, it wasn't my fault".
I'd rather have technology that doesn't kill me because somebody else doesn't use it "properly."
 
This is a false dichotomy; there's plenty of room between those two positions. Even very smart people do stupid things from time to time. What's more, inattentiveness isn't necessarily stupid; it can be caused by fatigue, distractions, etc. Blaming the "stupid human" has historically been used by opponents of safety features that have saved countless lives over the years. Just in the automotive realm, automakers fought tooth and nail against improvements like air bags and even seat belts, insisting that accidents were caused by "stupid humans," and that safe drivers (which everybody thinks means them) don't die in car crashes. Slowly but surely, though, the introduction of these safety features, as well as laws requiring their use, have dramatically improved automotive safety over the decades.

Furthermore, and very importantly, the people at risk from "stupid humans" doing stupid things with Autopilot aren't just the stupid people themselves -- it's other people. If I'm stopped at the side of the road because my car has broken down and a "stupid human" is driving a Tesla with Autopilot active, but is not paying sufficient attention, and if that Tesla slams into me or my car, then I'm injured, and possibly killed, because of somebody else's error. Government regulation to minimize such events is perfectly warranted, just as it is in other realms -- we require doctors to have medical degrees before they're permitted to operate on patients or prescribe medicines; we have electrical codes establishing safe practices in home wiring to prevent homes from burning down because of bad wiring; and so on. Autopilot-level driver assistance systems are still relatively new and we have little hard safety data on them, so regulations have lagged the development of the technology. It's time that the relevant government agencies at least begin to look into drafting regulations.

What I'm curious about is whether these incidents are simply a result of the statistical likelihood of an accident given a large enough sample size, where things like how well it works, the install base, the miles traveled under AP, etc. all contribute to the number of incidents.
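
As a rough illustration of that point, here's a tiny sketch; every number in it is a made-up placeholder chosen only to show how fleet exposure alone drives the expected incident count, not a claim about Tesla's actual rates:

```python
# Placeholder values, not real data: the point is the arithmetic, i.e.
# expected incidents scale with miles driven even at a tiny per-mile rate.
ap_miles_per_year = 3_000_000_000        # hypothetical fleet miles on AP per year
incidents_per_million_miles = 0.005      # hypothetical rate for this crash type

expected_incidents = (ap_miles_per_year / 1_000_000) * incidents_per_million_miles
print(f"Expected incidents per year at that exposure: {expected_incidents:.0f}")  # ~15
```

So a double-digit incident count, by itself, doesn't tell you whether the system is unusually dangerous; you'd need the per-mile rate and a human-driver baseline to compare against.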

A lot of these accidents occurred with AP1 vehicles, and that radar + single camera + Mobileye (or some other ADAS provider) setup wasn't really all that different from any other L2 manufacturer's. A lot of these accidents occurred on divided highways where the HW was designed to work. It's also been revealed that quite a few of these cases were the result of purposeful misuse of the technology, things like using the technology as an excuse to drive while under the influence.

Now I'm not saying I wouldn't investigate it the way the NHTSA is, but just that there are a lot of details that need to be accounted for.

I'm not also saying this to defend Tesla.

I think it was a massive mistake not to have proper driver monitoring with the release of HW2 or above. Sure, I can understand making the torque-sensor mistake back in 2015, but not in 2017+. Elon constantly underestimates how lousy humans are, and that idiots are all around us ruining everything they use.

I also feel like the way L2 is regulated is wrong. Right now all the liability really falls on the driver, but L2 systems are growing in capability. I feel like there is a point where the driver is so far removed that the liability has to be shared. This will push manufacturers to deliver better systems, and more clarity on the capabilities of those systems.

Europe took the approach of limiting L2 systems, which I also don't agree with, as that just relies more on the human driver. That might work fine for good European drivers, but US drivers are crappy drivers. The last thing we need is more US drivers doing the driving. :)
 
Tesla's cabin monitoring camera and system is far behind what other manufacturers such as Ford and GM are using. How they fix it without a major upgrade and recall I don't know... Steering wheel rotational sensing is clearly insufficient. I'm with the NHTSA on this one.
To be with them on this one, you'd need to know what position they've actually taken.

The NHTSA hasn't issued any ruling.

As to the torque sensor, I've been strongly opposed to it for the job of driver monitoring since 2015. I'm with the Europeans, as they're requiring driver monitoring for new cars starting in 2024.

I don't believe the NHTSA will force Tesla to do any major recall.

Instead I think it will be more like this:
  • Tesla will introduce improved driver monitoring in new vehicles. Even the new S/X already have better IR lighting than what came before.
  • Tesla will continue to improve emergency vehicle detection, and slowing down for them, on HW3 vehicles.
  • Tesla won't be forced to recall older AP1 Models that simply don't have the means to be upgraded to anything newer that would prevent this.
In summary, I don't expect much of anything to come from the NHTSA ruling. There are just too many safety benefits of AP despite its shortcomings. The low-hanging fruit is all that's going to be required.
 
This is a false dichotomy; there's plenty of room between those two positions. Even very smart people do stupid things from time to time. What's more, inattentiveness isn't necessarily stupid; it can be caused by fatigue, distractions, etc. Blaming the "stupid human" has historically been used by opponents of safety features that have saved countless lives over the years. Just in the automotive realm, automakers fought tooth and nail against improvements like air bags and even seat belts, insisting that accidents were caused by "stupid humans," and that safe drivers (which everybody thinks means them) don't die in car crashes. Slowly but surely, though, the introduction of these safety features, as well as laws requiring their use, have dramatically improved automotive safety over the decades.

Furthermore, and very importantly, the people at risk from "stupid humans" doing stupid things with Autopilot aren't just the stupid people themselves -- it's other people. If I'm stopped at the side of the road because my car has broken down and a "stupid human" is driving a Tesla with Autopilot active, but is not paying sufficient attention, and if that Tesla slams into me or my car, then I'm injured, and possibly killed, because of somebody else's error. Government regulation to minimize such events is perfectly warranted, just as it is in other realms -- we require doctors to have medical degrees before they're permitted to operate on patients or prescribe medicines; we have electrical codes establishing safe practices in home wiring to prevent homes from burning down because of bad wiring; and so on. Autopilot-level driver assistance systems are still relatively new and we have little hard safety data on them, so regulations have lagged the development of the technology. It's time that the relevant government agencies at least begin to look into drafting regulations.
I wonder how many times AP has hit an emergency vehicle compared to manual human drivers? You say we need government regulations of AP to stop this stuff happening .. what about the humans that do the same thing? What if, statistically, humans do this more often than AP or AP-like systems? Are you proposing we ban/cripple a system that is SAFER than a human and replace it with (less safe) human drivers?

I'm not saying we ARE at this point, but there is clearly a big blind spot here .. we are so used to humans crashing that we're simply numb to the issue. But any autonomous car mistake is an instant hysterical headline.

Remember when air bags were new? There were a few nasty and tragic accidents with them. People started shouting that they were a menace, some even disconnecting them in their cars .. totally missing that for every nasty accident that caused an injury there were hundreds where the air bags SAVED people.

And government regulation is often a complete mess anyway. Ever wondered why school buses don't have seat belts? Go read up on that!
 
Remember when air bags were new? There were a few nasty and tragic accidents with them. People started shouting that they were a menace, some even disconnecting them in their cars .. totally missing that for every nasty accident that caused an injury there were hundreds where the air bags SAVED people.
I remember that they changed the regulations and implemented dual stage airbags so that they would stop killing people.
I wonder how many times AP has hit an emergency vehicle compared to manual human drivers?
I assume that will be in the report.
Not for New Zealand (or China).

Anyway - the comparison is silly to you only because it exposes the hypocrisy of Tesla haters.
I wasn’t talking about only financial cost. China is the only COVID zero country left and it is very costly to maintain. If we could control every human’s behavior it would be easy to achieve zero COVID* and we could also eliminate almost all collisions with or without Autopilot.

*not really because it’s in many animals now too
 
We have the interior camera monitoring my gaze in one car (with FSD beta). I’d say the system works “okay”. All of these systems can be defeated, because… humans. 😂 I’d say go ahead and turn this on for all Teslas with cabin cameras (older S & X don’t), but that’s still not going to guarantee anything. Maybe it’ll help/improve attentiveness? Depends on how intent people are on defeating such babysitters: It's Not Just Tesla: All Other Driver-Assist Systems Work without Drivers, Too
I had the beta, and I'd say the cabin camera monitoring works pretty well, surprisingly well. When I had the beta, even the "old" highway NoA code was using the cabin cam. If I fiddled with music or took my eyes off the road, it would warn me. Worked pretty well.
 
That's at least part of the issue, though. Tesla's current approach to ensuring that drivers actually pay attention is easily defeated by hanging a weight on the steering wheel. We've all seen the stories (some well-documented, with video) of people who take naps on the highway while in the driver's seat of a Tesla, and at least some of the incidents that the NHTSA is investigating are probably caused by such inattention. Some people do actively attempt to defeat the system. Others become too reliant on it, letting their attention lapse when it shouldn't.

The problem is, as soon as you make something idiot-proof, nature creates a better idiot. Every minute spent trying to make an ADAS system more idiot-proof is a minute that wasn't spent making it better at driving.

Yes, it sucks that people who don't use Autopilot properly may (if things go very badly) cause it to be taken away from those who do use it properly. But do you know what sucks worse? Having your life taken away because somebody didn't use Autopilot properly. IMHO, the NHTSA's investigation is well-justified. Some other automakers have more sophisticated driver-monitoring technologies than what Tesla uses.

That's not actually a legitimate complaint. Some other automakers have less sophisticated lane-keeping than Tesla, too. Before they recall something that can only be dangerous when deliberately abused, they should recall all the Nissan systems that disengage at every effing cross street, which can be quite dangerous even when not abused. The mere fact that someone else does something better is not grounds for a recall. It has to either be substantially dangerous (low double-digit accidents out of millions of cars on the road does not qualify, particularly when balanced against the much larger number of accidents prevented) or in violation of some law or other established standard (which AFAIK, this isn't).

One potential piece of good news is that Tesla has developed a second attention-verifying technology, which uses the interior camera to monitor the driver. This is already being used, although if I understand correctly it's only being used in the latest FSD beta builds.

And is an inherent privacy concern. People do other things in their car besides driving. What do you do for the folks who tape over their interior cameras? Disable ADAS? That's fine for an optional feature like FSD Beta. It's not fine for something that people paid many, many thousands of dollars for.

There are two issues here:

First, AP was running into stopped vehicles on the road. This should not happen, of course. AP (TACC) is supposed to react to other vehicles. If it does not provide more than one second of corrective action when approaching a stopped car, then it needs work.

This. And there's every reason to believe that the technology has changed pretty significantly since most of those accidents occurred.

Some of them were on AP1. These are utterly irrelevant, given that AP1 is a technological dead end. They're also not readily fixable short of forcing Tesla to gut those systems and replace them with a newer setup, e.g. taking the HW2.5 hardware pulled from FSD upgrades, adding custom front-camera-only firmware, and retrofitting them with a custom wiring harness.

The rest are based on ancient versions of the software stack, and there's not necessarily any reason to believe that the same accidents would have occurred on the current stack. And when FSD Beta stops being beta, there's even less reason to believe that it would have happened. And every hour of engineering effort wasted on making the current stack better to prevent that is one hour that wasn't spent making the newer, more capable stack ready to be deployed for highway driving sooner, which will fix a lot more problems than just these.
 
AP (and FSD beta) have always been L2. Even if you say it's just a legal detail, it's a detail intended to separate L2 from L3 systems. It hasn't been advertised as anything else. AP is the term used for the current capabilities, not FSD. As far as I can tell (and many have questioned this), I don't see Tesla ever selling a current feature/system as FSD, only AP. The future-promise package of "FSD Capability" includes more current features, but they've never been called FSD. FSD beta is the closest thing we currently have to FSD, but even that's not for sale. It has only been called a "limited early access beta", but that's not even what this PE (and now EA) is about.
I'm confused. My Tesla, like all others, came with Autopilot. Currently, you can choose to pay an extra $12,000 for "Full Self-Driving Capability" (their exact wording). It says nothing about beta. When I bought mine, I was told I could add FSD for $10K extra, but was told it wasn't nearly ready for prime time and I probably shouldn't get it, which I had already decided not to do.

I'm not sure what you are referring to, but it has always been called FSD, and the beta started later.
 