
Bloomberg Tesla Autopilot Accident Story Today

I can't dispute the proper meaning of the term "autopilot": it does not denote an autonomous controller, even though it is used that way for some drone aircraft. However, the term can be misleading, and it seems to me that at least some of the accidents that occurred while driving in AP mode would not have occurred had this feature been named "Driver Assist." I am a pilot with two long-distance speed records.

The term is absolutely misleading and needs to be changed. It's a great marketing tool, but it's clearly contributing to the false sense of security. Frankly, unless you've completely downed liters of the Tesla Kool-Aid, I'm not sure how you could argue otherwise. Yes, we should all be vigilant on AP at all times, but this is a mass market vehicle with a brand new technology. Different standards should apply, especially for a company like Tesla which prides itself on meeting or exceeding safety benchmarks.
 
To be honest, I don't think it matters what you call it. If you demonstrate a machine that does something "right" 99 out of 100 times but on that other 1 time does something disastrously wrong, some people will be tempted to believe it to be better than it really is, and be lured into a false sense of security.

That's the fundamental human-behavior problem Tesla is struggling with right now. Humans tend to know when they're doing something bad (whether it's texting and driving, consuming too much alcohol / smoking, eating unhealthy foods, or not paying attention when Autopilot is active), but do it anyway, and still feel entitled to play the victim when something bad happens.
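
To put a rough number on the "99 out of 100" intuition: assuming independent trips and a made-up 99% per-trip success rate (illustrative figures, not real Autopilot statistics), the chance of hitting the bad case at least once climbs quickly with use. A minimal sketch:

```python
# Back-of-the-envelope sketch: if a system gets it "right" 99 times out
# of 100, how likely is at least one failure over many trips? The 0.99
# figure and the independence assumption are illustrative, not real data.
def p_at_least_one_failure(per_trip_success: float, trips: int) -> float:
    return 1.0 - per_trip_success ** trips

for trips in (1, 10, 50, 100, 250):
    p = p_at_least_one_failure(0.99, trips)
    print(f"{trips:4d} trips -> {p:5.1%} chance of at least one failure")
```

By 100 trips, a few months of commuting, the odds of at least one failure are roughly 63%, even though each individual trip feels overwhelmingly safe. That gap between per-trip risk and cumulative risk is exactly the false sense of security described above.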
 
Can someone please explain to me why Autopilot drove into the guardrail multiple times and it didn't stop after the first impact???

With all the sensors it has, it should slow down on the bend to avoid hitting the guardrail. And if the crash really is unavoidable, it should at least brake to a stop after the first impact.
 
My initial assumption is that Autopilot likely disengaged in the chaos and that some of the characterization is unreliable witness testimony. Even in the Joshua Brown case, a bystander claimed the car continued to accelerate after the impact, when the NTSB said the car shut off at impact and was coasting.
 
I really don't like Dana Hull now that she reports for Doom-berg
I thought the article was reasonable and balanced. The author also interviewed a Model X owner who credited AP with avoiding a collision and noted the recent incident where a man in acute medical distress credited AP with helping him get to the hospital faster than if he had waited for an ambulance.
 
Can someone please explain to me why Autopilot drove into the guardrail multiple times and it didn't stop after the first impact???
Because AP disengages on impact and because no doubt the driver also disengaged it by belatedly grabbing the wheel and hitting the brakes. You seem to think that AP capabilities are far beyond where Tesla clearly states they are.

What I would like to know is why using three question marks [???] is better than a single question mark. It's not.
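
For readers wondering how several distinct events could all cut AP out, here is a purely illustrative sketch of the disengagement conditions described in this thread (impact, brake press, steering torque). The structure, names, and the torque threshold are assumptions made up for the example, not Tesla's actual implementation:

```python
# Illustrative sketch of the disengagement conditions claimed in this
# thread -- NOT Tesla's actual logic. Names and the torque threshold
# are assumptions invented for this example.
from dataclasses import dataclass

@dataclass
class DriverInputs:
    steering_torque_nm: float  # torque the driver (or a snagged arm) applies
    brake_pressed: bool
    impact_detected: bool      # e.g. a crash/airbag sensor firing

def autopilot_stays_engaged(inputs: DriverInputs,
                            torque_threshold_nm: float = 2.5) -> bool:
    """Return False if any of the claimed disengagement conditions fires."""
    if inputs.impact_detected:
        return False  # system shuts off at the crash; the car then coasts
    if inputs.brake_pressed:
        return False  # any brake press hands control back to the driver
    if abs(inputs.steering_torque_nm) > torque_threshold_nm:
        return False  # grabbing (or snagging) the wheel overrides steering
    return True
```

Under logic like this, a car that "kept going" after the first impact would not be under AP control at all; it would be coasting with nobody steering, which squares with the NTSB coasting description mentioned above.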
 
"The car didn’t stop -- it actually continued to accelerate after the first impact into the guardrail."

Unbelievable. What was the driver doing? Even after impact he doesn't bother to take over and hit the brakes?

I knew a kid in high school who wrecked his car reaching over for something on the passenger side, so this type of crash didn't start with Autopilot. It just makes the news now, whereas you'd never have read about the dumb driver before.
 
Chubb may sue because Chubb is one of the few insurers that offer a stated-value policy (Allstate is another). With such a policy, if the car is totaled, they cut the insured a check for $120K, or whatever the invoice plus aftermarket additions come to.

They are, potentially, more exposed, in other words.

Not saying that the car above is a total loss. Still reading the article.
 
That's a bit of a logical fallacy. There's no promise that AP will behave in the exact same way each time you take a road, regardless of circumstances. Every time you decide not to supervise AP closely (i.e., be prepared to take over immediately), you are betting that AP will behave correctly on that stretch of road.

Maybe the point was that people are human and we tend to see patterns where none may exist, so this could happen to anyone (and maybe we should be more sympathetic). The driver did admit his wrongdoing, but he was obviously shaken by the experience and pointed to the human-psychology factor here: the system gave him a false sense of security.

Considering how many folks here have admitted to reading/checking e-mails, etc while on AP, I am sure many of them are also driving around with a false sense of security. I know if AP is behaving well while I am driving it, I might be tempted to do something rash, like take a few seconds to read a text message when it beeps on my phone. I won't perceive that action as rash and dangerous - but it is.

The question then becomes: is Tesla responsible for mitigating the human-psychology factor that this technology seems to create? For another example that doesn't involve Tesla: is Jeep responsible for mitigating the human-psychology factor of a shifter design that has caused several roll-away accidents? Some may argue no, that those folks driving those Jeeps just need to know to be careful when putting their cars in park.

I think in the end Tesla will have to take some action to address the psychology factor (just like Jeep did). I don't know what form it will take, but I am guessing AP will become more limited than it is now.
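
As one concrete guess at what "more limited" could look like: an escalating hands-off-the-wheel enforcement ladder. The stages and timings below are invented for illustration; they are not Tesla's published behavior:

```python
# Hypothetical enforcement ladder for hands-off-wheel time. Stages and
# timings are invented for illustration; they are not Tesla's behavior.
def nag_action(seconds_hands_off: float) -> str:
    if seconds_hands_off < 15:
        return "none"
    if seconds_hands_off < 30:
        return "visual warning on the dash"
    if seconds_hands_off < 45:
        return "audible chime"
    return "disengage, flashers on, slow to a stop"
```

The design point is that each stage interrupts the "few seconds to read a text" habit before it can settle in, rather than trusting an agree-button warning to keep working weeks later.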
 
I don't have Autopilot, but I find this infuriating.

THEN WHY DID YOU USE A BETA FEATURE WHERE YOU SPECIFICALLY AGREED TO BE A "TEST PILOT"?!?!?

Probably because there's a difference between alpha and beta.

Alpha = test pilot.

Beta = consumer-ready, with non-lethal implications.

People who pilot floaty things or flying things tend to understand what autopilot is and is not. More importantly, it is ingrained in us to never rely upon a single source for navigation, and that in the end, we as captains/pilots are responsible for getting from point A to point B without loss of life, injury, or property damage.

No disrespect to the public at large, but if you put 100 people off the street into a room and ask them the difference between autonomous, autopilot, and driver-assist features, y'all will get a lot of blank stares. When they look up from texting, that is.

Change the damn name to DriverAssist. You can bet Ford or Subaru or somebody will. Thing is, Tesla can't now because they're already committed. And with credit to Tesla, they have led the way with the long view. Remember that in barely 5 years, the entire driving landscape will have changed. Remember JohnnyCab from Total Recall? Well, it's coming.
 
Probably because there's a difference between alpha and beta.
Alpha = test pilot.
Beta = consumer-ready, with non-lethal implications.
Not really sure I agree. Those are basically just statements about how you interpret those two labels. Like I said, I don't have an AP car, but I have seen the warnings and statements you have to agree to when enabling AP. They DO NOT state that there is no danger in using the software, as you claim Beta implies.

Indeed, one stated requirement is that you have to keep your hands on the wheel. The name of the feature doesn't change the fact that the driver agreed to test and use a feature that's still under heavy development, before it was ready and out of beta, and then later complained that he didn't want to be a beta tester for that feature. The mind boggles.

Still, for the sake of argument, had Tesla called it Alpha instead of Beta with literally everything else the same, do you think we'd be in the same situation here? I do. He ignored the warnings and agreements required to enable AP. Do you think seeing Alpha instead of Beta would have stopped him?
 
I suspect this comes down to informed consent. Can people really give informed consent by clicking a button on a screen acknowledging that this feature might kill them if they use it improperly? What about other drivers of the car? If I enable AP on my car, but then my husband borrows it and tries out AP, he didn't have to sign or read anything. Is it my responsibility to fully educate all drivers of my vehicle about the risks of the beta feature that I enabled? I think it is, but I am not sure that the agree button fully conveys the level of responsibility you are accepting.

Tesla is breaking new ground here with beta testing something that you can be killed in. It probably should be controlled more like a medical trial consent than an iPhone software beta consent.
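
The per-driver gap is easy to see if you sketch how consent could be scoped. Today the acknowledgment effectively travels with the car; the medical-trial analogy argues it should travel with the person. Everything here (names, structure) is hypothetical:

```python
# Hypothetical sketch of per-driver (rather than per-car) beta consent.
# Profile names and the gating function are invented for illustration.
acknowledged = set()  # driver profiles that have accepted the beta terms

def can_engage_autopilot(driver_profile: str) -> bool:
    """Gate AP on the current driver's own consent, not the owner's."""
    return driver_profile in acknowledged

acknowledged.add("owner")
print(can_engage_autopilot("owner"))   # True
print(can_engage_autopilot("spouse"))  # False -> show the agreement first
```

A per-car scheme has no second check at all, which is exactly the borrowed-car scenario above.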
 
Still, for the sake of argument, had Tesla called it Alpha instead of Beta with literally everything else the same, do you think we'd be in the same situation here?

Alpha releases are not made available to the general public.
 
The story is telling. Since the guy had used the same road every time with zero problems, he was not paying much attention. Isn't this something that could have happened to anyone?

Sure it is. Anyone reaching WAY over to the glove box could easily snag the wheel and cause Autopilot to disengage. I'd bet good money this is exactly what happened (as already suggested).