
Model X Crash on US-101 (Mountain View, CA)

I'm sorry, I must disagree.

My point, in softer terms, is that even if you think you understand the risks rationally (which is impossible, as AP is a black-box system with unknown, ever-changing behavior), your body cannot do what is necessary to contain the risk.

Are you overweight? Ever have a cavity? A speeding ticket? Ever forget to go to the gym? Don't meditate for hours a day? Don't pop Ritalin like candy? Then you don't have what it takes to use Autopilot correctly.

Autopilot reminds me a lot of weight loss scams which promise you'll lose an impossible amount of weight "or your money back." Only nobody receives their money back, because they didn't "comply with the program." The program, of course, makes starvation demands that most humans cannot meet. And yet the humans are blamed, never the impossible program (scam). People fork over their money thinking, "I can do that." Yet they cannot.

The rational mind lacks perfect control over the unconscious and reptilian modules of the brain. You may want to pay attention, but that does not always happen. In fact, you likely don't remember the cases when it fails. And with AP, the failures are catastrophic.

"Understanding" Autopilot is not sufficient. Perfect, unwavering execution by the driver is everything, and that is unreliable. That is what most AP users do not understand. That is the true risk of using Autopilot.

The wisest realize that their bodies are not perfect, and don't tempt fate by enabling AP. I'll take a little more exercise for my brain, or even a minor collision, to avoid a black swan failure which puts me in the morgue.

Having read through your posts, I would definitely say you are not a candidate to use EAP/Autosteer in particular. Good thing you realize it. It's not for everyone, but I'd say many here find it very useful and don't have an attention span/control issue when using it. You can still option for EAP and not use Autosteer until you feel it's ready for you, or not order it on your car at all. It's not FSD though. I have no doubt there will be car shoppers who, regardless of the manufacturer's driver-assist features, will not want them at all.

From what I can tell, it sounds like you think you are a perfect driver without any driver-assist features, trusting only your own faculties and having the attention span to drive safely every time through that accident scene or many other confusing road situations.
 
Oh, this is interesting. Hearsay.
"The driver's brother told a reporter that Huang had previously complained the car would swivel toward that exact barrier and had complained to the Tesla dealership about it, but that they could not replicate the issue."

IF, as the brother claims, the driver really made that statement, then he sure didn't help his brother's cause (in the accident). Maybe he thinks he can place the blame on Tesla for the accident and death by saying his brother reported this problem previously, but in reality he is making me think... OK, so your brother knew that was a problem, yet still used Autopilot repeatedly in that area, didn't pay attention, and died as a result? That sounds like negligence to me.

I'm thinking the guy just made up that statement, but how is he going to prove it? And it will just backfire anyway.
 
I think there are two things that Tesla can do and probably should have done by now:

1. Make sure that auto-emergency braking works when an obstacle is in front of the car, regardless of the speed of the obstacle or who is in control of the car. You can't caveat it by saying that it won't work if you do x, y, or z. This is table stakes for any software that allows cars to auto-steer. I don't think it's good enough to say that "we told you so in fine print" or that "the system is in Beta" or that "it's a hard problem with false positives." Tesla claimed in 2016 that the cars have the hardware to do full self-driving. Detection of stationary obstacles should have been a pretty high priority for the software.

2. If Tesla really believes that Autopilot is in Beta and isn't saying that just to eliminate liability, it should prevent people from being able to take their hands off the wheel when Autopilot is on. If the actual functionality is "lane-assist", have the system behave like lane-assist in setting the expectations of the human driver. It practically doesn't matter that it can do 100 things better than a standard lane-assist, since the expectation from the human driver is the same, so figure out a way to enforce that expectation.
 
I think there are two things that Tesla can do and probably should have done by now:

1. Make sure that auto-emergency braking works when an obstacle is in front of the car, regardless of the speed of the obstacle or who is in control of the car. You can't caveat it by saying that it won't work if you do x, y, or z. This is table stakes for any software that allows cars to auto-steer. I don't think it's good enough to say that "we told you so in fine print" or that "the system is in Beta" or that "it's a hard problem with false positives." Tesla claimed in 2016 that the cars have the hardware to do full self-driving. Detection of stationary obstacles should have been a pretty high priority for the software.

2. If Tesla really believes that Autopilot is in Beta and isn't saying that just to eliminate liability, it should prevent people from being able to take their hands off the wheel when Autopilot is on. If the actual functionality is "lane-assist", have the system behave like lane-assist in setting the expectations of the human driver. It practically doesn't matter that it can do 100 things better than a standard lane-assist, since the expectation from the human driver is the same, so figure out a way to enforce that expectation.

The caveat to point #1 is that no other manufacturer is even close to solving this problem with a production car on the road.
This has to be solved with multiple sensors and algorithms, with image recognition being, I believe, the most important.
The hardware is there; the software is being worked on.

Hundreds of thousands of Teslas are on the road with AP, and the number is growing daily. Fatalities "due" to AP are not even a yearly occurrence, I believe.

I'm more stressed about how pathetic Caltrans is. Everyone (including Tesla - SHOCKING, I know) knows that Tesla has its work cut out for it.
 
Oh, this is interesting. Hearsay.
"The driver's brother told a reporter that Huang had previously complained the car would swivel toward that exact barrier and had complained to the Tesla dealership about it, but that they could not replicate the issue."

IF, as the brother claims, the driver really made that statement, then he sure didn't help his brother's cause (in the accident). Maybe he thinks he can place the blame on Tesla for the accident and death by saying his brother reported this problem previously, but in reality he is making me think... OK, so your brother knew that was a problem, yet still used Autopilot repeatedly in that area, didn't pay attention, and died as a result? That sounds like negligence to me.

I'm thinking the guy just made up that statement, but how is he going to prove it? And it will just backfire anyway.

He's practically saying his brother intentionally tried to commit suicide via Tesla EAP.

No, I don't really mean this, but read carefully over what the brother stated and the entire context of the situation, and see if what was said makes any sense whatsoever. Spoiler: it doesn't.
 
The caveat to point #1 is that no other manufacturer is even close to solving this problem with a production car on the road.
This has to be solved with multiple sensors and algorithms, with image recognition being, I believe, the most important.
The hardware is there; the software is being worked on.

Hundreds of thousands of Teslas are on the road with AP, and the number is growing daily. Fatalities "due" to AP are not even a yearly occurrence, I believe.

I'm more stressed about how pathetic Caltrans is. Everyone (including Tesla - SHOCKING, I know) knows that Tesla has its work cut out for it.

Would you take the same stance if there was a robot operating on patients in hospitals and killed one random patient every year? The argument would be that when it works it will be much better than human doctors and anyway a lot of patients die around the world due to human doctor negligence. So, theoretically it is ok to have a robot kill a few humans due to bugs because it’s trying to solve a hard problem. How would you distinguish between negligence caused by the robot manufacturer and a genuine corner case? Remember that autopilot software you are using is not even going through FCC, DMV or any independent testing. Everyone involved has an incentive to continue to push updates and claim more functionality than it can deliver.
 
same stance if there was a robot operating on patients in hospitals and killed one random patient every year?

Excellent example because I guarantee that that already exists.

Robots can do some procedures with better outcomes than doctors unassisted by robots, but sometimes there is a mistake with the robot. Although, like the Tesla AP driver, the MD is always in control and responsible.

But statistically, both the driver and the MD are better when they are properly assisted by, and properly using, the AP/surgical robot.
 
Would you take the same stance if there was a robot operating on patients in hospitals and killed one random patient every year? The argument would be that when it works it will be much better than human doctors and anyway a lot of patients die around the world due to human doctor negligence. So, theoretically it is ok to have a robot kill a few humans due to bugs because it’s trying to solve a hard problem. How would you distinguish between negligence caused by the robot manufacturer and a genuine corner case? Remember that autopilot software you are using is not even going through FCC, DMV or any independent testing. Everyone involved has an incentive to continue to push updates and claim more functionality than it can deliver.

I do not understand what the FCC or surgery has to do with Autopilot...
However, if a hospital were full of robots doing surgery and only one person died per year, that would be awesome! That's a much better rate than the humans' 0.89% average surgical mortality. I don't understand how it could be random, though. Unexpected or unplanned, sure, but given that a robot runs off a set of instructions which are known, and the input data is known, the result is deterministic. A lot like a line-following algorithm following a line. If the line makes a path to an obstacle, the algorithm is going to move toward the obstacle.
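
To illustrate the determinism point, here is a toy sketch (in no way Tesla's code; the function name, gain, and sample offsets are invented for illustration) of a proportional line follower that steers wherever the detected line leads, producing the same output every time it sees the same inputs:

```python
# Toy illustration only (not Tesla's code): a deterministic proportional line follower.
# Given the same detected line offsets as input, it produces the same steering output
# every run -- including steering toward an obstacle if that is where the line leads.

def steering_command(line_offset_m: float, gain: float = 0.5) -> float:
    """Steer proportionally toward the detected line center.

    line_offset_m: lateral offset of the line from the car's centerline, in meters.
    Returns a steering command (positive = steer right).
    """
    return gain * line_offset_m

# Hypothetical offsets as the painted line bends toward a barrier:
readings = [0.0, 0.2, 0.5, 0.9]
print([steering_command(r) for r in readings])  # identical output on every run
```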

Statistically, we should ban new drivers and doctors; they are worse than experienced drivers or doctors. No reason to put our lives at risk just so someone can get better. Then in 20 years, we'll magically get a new set of experienced doctors/drivers from... somewhere...
(Seriously though, getting an IV from a new nurse makes me not want to ever go to the hospital again.)
 
The vehicle had been telling the driver to "Hold Steering Wheel" for 10 seconds prior to it following the wrong lane divider. It's ambiguous whether this was because the driver had not been detected as holding the wheel for the timeout, or because the car knew it was approaching a section that it would have trouble negotiating.
"Hold Steering Wheel" is just a dumb timer; it issues a warning at a certain interval since the last time a driver's hand was detected. This timer mechanism has absolutely nothing to do with how and where Autopilot holds the line on the highway, or with where the car should go without hitting obstacles. The audio warning and the triggering of emergency braking for this imminent collision should have nothing to do with the "Hold Steering Wheel" timer either.
 
"Hold Steering Wheel" is just a dumb timer, and issue warning on certain time interval since last time a driver's hand was detected. This timer mechanism has absolutely nothing to do with how and where autopilot holds the line, and where it should go. Audio warning and kicking in of emergency brake of this imminent collision should have nothing to do with the "Hold Steering Wheel" timer either.

Incorrect. The prompts occur at different intervals based on operating conditions - much more often when the car can see construction barrels, for instance.

I've also seen them start just as it gets near a tricky spot at a rate much higher than random chance should produce.

The car also seems to change how much torque it needs to feel to decide you're taking over based on how confident it is - in miserable weather, it gives you back control with very little wheel torque.
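
To make that distinction concrete, here is a purely hypothetical sketch (not Tesla's implementation; the condition flags and interval values are made up, with the ~2-minute/~3-minute figures borrowed from an AP1 observation later in this thread) contrasting a fixed "dumb timer" with condition-dependent prompt intervals:

```python
# Hypothetical sketch only -- not Tesla's implementation.
# A fixed timer would always use one interval; a condition-dependent scheduler
# picks the interval from what the car currently sees.

import time

def nag_interval_s(sees_construction: bool, poor_weather: bool, lead_car_present: bool) -> float:
    """Choose a 'Hold Steering Wheel' prompt interval from (assumed) operating conditions."""
    if sees_construction or poor_weather:
        return 15.0           # prompt much more often near barrels or in bad weather
    if lead_car_present:
        return 180.0          # ~3 minutes while following a car
    return 120.0              # ~2 minutes with open road ahead

def should_prompt(last_hands_detected_monotonic_s: float, interval_s: float) -> bool:
    """Prompt once the chosen interval has elapsed since hand torque was last detected."""
    return time.monotonic() - last_hands_detected_monotonic_s >= interval_s
```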
 
A licensing authority is needed for autonomous behavior. There should be a test.

Not that I disagree, but I am reminded of the mouse that suggested putting a bell on the cat, so they would know where it was.
Normal driving tests are easy to game (known routes, limited area in general). They would need a method of creating multiple instances of edge cases and feeding them into the system. Or a pseudo-city: submit your car, it will be judged (and impounded so you can't record and overfit the NN).
 
It is becoming clear that you need to protect the line.
 
Haven't seen anybody toss this out yet. Back of the envelope numbers on having/using AP versus not:

cars built with AP hardware = 225k vehicles
U.S. average miles per vehicle per year = 11k miles
AP-hardware-equipped fleet miles per year at the U.S. average mileage rate = 2.5B miles
U.S. vehicle fatality rate = 1 per 86M miles
Tesla AP-HW-equipped fatality rate = 1 per 320M miles

Fatalities in 2.5B miles at the U.S. fatality rate = 28
Fatalities in 2.5B miles at the AP-HW vehicle rate = 8

So that's maybe 20 lives per year.

It's hard to know how many of those 20 lives are saved because people are using AP versus not, since demographics and vehicle crash safety play a role as well. Tesla is certainly implying that some of the lives saved are due to AP usage. An NHTSA study of Tesla's internal data after the Josh Brown accident showed a 40% reduction in airbag deployments on cars that had AP Autosteer capability compared to those that did not.

(see https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF page 13)

Since the 320M/86M figure represents a 73% reduction in fatalities, the 40% reduction in serious accidents that the NHTSA calculated suggests that half or more of the reduction in Tesla's fatality figure is due to the availability of Autosteer.

So that’s 10 plus lives per year being saved by autosteer.
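
For anyone who wants to check the arithmetic, here is the same back-of-envelope estimate in a few lines of Python (same assumptions and figures as above; the 40%-of-73% attribution is the rough inference from the previous paragraphs, not an independent number):

```python
# Quick sanity check of the back-of-envelope numbers above.

ap_hw_fleet = 225_000             # cars built with AP hardware
miles_per_car_per_year = 11_000   # U.S. average annual mileage
fleet_miles = ap_hw_fleet * miles_per_car_per_year   # ~2.5B miles/year

us_rate = 1 / 86e6                # U.S. average: 1 fatality per 86M miles
ap_hw_rate = 1 / 320e6            # Tesla AP-HW fleet: 1 fatality per 320M miles

expected_us = fleet_miles * us_rate          # ~28-29 fatalities/year at the U.S. rate
expected_ap_hw = fleet_miles * ap_hw_rate    # ~8 fatalities/year at the AP-HW rate
saved_total = expected_us - expected_ap_hw   # ~20 lives/year difference

fatality_reduction = 1 - 86 / 320            # ~73% reduction vs. the U.S. average
airbag_reduction = 0.40                      # NHTSA figure for Autosteer-capable cars
# If the 40% applies to fatal crashes as well, roughly 40/73 (half or more) of the
# difference would be attributable to Autosteer availability: 10-plus lives/year.
saved_by_autosteer = saved_total * airbag_reduction / fatality_reduction

print(round(expected_us), round(expected_ap_hw), round(saved_total), round(saved_by_autosteer))
```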

This is a rough estimate, but it illustrates why, from a fleet perspective, Autopilot is still a win. And it lends support to the argument Tesla made in its announcement that discouraging the use of Autopilot is going to cost a lot more lives than it saves.

(Euro countries have lower fatality rates but lower mileage. Asian countries have higher rates and also generally lower mileage. The U.S. is by far Tesla’s largest market and has the highest annual driving distance so I just used U.S. numbers as a proxy for worldwide)

(edited out a typo and added the following)

It's worth noting that there are 50 times more moderate-to-serious injuries than fatalities. We talk about fatalities a lot, but the scale of the carnage is actually much greater than fatality numbers alone convey.
 
Would you take the same stance if there was a robot operating on patients in hospitals and killed one random patient every year? The argument would be that when it works it will be much better than human doctors and anyway a lot of patients die around the world due to human doctor negligence. So, theoretically it is ok to have a robot kill a few humans due to bugs because it’s trying to solve a hard problem. How would you distinguish between negligence caused by the robot manufacturer and a genuine corner case? Remember that autopilot software you are using is not even going through FCC, DMV or any independent testing. Everyone involved has an incentive to continue to push updates and claim more functionality than it can deliver.
That analogy would only hold if the robotic surgeon had a licensed surgeon standing by to intervene in the event the robot was going to make a mistake.

This is not directed at you, but at those who think the car should be responsible for navigating every possible scenario with AP engaged. Why are so many people so quick to relinquish their responsibility for driving their car? If some people are really incapable of driving a car with Autopilot enabled and still paying attention, then by all means they should stop using it. I understand that road hypnosis is real and using Autopilot may even contribute to it for some people, but it's your responsibility as the driver to manage your state of awareness, and the steering prompts are designed to help with that. I've zoned out when driving before, and my response was to develop habits that focus my attention. If you want to be a passenger in your car, find somebody else to drive. Autopilot is a driver-assistance package, not a driver replacement and not passenger assistance.
 
Today on the way home, I repeated a couple of scenarios. In both cases I am in the rightmost lane travelling west (i.e., the slow lane); there is grass but no concrete shoulder.

1. On a long stretch of solid lane marker on the RHS, with AP2 on, auto lane change never crosses the line.

2. Here, every exit has a gap in the solid markings, but after the gap there is a V gore area with cross markings, and the exit post is at the end.

So, right after the highway gap, when the car's nose had just passed the tip of the V gore area, I flipped the right turn signal and the car would turn into the gore area. It seems to me the AP decided to follow the rightmost curved line of the V area, as if it were safe to do so, but ignored the cross markings and the leftmost V lane markers.

I tried these cases at several exits and it is repeatable. I'm not sure it is the same scenario as the accident, but I will definitely pay attention when I use auto lane change.
 
Would you take the same stance if there was a robot operating on patients in hospitals and killed one random patient every year? The argument would be that when it works it will be much better than human doctors and anyway a lot of patients die around the world due to human doctor negligence. So, theoretically it is ok to have a robot kill a few humans due to bugs because it’s trying to solve a hard problem. How would you distinguish between negligence caused by the robot manufacturer and a genuine corner case? Remember that autopilot software you are using is not even going through FCC, DMV or any independent testing. Everyone involved has an incentive to continue to push updates and claim more functionality than it can deliver.

Bring on the robots. The patient signs over consent for the entire surgical process. There is risk no matter what the robot does or what the doctor does.

It is not "ok" for anyone to die. Everything has its cost-benefit analysis; everything has its cost-effectiveness analysis. If it makes sense, you go forward. If it doesn't, you don't. We are now talking multiple millions of AP driving miles.

1, 2, 10, or even 100 fatalities to this point are insignificant compared to the fatalities caused by preventable human error.


Do any of those posting who are critical of AP and think Tesla should not release it actually have it enabled on their car and regularly use it?

No. Three-quarters of the posts and responses in this thread originate from people who don't own Teslas or have AP2 experience, yet they are all subject matter experts.

Those interested in being nannies trying to regulate AP usage should be forcing breathalyzers into every vehicle if they are serious about reducing death and accidents on the roads.
 
Incorrect. The prompts occur at different intervals based on operating conditions - much more often when the car can see construction barrels, for instance.

I've also seen them start just as it gets near a tricky spot at a rate much higher than random chance should produce.

The car also seems to change how much torque it needs to feel to decide you're taking over based on how confident it is - in miserable weather, it gives you back control with very little wheel torque.

Agree about conditions. I've noticed that if the car that AP is following moves into the adjoining lane, so that AP is presented with open road ahead, it very often produces a nag right then. I've experimented with shaking the wheel when I see the car ahead begin to move over, but I still get the nag even though I was torquing the wheel a few seconds before.

EDIT: I should add that my car is AP1

Very interesting observation on variable torque to disengage.
 
Agree about conditions. I've noticed that if the car that AP is following moves into the adjoining lane, so that AP is presented with open road ahead, it very often produces a nag right then. I've experimented with shaking the wheel when I see the car ahead begin to move over, but I still get the nag even though I was torquing the wheel a few seconds before.

EDIT: I should add that my car is AP1

Very interesting observation on variable torque to disengage.

Mine is AP1, too. I've certainly noticed getting a prompt as soon as the car in front moves - I'd assumed that this was just because I was past the ~2 minute timer for no car and not yet to the ~3 minutes for following a car when the car moved, but it might be a mandated event instead.