Model X Crash on US-101 (Mountain View, CA)

Huh?
A newborn baby has all its hardware, but not very functional.
A new PC with a fresh hard drive has all its hardware, but doesn't do anything but POST.

Processors are only as useful as the code running on them. If the AP2 HW needs more power, Tesla will upgrade the G/CPUs.

The AP software is a vehicle follower and a line follower. If the vehicle ahead does something wrong, the only check is the lines. If the lines are also wrong, there is no fallback. That is the way it works. People can process more (when paying attention), which is why AP is currently only an assist function.
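Purely as an illustration of that mental model (not Tesla's actual code; the function name and the simple priority order are my assumptions for the analogy), a minimal sketch of the "vehicle first, lines second, nothing third" behavior might look like this:

```python
# Minimal sketch of the priority described above: follow the lead vehicle,
# fall back on whatever painted line is found, and beyond that -- nothing.
# Illustrative only; this is not Tesla's actual implementation.

def choose_steering_cue(lead_vehicle_path=None, lane_line_path=None):
    """Return the cue the system steers by, in the priority described above."""
    if lead_vehicle_path is not None:
        # Primary cue: follow the vehicle ahead.
        return lead_vehicle_path
    if lane_line_path is not None:
        # Fallback/check: follow the painted line, even if that line
        # happens to lead out of the travel lane.
        return lane_line_path
    # No further fallback exists in this model -- the attentive driver is it.
    return None
```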

Maybe the best simple analogy I've heard since @lunitiks and the cyclops cat
 
"Tesla sped up before hitting road barrier"
"A navigation mistake by Autopilot..."
"the Tesla began a left steering movement."

The NTSB report and the coverage following it are really just awful investigating and awful reporting.

Because there is no mention at all of why these things happened.

Poorly maintained and missing lines caused the car to follow a line that led away from the lane.
All traffic-aware cruise controls speed back up to the set speed when the car in front is no longer in the way (see the sketch below).
There was no "mistake by Autopilot", there was only mistake by Driver.

I mean, NTSB, at least try to inform the public as to what is going on, and why these things are happening. There are reasons.

Ridiculous.

But regardless.... DRIVER NOT PAYING ATTENTION... should be the only headline.

All of the statements in the report are factual. Your logic however, is poor.

The fact that the driver made a mistake, or that the roads were poorly maintained/designed, does not excuse the mistake that AP made. Whether or not the driver is paying attention does not make the Autopilot mistake any more or less acceptable. The reason is not very important here. All mistakes happen for a reason, sometimes good reasons, but they're still mistakes.

The driver died because he had not corrected AP's mistake. If the driver had corrected the AP mistake, he would have lived. But AP still made the mistake, and the driver's actions do not change this. See the logic?

And I should note: AP is designed to be used on real roads with real-world conditions, real weather, real cars, and real drivers, right? All of the excuses you came up with to explain AP's problem are all too common in the real world, including distracted drivers. There is no road ahead (pun intended) for EAP/FSD/whatever if Tesla does not eventually solve these problems. And they need to do so fast, before more people get impatient or die.
 
"At 3 seconds prior to the crash and up to the time of impact with the crash attenuator, the
Tesla's speed increased from 62 to 70.8 mph, with no precrash braking or evasive steering movement detected," the NTSB said.

Just a postulation on my part, from a Tesla Model S driver who uses Autopilot on roads in the eastern United States.
I blame three parties, unequally.
One, the transportation department in California, for not resetting the energy-absorbing crash attenuators after the cleanup of each and every accident, before leaving the scene, and for not maintaining paint as it becomes ever more important for assisting drivers. No one should collide with an already-impacted barrier. No paint should aim any vehicle toward an obstacle. Paint should now be used for setting off alarms in the vehicle. Two, the driver, for not giving full attention to the traffic around him. Autopilot requires immediate attention in certain cases where it is not yet mature enough to make decisions on its own.
And three, to a lesser extent, Tesla, for having what might be a slightly mistaken piece of code in its, no doubt complicated, Autopilot decision process, within the firmware of the vehicle.
I believe these three entities all combined to make the perfect storm for this accident.
It's my belief, and if this runs contrary to some piece of evidence not known to me, then so be it. Write me off as wrong, but I think this is what occurred:
Driving down a multilane highway with a fork in the road coming up, there are first of all some signs above the roadway indicating the two possibilities. Following that, the dotted lines dividing the lanes become shorter and more frequent, showing something is about to change, requiring a decision. Then, some measured distance later, these become a solid line, indicating one should not cross from either lane to the other. Drivers being what they are [and the decision sometimes requiring effort, data, time, or a lookup on electronics], there will be a fraction of drivers still making up their minds when the paint turns solid. There will even be drivers intent upon finishing an overtake of another vehicle beside them, knowing which way to go, but going there a bit late. These drivers cross the solid white line and get themselves onto the correct fork in time to miss the paint (and the barrier) before the road actually forks.
This may be what the driver "in front of the Tesla vehicle" was doing. He may have crossed the chevron paint to achieve his goals.
Both vehicles in this scenario are now driving on solid, diverging lines with crosshatched lines (chevrons) between them, indicating 'do not drive here.' As the lead car finishes his lane change onto black roadway, he misses the final barrier and is on his way down his chosen fork in the road, event free.
The Tesla X, however, has moved left and is now driving between two solid lines, actually wide enough to be considered a lane, albeit with lots of other paint from the cross hatch. Autopilot is seeing the edges of the paint as two solid lines and is following them: blue indicators on the dash, plus a dash image of a blue vehicle in front. Suddenly, with no lead vehicle present, the Tesla X would increase its speed, higher than 68 mph, perhaps as high as 71 mph or more. It would increase its speed due solely to an Autopilot setting, plus the new condition of "no lead vehicle" at a follow distance of [reportedly] ONE. The two slightly diverging solid paint lines would appear parallel. The Tesla is now driving over cross hatches indicating 'do not drive here' to humans, but not to radar, and finally a crushed barrier, stationary and immovable in the lane 100 yards (and three seconds) ahead. With only a small number of yards remaining, the Tesla is accelerating, between paint lines, toward a crushed barrier, and hence the driver error: perhaps looking down, or aside, or busy, or ill, or sneezing, or whatever, he has not seen the barrier replace the vehicle ahead, has not made his decision on which fork to take, overlooks the crosshatched paint as "ending soon, but not yet," or is blinded by the sun, but in any event, with no effort on his part, his vehicle accelerates into said barrier.
The driver is not slowed down gradually enough to escape with injuries. Instead, he ultimately crashes three seconds later, on Autopilot, while accelerating. With no time for a radar warning beep and no time to apply automatic brakes, there is no precrash braking, leaving a driver who was ill or less than fully aware dead. The programming treats two slightly diverging lines as parallel; the paint leads the vehicle onward, as paint does for Autopilot. The cross-hatching sets off no alarm tone to indicate "pay attention, there is something amiss." Tesla may correct this with a firmware update, but they should not be held solely responsible for an accident while other causes are present. Worst of all is the Department of Transportation's crash barrier, not reset [from being recently used] and therefore effectively non-existent, and you have the resulting crash and instant death of the driver, with three parties all playing a role.
Just my personal theory of what might have occurred that day, [and eleven days previously,] on a multilane Mountain View freeway. A highway properly signed perhaps two miles earlier, but with poorly painted roadway afterwards, will cause many accidents as more technology uses paint to make important driving decisions.
Thousands of cars each day, each hour perhaps, jockeying across these very lanes, making decisions at all times, risking life and limb on California freeways. As Guy Clark says in his song, "If I can just get off of this [L.A.] freeway without getting killed or caught"... Well, this Tesla driver did not heed Guy Clark, nor did he move to the country, but chose to drive yet one more time down this Mountain View freeway... albeit quite a way from L.A.
jm2cents, DaveyJane, a Tesla S driver in Pennsylvania.
 
The Tesla is now driving over cross hatches indicating 'do not drive here' to humans, but not to radar, and finally a crushed barrier.
Like @Ugliest1 said, there are no warning chevrons.

With no time for a radar warning beep and no time to apply automatic brakes, there is no precrash braking.

There was time/distance for a warning; however, the case of a narrow barrier on a freeway also exists paired with a curved off-ramp. The car seems to lack the ability to tell whether the road will diverge from the barrier before impact. My local freeway off-ramps have you pass within a single-digit number of feet of the barrier as you exit (to say nothing of construction zones). This is the balance between false-positive and false-negative AEB that Elon spoke of, and it is a large part of why AP is only an assist tool.

That said, FCW (it warns but does not brake) could be cranked up; however, too many false warnings would lead to people disabling it, resulting in less net safety...
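For what it's worth, here is a toy sketch of that trade-off (the time-to-collision threshold and the numbers are assumptions for illustration, not Tesla's actual tuning):

```python
# Toy sketch of the false-positive vs. false-negative trade-off in a forward
# collision warning. The threshold values below are assumed for illustration.

def forward_collision_warning(time_to_collision_s, threshold_s=2.5):
    """Warn when the projected time to collision drops below the threshold."""
    return time_to_collision_s < threshold_s

# A stationary attenuator roughly 3 s ahead only triggers if the threshold is
# raised above 3 s -- but that same setting also fires on narrow barriers and
# gore points you would safely pass, training drivers to ignore or disable it.
print(forward_collision_warning(3.0, threshold_s=2.5))  # False: no warning
print(forward_collision_warning(3.0, threshold_s=3.5))  # True: earlier warning
```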

How does your scenario change if the X is replaced by a car with cruise control and possibly some level of lane assist?
 
All of the statements in the report are factual. Your logic however, is poor.

The fact that the driver made a mistake, or that the roads were poorly maintained/designed, does not excuse the mistake that AP made. Whether or not the driver is paying attention does not make the Autopilot mistake any more or less acceptable. The reason is not very important here. All mistakes happen for a reason, sometimes good reasons, but they're still mistakes.

The driver died because he had not corrected AP's mistake. If the driver had corrected the AP mistake, he would have lived. But AP still made the mistake, and the driver's actions do not change this. See the logic?

And I should note: AP is designed to be used on real roads with real-world conditions, real weather, real cars, and real drivers, right? All of the excuses you came up with to explain AP's problem are all too common in the real world, including distracted drivers. There is no road ahead (pun intended) for EAP/FSD/whatever if Tesla does not eventually solve these problems. And they need to do so fast, before more people get impatient or die.

> does not excuse the mistake that AP made.

But it wasn't a mistake. The current AP follows lines. The road was horrible, so the line went away and it found a different, well-painted one. It is just doing what it does.

The wheel steers, the tires roll, the motor powers, and the AP follows lines.

It's just all a machine.

And the driver drives.

If the machine crashes into something, it is the driver driving part that made the mistake.
 
I am not surprised that the secondary media started to jump to a bunch of speculative conclusions

Exactly.

That is why a report like this has to be very specific. You just can't say "the car sped up into the wall." It makes it seem horrible when really it is a mundane thing that all traffic-aware cruise control systems do. You can't just say "the car turned left" when really it is just the car following what line it had, because the real one was horribly maintained and gone.

Just simply a very misleading report, because they left out the truly important parts: the normal reasons why these mechanical things were happening.
 
Exactly.

That is why a report like this has to be very specific. You just can't say "the car sped up into the wall." It makes it seem horrible when really it is a mundane thing that all traffic-aware cruise control systems do. You can't just say "the car turned left" when really it is just the car following what line it had, because the real one was horribly maintained and gone.

Just simply a very misleading report, because they left out the truly important parts: the normal reasons why these mechanical things were happening.

I PMed this, but it seems too appropriate now...
From Short Circuit:

Ben Jabituya: "Unable. Malfunction."

Howard Marner: How can it refuse to turn itself off?

Skroeder: Maybe it's pissed off.

Newton Crosby: It's a machine, Schroeder. It doesn't get pissed off. It doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes.

Newton Crosby, Ben Jabituya: [in unison] It just runs programs.

Howard Marner: It usually runs programs.
 
I wonder if he froze up. I have a hard time understanding him not looking up when the car accelerated or when it made the “left steering input”. He hadn’t owned the car long enough to be THAT complacent with AP I would think.

By the time the car started accelerating he was only 4 seconds from the barrier. The sun was also in his eyes, which may have been a factor. Maybe he saw the barrier too late to not hit it, but froze up instead of acting. Some people do freeze in situations like that, and you don’t always know if you are one of them until you are in that situation.
 
All of the statements in the report are factual. Your logic however, is poor.

The fact that the driver made a mistake, or that the roads were poorly maintained/designed, does not excuse the mistake that AP made. Whether or not the driver is paying attention does not make the Autopilot mistake any more or less acceptable. The reason is not very important here. All mistakes happen for a reason, sometimes good reasons, but they're still mistakes.

The driver died because he had not corrected AP's mistake. If the driver had corrected the AP mistake, he would have lived. But AP still made the mistake, and the driver's actions do not change this. See the logic?

And I should note: AP is designed to be used on real roads with real-world conditions, real weather, real cars, and real drivers, right? All of the excuses you came up with to explain AP's problem are all too common in the real world, including distracted drivers. There is no road ahead (pun intended) for EAP/FSD/whatever if Tesla does not eventually solve these problems. And they need to do so fast, before more people get impatient or die.

AP is not designed to work perfectly on ALL roads in ALL conditions. It actually has plenty of limitations and requires a human driver to manage those limitations. What really separates AP from adaptive cruise control is the autosteer, and that only works effectively on clearly marked roads with little or no ambiguity.

It is definitely a mistake to presume that Autosteer would work in all real-world scenarios. It's really not that robust. You could say that Autosteer made a fatal mistake, and it obviously did in this case, but the real underlying mistake was allowing Autopilot to drive through that point without paying attention to the road. It would have been easy for the driver to override AP if they had been paying attention to what it was doing.
 
this makes me actually want to cancel my model x reservation. they should disable autopilot on all cars until they can find out the definitive cause

The definitive cause was the driver not paying attention to what AP was doing. If you are expecting AP to drive you around while you read a book, then yes you should most definitely cancel your reservation!

But if you consider AP to be nothing more than a semi-intelligent driver aid, then it's actually a great thing to have. If AP does make the wrong decision when faced with some confusing lines on the road, simply override it and continue safely along your route.

Bottom line for me is this: talking hypothetically, I would be happy to swap seats with the driver who crashed, 30 seconds before the fatal crash (having no knowledge of the potential crash coming up), with AP engaged, and take it from there. I would probably have switched AP off approaching the intersection anyway, or at least overridden any steering input out of my lane.
 
I wonder if he froze up. I have a hard time understanding him not looking up when the car accelerated or when it made the “left steering input”. He hadn’t owned the car long enough to be THAT complacent with AP I would think.

By the time the car started accelerating he was only 4 seconds from the barrier. The sun was also in his eyes, which may have been a factor. Maybe he saw the barrier too late to not hit it, but froze up instead of acting. Some people do freeze in situations like that, and you don’t always know if you are one of them until you are in that situation.

Yes, people on this forum are all too quick to point fingers at the victims of these AP related crashes. I'm not trying to say that these drivers weren't in some ways culpable, or that they shouldn't take responsibility for managing AP correctly. But in the real world, the circumstances are rarely black and white. They may have had good reasons for their problems, like you wrote. AP can be very unforgiving when it makes its own mistakes and bad things happen as a result.

People here could learn some empathy for their fellow drivers and understand that to err is human. Have the most vocal critics of drivers never been in an accident themselves? A system that is "user-friendly," especially an assistive system that requires human interaction, needs to be designed for the average driver, with all their flaws in mind. A system that invites user error or is unforgiving when it makes its own mistake (not handling the gore point correctly) is a deficient design. Distracted drivers are not just a problem for those drivers; they're a problem for Tesla as well! People should realize that and try not to find excuses to explain away every flaw in Tesla cars.
 
Yes, people on this forum are all too quick to point fingers at the victims of these AP related crashes.

But in pretty much all cases so far, there is nothing else to point to.

They are responsible for driving.

When they are not responsibly driving (paying attention), they are responsible for the outcome.

Anything beyond this is just pointing fingers at the machine for the human error.

It's really simple. You are driving. Pay attention and drive, no matter what assistance tech you use.
 
But in pretty much all cases so far, there is nothing else to point to.

They are responsible for driving.

When they are not responsibly driving (paying attention), they are responsible for the outcome.

Anything beyond this is just pointing fingers at the machine for the human error.

It's really simple. You are driving. Pay attention and drive, no matter what assistance tech you use.

There is human error here, no doubt. But you're ignoring the machine error. People should rightfully be pointing fingers at the machine for the error too.

Don't just wave away the machine error by saying it's part of the design specs. If an obvious mistake is "designed," then it's a bad design. This is the difference between a bad product and a good product. Both might get the job done, but why would anyone buy the bad one?

You'll never be able to get your line of argument past the general population. Perception is more important than reality, and if the perception that the machine frequently makes these errors takes hold, then it doesn't really matter anymore that the human is at fault. If people come to believe that AP makes it easy for users to make mistakes, they will stop buying the product, regardless of general statistics that show AP to be safer than human driving.

So no, it's not that simple. You can't just blame the driver, even if it is the driver's mistake. Don't pretend that Tesla has no problem here.
 
Yes, people on this forum are all too quick to point fingers at the victims of these AP related crashes. I'm not trying to say that these drivers weren't in some ways culpable, or that they shouldn't take responsibility for managing AP correctly. But in the real world, the circumstances are rarely black and white. They may have had good reasons for their problems, like you wrote. AP can be very unforgiving when it makes its own mistakes and bad things happen as a result.

People here could learn some empathy for their fellow drivers and understand that to err is human. Have the most vocal critics of drivers never been in an accident themselves? A system that is "user-friendly," especially an assistive system that requires human interaction, needs to be designed for the average driver, with all their flaws in mind. A system that invites user error or is unforgiving when it makes its own mistake (not handling the gore point correctly) is a deficient design. Distracted drivers are not just a problem for those drivers; they're a problem for Tesla as well! People should realize that and try not to find excuses to explain away every flaw in Tesla cars.
The issue is that the driver is still the only one responsible. Accidents are tragic, especially those that result in injury or fatality. If your attention is on the road and you're being attentive to what autopilot is doing, you can override the mistakes that autopilot makes. It doesn't fight you for control, although the longer you spend with your attention off the task at hand, the smaller the window you have to correct.

At best, autopilot is guilty of not avoiding an accident that the primary responsible party also failed to avoid. If the driver was blinded by the sun or had his view obstructed by the vehicle in front, then odds are that AP's cameras were obstructed as well. What would the outcome have been in the same situation without autopilot?

On the plus side, autopilot often detects and avoids situations that the driver didn't or couldn't see coming. Together, a driver and autopilot should be safer than either alone, but if the driver is going to take themselves out of the equation and rely solely upon autopilot, then they are inviting tragedy.

No autonomous or semi-autonomous system is ever going to be perfect, especially with other non-autonomous systems in the mix. Once Level 5 autonomy is a thing, those vehicles will very likely still hit/kill people and get in accidents. What's important is that they do so at a lower rate (hopefully a significantly lower rate) than human drivers.
 
There is human error here, no doubt. But you're ignoring the machine error. People should rightfully be pointing fingers at the machine for the error too.

Don't just wave away the machine error by saying it's part of the design specs. If an obvious mistake is "designed," then it's a bad design. This is the difference between a bad product and a good product. Both might get the job done, but why would anyone buy the bad one?

Don't confuse what you want AP to be with what AP currently is.
I have a truck with a feature that will keep it at the same speed regardless of road conditions or obstacles. Is that defective? Or is it designed to only handle control of speed? Is that limited scope a defect?

AP will try to avoid leaving the lane (better than my truck) and to not rear-end cars ahead (better than my truck). That is what it is designed to do. It is not designed to be autonomous, but it's a darn sight better than my truck.
If you feel AP is defective, then you should also feel all cars with only cruise control are defective, because they are designed with no regard at all for avoiding objects/cars/pedestrians...

Further, you should feel all non-AEB cars are defective due to no design consideration for collision avoidance.

AP does not cover the entire use-case environment, but it covers more than most. The fact that it does not have 100% coverage does not mean it is worse than cars with less coverage...
 
Don't confuse what you want AP to be with what AP currently is.
I have a truck with a feature that will keep it at the same speed regardless of road conditions or obstacles. Is that defective? Or is it designed to only handle control of speed? Is that limited scope a defect?

AP will try to avoid leaving the lane (better than my truck) and to not rear-end cars ahead (better than my truck). That is what it is designed to do. It is not designed to be autonomous, but it's a darn sight better than my truck.
If you feel AP is defective, then you should also feel all cars with only cruise control are defective, because they are designed with no regard at all for avoiding objects/cars/pedestrians...

Further, you should feel all non-AEB cars are defective due to no design consideration for collision avoidance.

AP does not cover the entire use-case environment, but it covers more than most. The fact that it does not have 100% coverage does not mean it is worse than cars with less coverage...

The difference between AP and dumber technologies like cruise control is that cruise control is predictable. This is a very, very important distinction. With AP, you are never quite sure whether it can handle the next curve correctly, or the next emergency vehicle, or whether it will start braking for no reason. This unpredictability makes it more difficult to use.

Also, I fear that AP might be falling into an "uncanny valley" of automation. (If you are not familiar with this graphics term, Google it.) It's gotten so smart that, counterintuitively, it becomes more difficult to use. Since AP makes mistakes at unpredictable times, but is pretty smart otherwise, drivers can get complacent, as they can never really know when the next mistake will occur. This problem might get worse as AP gets better, because as the frequency of mistakes decreases, drivers will tend to trust the system more than they should. Only when Level 5 is achieved and no human intervention is necessary will this human weak link be removed.