Model X Crash on US-101 (Mountain View, CA)

My 2012 S doesn’t have AP. So I’ve been a Tesla driver without AP for 5.5 years, watching with interest as the forum went to 50% AP discussion, but remaining ignorant of how AP actually works.

Now have a 3 with AP. First significant use of AP yesterday, driving Santa Barbara to SF. Went in expecting AP to do most of the driving. Every time I tried that, it was scary as hell. Finally adapted to hands on the wheel, using it as an assist, with a following distance of 5. In that context, it is an awesome fatigue reducer, and doubly great in stop-and-go highway driving. But you would have to be crazy or stupid to use it hands-free with a following distance of 1. That was obvious from my first use of the technology. Plus I learned fast to override in construction zones, and to be careful on bridges and around merges/ramps.

I don’t care if they call it AP, it’s an assist. I’m used to being marketed to, so I take no offense at their hype.

Tesla is going to take a hit for its fringe drivers. Sadly.
 
But what do you think ** moments ** before the accident AP was turned on means ?!?
It implies that in the seconds before the collision, autopilot was active with a following distance of 1, but it doesn't necessarily mean that the driver activated autopilot in those moments.

Ultimately, it comes down to the difference between "autopilot was engaged" vs "the driver engaged autopilot". Based on the following comment about visual and audio attention prompts, I had assumed the former interpretation: that autopilot was engaged at the time, not activated right before the crash.
 
But what do you think ** moments ** before the accident AP was turned on means ?!?

I think you are misinterpreting that statement. Tesla also talks about the Autopilot warnings the driver had received previously in the drive. So we know he had AP on earlier.

I interpret the statement to mean AP was engaged all the way up to the moments before the crash, not that it was turned on moments before the crash.
 
Tl;dr same as previous two posts.

But what do you think ** moments ** before the accident AP was turned on means ?!?

It can mean two things.
You are interpreting it to mean a change in state "Sulu engaged the warp drive in the nick of time."

It can also indicate the state of something. "At the time of her wedding, she was engaged to her future husband."

The time "moments" before the crash is the most critical, so the state of AP at that point is what matters. AP can be engaged/on or disengaged/off.
Contrast "the driver engaged AP" (an action)
with
"AP was engaged" (a state of being).
She engaged in a conversation.
She was engaged in a conversation.
 
Driver turned it on ** moments ** before the accident
The Tesla statement could also be read such that in the moments before the accident autopilot was engaged. Meaning it could have been on for quite a while but they are just confirming that it was in fact on moments before the crash.

Beaten to it by three other comments.
 
It implies that in the seconds before the collision, autopilot was active with a following distance of 1, but it doesn't necessarily mean that the driver activated autopilot in those moments.

Ultimately, it comes down to the difference between "autopilot was engaged" vs "the driver engaged autopilot". Based on the following comment about visual and audio attention prompts, I had assumed the former interpretation: that autopilot was engaged at the time, not activated right before the crash.

“A moment corresponds to 90 seconds.” Tesla said “moments.” So we are talking minutes, or even several minutes, before the crash, not seconds.

Moment (time) - Wikipedia

Am I overanalyzing it?
 
But you would have to be crazy or stupid to use it hands-free
Agreed.

That said, I would hope that we are all good enough at reasoning to understand that "driver's hand not detected" is not to be confused with "driver's hands not on wheel". Anecdotally, I keep my hand on the wheel when I use AP, and I routinely get the "put hands on wheel" light warning because apparently how I hold the wheel near the bottom doesn't produce the right torque for the sensor.
 
Agreed.

That said, I would hope that we are all good enough at reasoning to understand that "driver's hand not detected" is not to be confused with "driver's hands not on wheel". Anecdotally, I keep my hand on the wheel when I use AP, and I routinely get the "put hands on wheel" light warning because apparently how I hold the wheel near the bottom doesn't produce the right torque for the sensor.
Yeah, I personally found that part of the statement to be a little misleading. "No steering input was detected from the driver within the six seconds prior to the crash" is more accurate than "hands not detected on the wheel", since the hand detection is tied directly to steering input. The smoother autopilot gets, the more frequently I get "hands on the wheel" reminders. It's not because my hands are elsewhere, it's because the car is going exactly where I expect it to and I'm not imparting any torque against it.

I have never received a red flashing warning or audible autopilot alert, because I immediately notice the white flash and react.
 
While it’s true that there’s a greater emphasis on increased hand flying, your Air France example is inaccurate. The Air France pilots had a systems failure giving erroneous indications (iced-over pitot tubes leading to unreliable airspeed readings).

No: the pitot tube malfunction was only temporary, and there were other airspeed indicators available. The pilots simply couldn't recover situational awareness after the autopilot malfunctioned, just like the driver of the X couldn't determine the degree of danger he was in when the X autopilot malfunctioned.

In both cases, the autopilot lulled humans into an unnatural state of zombieism. And in the Uber self-driving fatality, we saw a third case, where the driver was completely out of it. And of course there is the original AP fatality and the Chinese AP fatality, where humans would never have let their cars smash into stationary objects if they had been driving without AP. Five cases so far.

Think of the autopilot as a comfortable feather bed you lie down on, where the manual says, "when you lie in this bed, you must stay completely alert. No napping is allowed!" It's a trap. Autopilot is a nap-trap for the brain.

And it's because Autopilot = "Zombie Mode" that there was a 7x increase in fatalities per mile compared with Teslas that were not using Autopilot. Stay away from AP or risk turning into a zombie at a fatal juncture.
 
What the hell, dude, you haven’t even seen the accident or know anything about the driver, and you’re blaming the driver. There were no cars whatsoever in front of him, he was going against the heavy 101 north traffic, and no one was in the HOV lane. So it doesn’t matter what following distance you use. You yourself haven’t even used it and are commenting on the other person’s ability and judgement. Also, what BS: if they offer that setting, Tesla shouldn’t complain about it. The whole blog post was so stupid! They are giving hints to blame the driver and still say they’re sad. Tesla is BSing big time!
Reread my post. I never said anything about the driver or the specifics of the accident. Only that it is stupid or reckless to use a following distance of 1 at speed or to go hands-off.
 
First, a couple of stories about two (less serious) Autopilot behaviors that I've repeatedly tried to reproduce.

#1. When we first got my wife's Model X (AP1) in late March 2016, we nearly got in an accident in the first month assuming that the vehicle would notice a stopped car (at a stoplight) ahead of us in city traffic, specifically on a 45 MPH "expressway" with the car maybe 10-20 car lengths in front of us around a bend in the road. Boy, were we wrong. Fortunately, I hit the brakes hard enough to prevent an accident. From then on, after most software updates, I would re-test this scenario (sometimes with my wife in the passenger seat, usually to her visible discomfort) to see if it had improved. After getting my own Model X (AP2) in February 2017, I continued to test this scenario. I don't recall reporting this specific scenario to my Tesla Service Center (except maybe in passing), and I never asked them to investigate it. I assumed it was a limitation of the way Autopilot worked at that point in the beta. (BTW, as of 10.4, Autopilot on my MX AP2 actually does recognize a stopped vehicle at expressway speeds, although it typically recognizes it "later" than I'm comfortable with and the MX brakes moderately hard when stopping, so I don't rely on it to "always" stop in time. I'll either disengage Autopilot or simply start reducing the TACC speed to "hint" that it should start slowing down sooner.)

#2. About 5-6 months ago, I noticed that Autopilot on my MX AP2 would recognize shoulders as lanes (both left and right shoulders in certain sections of specific highways), and that initiating a lane change would actually start the car moving into that "shoulder lane". This concerned me greatly because these "shoulder lanes" frequently narrow, especially when leading up to a bridge abutment. I was so concerned that I reported this issue to Tesla's NA Service email address (and my home service center) a few times, including taking a video with my iPhone showing the lane being detected during the route, the location and the time of day when this happened. (I never tried to change lanes while recording video.) Again, after most software updates, I would re-test the "shoulder lane" detection issue in the places I knew where I previously could reproduce it to see if it had been fixed. Fortunately, testing simply meant driving by that section of road to see if a "shoulder lane" was detected by glancing at the driver's console, nothing more. (Note: I didn't realize others had noticed this specific issue until I read this thread; I just don't have time to keep up with so many threads on the forums.)

I think you should document such issues, save the dashcam video recording if you have one, and send your concerns to Tesla. I wonder if there is a special webpage within Tesla's customer support site to do this.

I have also become aware of some dangerous areas myself (in general, just by looking at the skid marks on the road). I wonder if there is any special webpage allowing such dangerous areas to be reported to other drivers and to road services like Caltrans.


It would certainly have been useful if the driver in the Mountain View fatal accident had reported his concerns this way. That could have helped us understand what he was worried about, and it could have been tested and might or might not have been reproduced, depending on whether only his car had a problem or there was a flaw within AP.

Since cars must now have a rear-view camera, it would not be too difficult to also add a front camera and have both of them record internally, maybe the last 10 minutes in a loop (perhaps only 10 minutes for privacy purposes). Accidents could then be analyzed and reproduced later when needed.
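A rolling buffer like that is simple to do in software. Here is a minimal Python sketch of the idea, with a placeholder frame source and a simplified dump step (not any real dashcam or Tesla implementation):

```python
from collections import deque
import time

BUFFER_SECONDS = 10 * 60  # keep only the last 10 minutes, for privacy

class RollingDashcamBuffer:
    """Keeps recent camera frames in memory, discarding anything older than
    BUFFER_SECONDS. On a crash trigger, the retained window is written out
    so the accident can be analyzed later."""

    def __init__(self, window_s=BUFFER_SECONDS):
        self.window_s = window_s
        self.frames = deque()  # (timestamp, frame_bytes) pairs

    def add_frame(self, frame_bytes, now=None):
        now = time.time() if now is None else now
        self.frames.append((now, frame_bytes))
        # Drop frames that have aged out of the rolling window.
        while self.frames and now - self.frames[0][0] > self.window_s:
            self.frames.popleft()

    def dump(self, path):
        # Placeholder: a real recorder would write a proper video container.
        with open(path, "wb") as f:
            for _, frame in self.frames:
                f.write(frame)

# Front and rear cameras would each feed their own buffer;
# an airbag/crash signal would call dump() on both.
front = RollingDashcamBuffer()
front.add_frame(b"\x00" * 1024)  # stand-in for one encoded video frame
front.dump("front_camera_last_10min.bin")
```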

I imagine that in the future, when cars are connected to each other, accident locations could be stored on a map, a little bit like with Waze, so any AP could be alerted in advance about dangerous areas.
 
The brand/design of the car is relevant here for three reasons:

(i) The battery-sled design confused firefighters (who weren't sure how to put out the fire), and firefighters needed to remain on the scene for an extended period of time to monitor the battery for flare-ups before it could be safely moved; and

(ii) It is very possible that AP (which is marketed by Tesla as a unique system and, even if not unique, is a type of autosteer system, autosteer being a relatively new technology that is not that common) was involved in the crash; and

(iii) By Tesla's own admission, systems of the car were still "in Beta", a status that (a) is very unusual for a passenger car; (b) makes the car different from virtually all other cars on the road; and (c) essentially admits that the car deserves more intense scrutiny than cars that use proven technology.

If Tesla doesn't want its cars to be singled out for scrutiny, maybe it shouldn't spend so much time trying to convince the world that its cars are uniquely advanced and are developed and engineered in a novel manner. Since it does deviate from the norm, it has to expect more scrutiny.

This isn't unique to Tesla.

If a Toyota Corolla gets into a relatively rare type of serious crash, it's hardly important that the car involved was a Toyota Corolla. On the other hand, if an experimental Toyota model gets into such a crash, the type of car is newsworthy. That's just the way it is. Plus, public apprehension about advanced vehicles is high as a result of the Uber incident.
General public fear of change also plays a large role here!
 
That instance was not caused by centering in a lane for the sake of being in the center of a lane.

The Google fender-bender happened because the car was programmed to act human and use the curb edge of a wide single lane to make a right turn without blocking through traffic. The car was going to make a right turn, then had to move back into the center of the lane to avoid the sandbags. It assumed (wrongly) that the bus would yield.

Thank you for the information.

The way I read it, and the way many road cyclists would read it, is:

1) "The google car failed to control the lane."
2) "Confusion ensued."
3) "Wreck occurred."

If cars sometimes park there, and sandbags sometimes lie there, it is not a "wide lane." It is a shoulder.

The google car did something a poorly trained cyclist would do.

Lane presence is determined by the vehicle's presence 3 to 4 feet off the center lane marker. That is where everyone is looking to guide their vehicle. If one wants to avoid accidents, one's vehicle had better occupy some of that window. This holds for bicycles, motorcycles, and autonomous cars.

Wide lanes don't exist. If it looks like a wide lane, be wary; it is actually something else.

Vehicles should not center; they should establish position near the guiding lane marker.

Thank you again for the information. [The google car behaved in a way that invited negotiation.]
 
No: the pitot tube malfunction was only temporary, and there were other airspeed indicators available. The pilots simply couldn't determine the proper state of the aircraft after the autopilot malfunctioned, just like the driver of the X couldn't determine the degree of danger he was in when the X autopilot malfunctioned.

In both cases, the autopilot lulled humans into an unnatural state of zombieism.

Think of the autopilot as a comfortable feather bed you lie down on, where the manual says, "when you lie in this bed, you must stay completely alert. No napping is allowed!" It's a trap. Autopilot is a nap-trap for the brain.
Driving is a nap-trap for the brain. The vast majority of accidents are caused by drivers who aren't paying attention to the road. To use completely made-up numbers: even if autopilot resulted in 10% more driver distraction but cut the chance that a given distraction leads to an accident by 50%, it would be safer than no autopilot. Data so far seems to indicate that autopilot increases safety.
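To make those made-up numbers concrete, here is the back-of-the-envelope arithmetic as a tiny Python sketch (the values are purely illustrative, not real crash statistics):

```python
# Relative rates without Autopilot (normalized to 1.0).
distraction_no_ap = 1.0           # how much the driver is distracted
risk_per_distraction_no_ap = 1.0  # chance a given distraction becomes an accident

# Made-up assumption from the post: AP adds 10% more distraction,
# but halves the chance that any given distraction turns into an accident.
distraction_ap = distraction_no_ap * 1.10
risk_per_distraction_ap = risk_per_distraction_no_ap * 0.50

accident_rate_no_ap = distraction_no_ap * risk_per_distraction_no_ap  # 1.00
accident_rate_ap = distraction_ap * risk_per_distraction_ap           # 0.55

print(accident_rate_no_ap, accident_rate_ap)  # 1.0 vs 0.55 -> ~45% fewer distraction-caused accidents
```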

I pay attention to the road while I'm driving and I take precautions to alleviate driver fatigue on long trips. It's nice knowing that, with autopilot engaged, if I sneeze or look over my shoulder to check for a clear lane at exactly the wrong time in traffic, it might avoid an accident for me.
 
I believe what we have here is a confluence of perfect-storm conditions contributing to the accident:

Poor lane markings
Inadequate warnings about gore zones
Signage not removed after construction last Winter
Crash cushion not reset
High rate of speed
Time of day, with possible glare on the camera
Radar limitations in detecting non-moving objects
Driver inexperience with both vehicle and route
Driver inattention

Any other contributing factors?
 
Wow, right. Whatever it is, just support Tesla and speculate that the driver was obviously at fault. Tesla never said don't use it at higher speeds!
The driver was at fault. There are lots of other elements that should have mitigated and/or prevented the situation, but ultimately, Autopilot did not wrest control from the driver and drive into the barrier. The human, for now, is in charge; that's why there's a steering wheel and a brake. It's no different from how insurance companies treat a rear-end collision: the second (and subsequent) drivers are always at fault if they hit the car in front, even if the car in front chose to brake hard to avoid a tennis ball, or something else entirely non-dangerous.
 
Yeah, I personally found that part of the statement to be a little misleading. "No steering input was detected from the driver within the six seconds prior to the crash" is more accurate than "hands not detected on the wheel", since the hand detection is tied directly to steering input.

"Hands detected" also covers any torque less than the amount needed to disengage AP. Likely the recorded variable is something like "detected_driver_torque" and/or "time_since_last_torque_detected".

My interpretation of "and the driver’s hands were not detected on the wheel for six seconds prior to the collision":
It is a fact-centric (and polite) version of "the driver was not recorded taking any evasive action."
It also negates the "overriding AP takes too much torque / causes swerving" scenarios.
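Purely as speculation about how such a signal could be computed, here is a minimal Python sketch of a torque-threshold "hands detected" check. The thresholds and the variable names (detected_driver_torque, time_since_last_torque_detected) are hypothetical, echoing the guesses above, not anything taken from Tesla's firmware or logs:

```python
# Hypothetical torque-based hands-on-wheel monitor (illustrative only).
DETECT_TORQUE_NM = 0.3    # assumed torque needed to count as "hands detected"
OVERRIDE_TORQUE_NM = 2.5  # assumed torque needed to override/disengage Autosteer
NAG_AFTER_S = 30.0        # assumed time without detected torque before a visual nag

class HandsOnWheelMonitor:
    def __init__(self):
        self.time_since_last_torque_detected = 0.0

    def update(self, detected_driver_torque, dt_s):
        """Called every control cycle with the measured steering-wheel torque (Nm)."""
        if abs(detected_driver_torque) >= OVERRIDE_TORQUE_NM:
            return "driver_override"        # strong input would disengage Autosteer
        if abs(detected_driver_torque) >= DETECT_TORQUE_NM:
            self.time_since_last_torque_detected = 0.0
            return "hands_detected"
        # Hands may be resting on the wheel, but if they impart less torque
        # than the detection threshold, this scheme cannot tell the difference.
        self.time_since_last_torque_detected += dt_s
        if self.time_since_last_torque_detected >= NAG_AFTER_S:
            return "show_hold_wheel_warning"
        return "no_torque_detected"

# A light grip near the bottom of the wheel (~0.1 Nm) reads the same as no hands at all.
monitor = HandsOnWheelMonitor()
print(monitor.update(0.1, dt_s=0.05))  # -> "no_torque_detected"
```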