
Model X Crash on US-101 (Mountain View, CA)

It doesn't work that way. AP isn't like a product that's developed and tested. It's a machine that learns. There's no way to replicate the real-world experience that AP is gaining. That's why Uber and other autonomous programs are out on the streets even with the risks, as Uber discovered in AZ.

Yup... But Uber and the other companies are testing and refining their autonomous vehicles using (supposedly) trained safety drivers who are specifically trying to test features of the autonomous system. That is also largely how other manufacturers seem to have tested/developed their L2/L3 systems. Tesla seems to be unique (or pretty darn close to unique) in terms of releasing to owners a system at an early stage of development with so little guidance on how it should be used and so few use restrictions. This seems to reflect (i) a release schedule that was accelerated because Tesla sold features before they were actually ready and then was pressured by owners to actually finally release something, (ii) a willingness of Tesla to let its owners figure out how the system works through experimentation (rather than restrictions/instructions), (iii) Tesla's notion that it can always just fix/improve things OTA, (iv) Tesla's relative willingness to use its owners as more-or-less-willing guinea pigs for testing, and (v) Tesla's general belief that existing industry practices (in manufacturing, supply chain, testing, development, sales, and pretty much everything else) are inefficient, expensive and time consuming and should be redesigned from the ground up by taking what other manufacturers would consider to be shortcuts.

As the recent articles about Tesla's problems with improperly screened Chinese suppliers and a factory designed with too many robots are showing, taking shortcuts to save money or speed up processes often backfires. It seems to me that this is also what happens when Tesla does less in-house, pre-release testing of features than would traditionally have been done.
 
...No one forced Tesla to start selling AP 2 to customers before it was actually ready...

It may be counter-intuitive, but usually, if companies want to release a beta, they have to offer some kind of incentive to beta testers.

It's the reverse with Tesla. The demand is so great that if owners want to be included in the beta pool, they have to pay a big sum of money.

Another example is comma.ai. It's a pure beta company from the start.

People can die using unproven and unfinished products, but that does not stop people from buying them!
 
People paid good money for Autopilot in 10/2016 and they got NOTHING at that time!

That's because it was a beta then, and it still is.

There is no surprise in that fact.

How can anyone pay for an unfinished product and be shocked that it's not finished?

Beta is not for everyone. It is for early adopters who are problem solvers and not for those who ignore instructions.

And in my view, this unique practice of selling features before they have been released and then releasing the features in "beta" is a huge problem. Tesla can't adopt a practice that no one else has, and then use that practice as a justification for releasing a product in an unsafe form.
 
The problem is a human driver who has a driver's license but does not act like one.
And, like so many others including Tesla, do you know what this guy was doing right before the accident? Critics here talk as if he was playing video games for the 5 seconds before his crash. What if he spent those seconds checking the side and rear-view mirrors? He would have been paying full attention like any other responsible driver and had the same result. You can't check your mirrors or blind spot without taking your eyes off the road.
 
I've read this several times, and am having a lot of trouble understanding what you are saying, especially with respect to "the honor of machine/hardware/software is well protected" . Could you clarify?

Take the Air France 447 crash: the final findings, as reported in this article:

Air France 447 Crash: Final Report Points to Pilot Error, Confusion

do not mention the simple reason why the crash happened in the first place:

Everyone was happy flying until the pitot tubes malfunctioned!

If the pitot tubes hadn't malfunctioned, everyone on that plane would still be alive today.

But no!!!! They don't blame the hardware malfunction!

They even seemed to make the machines out to be heroes for repeatedly warning the pilots.

They blamed the human pilots. They described them as incompetent.

That's quite harsh but it's very simple: a driver needs to know how to drive and a pilot needs to know how to pilot.

Blaming hardware/software has not been working out in aviation autopilot at all!
 
It may be counter-intuitive, but usually, if companies want to release a beta, they have to offer some kind of incentive to beta testers.

It's the reverse with Tesla. The demand is so great that if owners want to be included in the beta pool, they have to pay a big sum of money.

Tesla's whole notion of "beta testing" with customers (whether incentivized or "volunteering") is unique in the auto industry. This sort of testing is common for software and websites, not for products like cars that can kill people. Car manufacturers traditionally do their testing using professional test drivers/road testers who follow a testing protocol and record lots of information. This happens before members of the public start receiving the vehicle.
 
People paid good money for Autopilot in 10/2016 and they got NOTHING at that time!

That's because it was a beta then, and it still is.

There is no surprise in that fact.

How can anyone pay for an unfinished product and be shocked that it's not finished?

Beta is not for everyone. It is for early adopters who are problem solvers and not for those who ignore instructions.

Autopilot safety has improved incrementally from NOTHING to something today.

For those who are inattentive, I think they still die whether they have Autopilot or not.

For those who are willing to follow instructions during beta period, I think Autopilot makes driving much safer.

As for the whole pool of all kinds of drivers (good, average, reckless...), including those who use Autopilot and those who bought it but don't use it, Tesla's statistics say cars with Autopilot hardware are 3.7 times safer than those without it.

Subjectively, it may be debatable but it is hard to beat the statistics!
Understood, the evolution of Autopilot needs real-world experience. But while it does reduce traffic accidents, it also creates a false sense of safety, because drivers misunderstand when and where Autopilot is in trouble.
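
To see how a headline ratio like "3.7 times safer" is typically computed, and why it can be debated, here is a minimal sketch with made-up numbers (none of these figures are Tesla's actual data; the function name and counts are just placeholders):

def crashes_per_million_miles(crashes, miles):
    # Crash rate normalized to one million miles driven.
    return crashes / (miles / 1_000_000)

# Hypothetical fleet data, for illustration only
rate_without_ap = crashes_per_million_miles(crashes=1_300, miles=1_000_000_000)  # 1.30
rate_with_ap = crashes_per_million_miles(crashes=350, miles=1_000_000_000)       # 0.35

print(rate_without_ap / rate_with_ap)  # ~3.7, but the ratio only means something
# if both groups drive comparable roads, weather, and mileage mixes.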
 
Eh? I can’t speak to Honda or Ford but I am very familiar with GM and Toyota LKA and they have little in common with AP.
AP is applying torque to the wheel nearly all the time to keep the vehicle centered in the lane.
LKA only activates if you stray too close to the edge of the lane.

This is no longer true for many newer systems with active steering.
For example, BMW and Mercedes have active lane-keeping assistance with automatic steering for semi-autonomous driving, in addition to lane departure warning and correction if you stray over the edge of the lane. BMW also has automatic lane change when you move the turn signal stalk in the direction you want.

Maybe Tesla's system is a bit better, but given its current limitations, other manufacturers are catching up.
And since none of the systems, including Tesla's, can be trusted, and they all need the driver's vigilant attention, AP at this point does not have much advantage over other manufacturers' steering assistance. But it costs a lot more.
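
To make the distinction between the two approaches concrete, here is a conceptual sketch (not any manufacturer's actual control law; the gains and thresholds are invented):

# lane_offset_m: lateral distance from lane center, positive = drifting right

def autosteer_torque(lane_offset_m, gain=2.0):
    # Continuous centering: always commands corrective torque toward lane center.
    return -gain * lane_offset_m

def lka_torque(lane_offset_m, half_lane_width_m=1.8, trigger_margin_m=0.3, gain=2.0):
    # Edge-triggered assist: does nothing until the car nears the lane boundary.
    if abs(lane_offset_m) < half_lane_width_m - trigger_margin_m:
        return 0.0
    return -gain * lane_offset_m

for offset in (0.1, 0.8, 1.6):
    print(offset, autosteer_torque(offset), lka_torque(offset))
# The continuous version corrects even a 0.1 m drift; the edge-triggered sketch
# only acts once the car is within 0.3 m of the lane line.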
 
  • Like
Reactions: NerdUno
Take the Air France 447 crash: the final findings, as reported in this article:

Air France 447 Crash: Final Report Points to Pilot Error, Confusion

do not mention why the crash happened in the first place.

Everyone was happy flying until the pitot tubes malfunctioned!

If the pitot tubes hadn't malfunctioned, everyone on that plane would still be alive today.

But no!!!! They don't blame the hardware malfunction!

They even seemed to make the machines out to be heroes for repeatedly warning the pilots.

They blamed the human pilots. They described them as incompetent.

That's quite harsh but it's very simple: a driver needs to know how to drive and a pilot needs to know how to pilot.

Blaming hardware/software has not been working out in aviation autopilot at all!

I think you should read the actual report by the French equivalent of the NTSB (it's at
https://www.bea.aero/docspa/2009/f-cp090601.en/pdf/f-cp090601.en.pdf
and is, fortunately, translated into English). The conclusions begin on page 199.

Significantly, it states:

"The aeroplane went into a sustained stall, signalled by the stall warning and strong buffet. Despite these persistent symptoms, the crew never understood that they were stalling and consequently never applied a recovery manoeuvre. The combination of the ergonomics of the warning design, the conditions in which airline pilots are trained and exposed to stalls during their professional training and the process of recurrent training does not generate the expected behaviour in any acceptable reliable way. In its current form, recognizing the stall warning, even associated with buffet, supposes that the crew accords a minimum level of “legitimacy” to it. This then supposes sufficient previous experience of stalls, a minimum of cognitive availability and understanding of the situation, knowledge of the aeroplane (and its protection modes) and its flight physics. An examination of the current training for airline pilots does not, in general, provide convincing indications of the building and maintenance of the associated skills. More generally, the double failure of the planned procedural responses shows the limits of the current safety model. When crew action is expected, it is always supposed that they will be capable of initial control of the flight path and of a rapid diagnosis that will allow them to identify the correct entry in the dictionary of procedures. A crew can be faced with an unexpected situation leading to a momentary but profound loss of comprehension. If, in this case, the supposed capacity for initial mastery and then diagnosis is lost, the safety model is then in “common failure mode”. During this event, the initial inability to master the flight path also made it impossible to understand the situation and to access the planned solution."

So it doesn't just say "pilot's fault." It also recommends a whole lot of changes to training and warning systems.

Tesla's training for AP, by your own admission, amounts to "pay attention at all times." That's a far cry from what is discussed here.

Also, once again, pilots have lots of training; drivers have much less. Therefore, for a car to be safe, it must be much more foolproof.

But more importantly, safety investigations aren't about assigning "fault" to one party. They are about finding all of the factors that contributed to an accident, and recommending mitigations/safety fixes.

In Mountain View it is almost certain that AP, driver inattention, roadway design, and the condition of the safety barrier (along, possibly, with other factors) all contributed to the crash.
 
And, like so many others including Tesla, do you know what this guy was doing right before the accident? Critics here talk as if he was playing video games for the 5 seconds before his crash. What if he spent those seconds checking the side and rear-view mirrors? He would have been paying full attention like any other responsible driver and had the same result. You can't check your mirrors or blind spot without taking your eyes off the road.

I can tell if I'm out of my lane while looking in the rear view mirror. It's pretty obvious.

I think we can safely conclude that eyes were not on the road. People are now dying in greater numbers due to texting than drunk driving.
 
I can tell if I'm out of my lane while looking in the rear view mirror. It's pretty obvious.

I think we can safely conclude that eyes were not on the road. People are now dying in greater numbers due to texting than drunk driving.

When it is rush hour traffic and you discover that you are in a lane, all by yourself, it is probably not a lane.
 
I know we've gone over what could have happened so many times already, but tonight I was looking at some of the images captured and recovered by @wk057 from other accidents Teslas were involved in. A lot of detail can be seen of the roadway ahead. I feel like that gave me an even better idea of how much of the scene would have been visible to him. After watching these, I can only assume that Tesla recovered at least 5 seconds of recorded video/images, combined with vehicle data, to make the statements they did about the unobstructed view of the barrier. We know there were likely other factors that contributed to the accident, but ultimately the car is under the driver's control, and it is up to him to take evasive maneuvers if needed. What I think it comes down to is that the AP system would not have prevented him from turning the wheel or making other moves such as braking. He is still the master of the vehicle. He must have taken corrective action in the past, since he mentioned several problems in this spot and didn't crash previously.

I do feel for the wife and family especially to have it all over the press but that seems to be their choice.
 
But more importantly, safety investigations aren't about assigning "fault" to one party. They are about finding all of the factors that contributed to an accident, and recommending mitigations/safety fixes.

In Mountain View it is almost certain that AP, driver inattention, roadway design, and the condition of the safety barrier (along, possibly, with other factors) all contributed to the crash.

This. It's typically a "perfect storm" of compounding errors that leads to catastrophic results. With Autopilot, there are two entities that should be watching the road (the driver and the AP), and they both need to fail simultaneously for tragedies like this to happen. Fortunately, this happens rarely. Tesla's defense of the Mountain View incident does not claim that AP is perfect; they acknowledge it has shortcomings and that they are continuously working to improve it. Perfection is unachievable and unrealistic.

But perfect storms happen, and any one improved factor would probably have mitigated this incident. With a better designed/marked roadway, and/or with improved software, AP might not have made the mistake it did. With an intact crash barrier, the impact would likely have been survivable. If the driver had been paying proper attention, he almost certainly would have been able to avoid the crash.

ALL of these fronts need to be improved. CalTrans needs to fix that section of the roadway. Tesla needs to keep improving their software, and possibly their hardware (e.g. lidar, additional radars/cameras, etc.). They need to better educate, up front, the drivers who buy their AP-enabled cars, to point out known weaknesses and gotchas. (I had to discover for myself that AP is lousy at handling single-lane freeway interchanges, and also that it tends to incorrectly interpret pavement color changes as lane markings; see the video I posted.)

Currently Autopilot has two types of alerts: a "dumb" polite timer-based alert mechanism to touch the steering wheel, and a frantic beeping "Oh Crap" setting, when a collision or failure is imminent. The first doesn't actually cause drivers to pay more attention; it's more of a snooze button. I personally think they ought to implement an intermediate "Hey, I could use a second pair of eyes" chime, when it sees something slightly out of the ordinary or ambiguous or difficult up ahead, that it could likely handle itself, but not with 100% certainty. All drivers WILL zone out now and then on AP, but they won't do it (or at least not nearly as often) when the car is specifically requesting their attention for an actual reason. A situational chime would get me to pay a lot more attention when I hear it, even if it errs on the side of caution.
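
A rough sketch of that three-tier idea (the "scene_confidence" signal and the thresholds are invented for illustration, not anything Tesla actually exposes):

def choose_alert(scene_confidence, seconds_since_hands_on, collision_imminent):
    if collision_imminent:
        return "FRANTIC_BEEP"          # the existing "Oh Crap" alarm
    if scene_confidence < 0.8:         # ambiguous lane lines, faded paint, gore points...
        return "SECOND_PAIR_OF_EYES"   # the proposed situational chime
    if seconds_since_hands_on > 30:
        return "HANDS_ON_WHEEL_NAG"    # the existing timer-based reminder
    return "NONE"

print(choose_alert(0.95, 10, False))  # NONE
print(choose_alert(0.60, 10, False))  # SECOND_PAIR_OF_EYES
print(choose_alert(0.95, 45, False))  # HANDS_ON_WHEEL_NAG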
 
This. It's typically a "perfect storm" of compounding errors that leads to catastrophic results. With Autopilot, there are two entities that should be watching the road (the driver and the AP), and they both need to fail simultaneously for tragedies like this to happen. Fortunately, this happens rarely. Tesla's defense of the Mountain View incident does not claim that AP is perfect; they acknowledge it has shortcomings and that they are continuously working to improve it. Perfection is unachievable and unrealistic.

But perfect storms happen, and any one improved factor would probably have mitigated this incident. With a better designed/marked roadway, and/or with improved software, AP might not have made the mistake it did. With an intact crash barrier, the impact would likely have been survivable. If the driver had been paying proper attention, he almost certainly would have been able to avoid the crash.

ALL of these fronts need to be improved. CalTrans needs to fix that section of the roadway. Tesla needs to keep improving their software, and possibly their hardware (e.g. lidar, additional radars/cameras, etc.). They need to better educate, up front, the drivers who buy their AP-enabled cars, to point out known weaknesses and gotchas. (I had to discover for myself that AP is lousy at handling single-lane freeway interchanges, and also that it tends to incorrectly interpret pavement color changes as lane markings; see the video I posted.)

Currently Autopilot has two types of alerts: a "dumb" polite timer-based alert mechanism to touch the steering wheel, and a frantic beeping "Oh Crap" setting, when a collision or failure is imminent. The first doesn't actually cause drivers to pay more attention; it's more of a snooze button. I personally think they ought to implement an intermediate "Hey, I could use a second pair of eyes" chime, when it sees something slightly out of the ordinary or ambiguous or difficult up ahead, that it could likely handle itself, but not with 100% certainty. All drivers WILL zone out now and then on AP, but they won't do it (or at least not nearly as often) when the car is specifically requesting their attention for an actual reason. A situational chime would get me to pay a lot more attention when I hear it, even if it errs on the side of caution.

Watched your video. That was a pretty long single lane highway section there. Do you think the speed you were traveling when you came to the curve was a factor--like it wasn't able to process the lane direction fast enough? Any idea what your speed was at that section going into the curve?
 
Watched your video. That was a pretty long single lane highway section there. Do you think the speed you were traveling when you came to the curve was a factor--like it wasn't able to process the lane direction fast enough? Any idea what your speed was at that section going into the curve?

My speed on the initial straight section was about 40 mph, in a 65 mph zone. (It may seem faster in the video, only because traffic on the main freeway to the left is almost stopped.) There was a car directly ahead of me, and I assumed AP would be able to track it properly and decelerate to match its speed around the curve. Instead, AP lost track of that car and accelerated into the curve, nearly losing control. (I'm pretty sure it would have if I hadn't yanked the steering wheel.) Evidently AP thought the speed limit was still 65 mph through the curve, despite the clearly posted 20 mph limit sign, the fact that the road lines curved far too strongly to be drivable at 65 mph (or even 40 mph), and the fact that the nav was programmed to take this sharp curve.

I plan to try again (carefully) on this road with the latest AP software update and see if the behavior is any different.
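
A rough back-of-the-envelope check shows why a curve like that can't be taken anywhere near 65 mph. The radius and the comfort limit below are assumptions for illustration, not measurements of this ramp:

import math

MPH_TO_MS = 0.44704
radius_m = 50.0                 # assumed ramp radius, not measured
comfortable_lat_accel = 3.0     # m/s^2; dry-pavement grip tops out around 8-9 m/s^2

# Maximum comfortable speed: v = sqrt(a_lat * r)
v_comfortable = math.sqrt(comfortable_lat_accel * radius_m)
print(v_comfortable / MPH_TO_MS)        # ~27 mph, in line with a 20 mph advisory sign

# Lateral acceleration required to hold the same curve at 65 mph
lat_accel_at_65 = (65 * MPH_TO_MS) ** 2 / radius_m
print(lat_accel_at_65)                  # ~17 m/s^2, roughly double what tires can deliver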
 
I think just about everyone agrees that more-or-less passive safety systems, such as Automatic Emergency Braking, Lane Departure Warning and the like improve vehicle safety. These systems leap in when the driver has made a mistake.

Although Tesla has kind of made some safety claims with respect to Autosteer, I don't think anyone has really proved that Autosteer (as implemented) actually improves safety. I view it as more of a convenience feature. If most of the crashes that it avoids are crashes that a human driver would also have avoided, but it causes serious crashes when it makes a mistake and the driver fails to correct it, then I think it is hard to justify AS as safe. We don't really have the data on this. It's something NTSB should look at.

Also, if a safety feature has elements that, as designed, are making the safety feature less safe than it would be if the elements were eliminated or modified, those elements should really be eliminated/modified ASAP, not on Elon Standard Time.

Autosteer is a lane departure warning system. Whereas other versions will warn and beep you all the way into the ditch ("you're gonna crash," wheel shake/nudge, "you're gonna crash," nudge, "you crashed"), Autosteer goes, "You're leaving your lane, I think. I'm going to steer us back to the center, but if I'm wrong, feel free to override me at any point."

The new versions seem to have swerving/truck love addressed, so the only major crash-inducing failure mode the system could trigger with minimal warning to the driver would be unwarranted emergency braking. That is why AEB is not where people would like it to be. If it is too sensitive, it is unsafe.
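
The quoted point about Autosteer's net safety is really a counterfactual accounting problem. A toy example with invented numbers (we don't have the real ones) shows why raw crash counts alone can't settle it:

baseline_crashes = 100          # crashes a comparable human-only fleet would have
avoided_by_autosteer = 30       # crashes the system prevents...
also_avoidable_by_human = 25    # ...most of which an attentive driver would have avoided anyway
induced_by_autosteer = 4        # new crashes caused when the system errs and the driver doesn't catch it

net_crashes_prevented = (avoided_by_autosteer - also_avoidable_by_human) - induced_by_autosteer
print(net_crashes_prevented)    # 1 out of 100 here: barely a net win, and it flips sign
                                # if the induced-crash number is just a little higher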
 
Let's get a few facts straight. The accident where the pedestrian was struck and killed involved an Uber test car, NOT a Tesla. The so-called safety driver was clearly not doing his/her job. The Uber car was using LIDAR, a technology that Tesla does not use. Tesla was faced with a dilemma in the Model X crash: wait for the complete government evaluation to come out months from now while its reputation is sullied in the press, or release the data Tesla had, which showed the person behind the wheel did not respond to the system prompting him/her to take control. I think it was short-sighted on the NTSB's part to sideline Tesla and their expertise in this investigation. No one can point their car on dumb cruise control at a concrete wall, not steer away, and expect a good outcome. The first person killed in a Tesla on AP allegedly had posted multiple YouTube videos of himself letting the car "drive itself" while he watched movies, etc. We all have to take final responsibility for a car's behavior, at least for now. Musk said Autopilot, like everything in the real world, will never be perfect.
 
Please check the definition of AP L2

Level 2: An advanced driver assistance system (ADAS) on the vehicle can itself actually control both steering and braking/accelerating simultaneously under some circumstances. The human driver must continue to pay full attention (“monitor the driving environment”) at all times and perform the rest of the driving task.

From NHTSA, which also has other interesting tidbits like expected feature timing and lack of planned regulation below Level 3.
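
For context, the full ladder the quoted Level 2 definition sits in can be paraphrased roughly like this (my wording, not NHTSA's official text):

# Rough paraphrase of the SAE/NHTSA driving-automation levels
DRIVING_AUTOMATION_LEVELS = {
    0: "No automation: the driver does everything.",
    1: "Driver assistance: steering OR speed is assisted; the driver does the rest.",
    2: "Partial automation: steering AND speed are assisted; the driver must monitor at all times.",
    3: "Conditional automation: the system monitors within its domain; the driver must take over on request.",
    4: "High automation: no driver takeover needed within the system's domain.",
    5: "Full automation: no driver needed anywhere.",
}
print(DRIVING_AUTOMATION_LEVELS[2])  # where Autopilot sits today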

But what specifically were you referring to?
 
I can tell if I'm out of my lane while looking in the rear view mirror. It's pretty obvious.

I think we can safely conclude that eyes were not on the road. People are now dying in greater numbers due to texting than drunk driving.
We? Don't speak for me, please. I'm not going to jump to conclusions, because I wasn't there. My entire post was critical of jumping to conclusions. You can't say at this point that the driver was irresponsible. If there is evidence proving this, then so be it. All of us check our mirrors or look over our shoulders at splits and merges. We take our eyes off the road (ahead) for 3 to 5 seconds. Not everyone has your magic third eye. All I am saying is do not assume the driver was irresponsible. Or do you have evidence otherwise?
 
Tesla's whole notion of "beta testing" with customers (whether incentivized or "volunteering") is unique in the auto industry. This sort of testing is common for software and websites, not for products like cars that can kill people. Car manufacturers traditionally do their testing using professional test drivers/road testers who follow a testing protocol and record lots of information. This happens before members of the public start receiving the vehicle.
Yet with all that testing, they still produce flawed products. The idea that a product not in beta is perfect is false. Since any product can always be improved, they are all essentially in beta.
 