Model X Crash on US-101 (Mountain View, CA)

...Did Tesla's Autopilot fail to take action to avoid the imminent collision with the barrier?

I still don't understand why this question is still being asked.

Please go to the manual.

Do a "CTRL-F" and find the phrase "stationary vehicle".

Autopilot is still in beta. Some day it will reliably stop for stationary objects but for now, it is prudent to doubt its capability.

That's the meaning of beta. Remember, when I first bought mine, it couldn't work beyond 45 mph. It was an unfinished product then, and it still is now!

...Tesla should conduct a thorough test of the road markings at the crash site under various conditions, such as different times of day/night, rain, etc...

I too wish Tesla would show the videos of all the tests that have passed certain competencies.
 
114 pages on this one topic alone?!?:eek:

It's not a singular topic.

At first it was about the road, and how factors of the road played into accidents that happen routinely there. Then there was an even more important find that the crash barrier wasn't reset, and it's extremely likely that this fatality wouldn't have happened had it been reset.

Once it was discovered that AP was involved, people took their respective corners based on their own biases and worldviews.

Which is expected, but there is no way of knowing what went on in that car that day. There isn't a whole lot to go on, or any explanation as to why the driver did what he did. The entire justification for AP lies in the fact that we can predict with pretty good accuracy what went wrong.

With a human we don't have that.

With humans, truth is often stranger than fiction. That's because fiction is the story we make up to comfort other people or to comfort ourselves. If we didn't have this fiction we'd all be massively OCD, or we'd be wrapped in bubble wrap, too scared to actually live life.

You mix the unpredictability of humans with the unpredictability of life, and things happen that are completely unexplained.

Sometimes it's something tremendously great, and sometimes it's awful.

Society as a whole doesn't like that, though, especially when it comes to people dying before their time.

We can all battle it out based on what we think happened, and what we think could reduce the chances of it.

But, at the end of the day, the grim reaper was just getting his quota in and found a way to do it. Probably some weird set of circumstances/happenstances caused it. Or maybe he just glitched and did something he shouldn't have done, like texting, and it was horrible timing. All of us have glitched at some point and done something we knew was stupid.

The vast majority of us have also experienced weird situations where it's really tough to explain everything that had to come together for the event to unfold the way it did. Where we didn't realize what was happening until it was too late, and even on our best days we couldn't have prevented it.

Earlier today a 16-year-old kid suffocated in his car because somehow he got trapped in a rear folding passenger seat. How he ended up in that circumstance is anyone's guess.

The really troubling thing wasn't the event itself, but that the technology that would have saved him wasn't in place in his phone (an iPhone). But I think it will be soonish. He used Siri to call for help, but the police couldn't find him. The iPhone didn't have any technology to allow a 911 operator to find the caller quickly.

In some other forum somewhere there are likely the same kinds of arguments, where someone is talking about suing the vehicle manufacturer, and someone else is saying to sue Apple.
 
That's the meaning of imperfect, unfinished, beta product. Competencies are accomplished incrementally, not all at the same time.

I think releasing a feature with known limitations is perfectly fine, and it's a standard practice across industries. However, most expect there to be some type of safeguard to prevent the operator from operating the machine outside of the feature's limitations. For example, fly-by-wire has alternate control laws that it falls back to when it encounters faults. Only with the very last backup is the pilot the sole safeguard of the airplane.

In the case of Tesla's Autopilot, the current safeguard appears to be the driver and the driver only. If the driver is not monitoring the car constantly, then the car can operate outside of its limitations. The question then is whether the driver should be the first and only safeguard, or whether there should be another layer or two of protection before a driver needs to intervene. We all know today's drivers are often distracted, so should Autopilot count on a potentially distracted driver to correct its mistakes when Autopilot is supposed to be there to help distracted drivers?
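
To make the layering idea concrete, here is a minimal sketch of graceful degradation through fallback modes, with the human as the last resort only. This is purely illustrative: the mode names and thresholds are my assumptions, not anything from Tesla's or any fly-by-wire vendor's actual implementation.

```python
from enum import Enum, auto

class ControlMode(Enum):
    NORMAL = auto()       # full automation / full envelope protection
    DEGRADED = auto()     # reduced authority, extra warnings
    MANUAL_ONLY = auto()  # the human is the only safeguard left

def select_mode(sensor_faults: int, confidence: float) -> ControlMode:
    """Pick the most capable mode whose preconditions still hold.
    Thresholds are invented for illustration, not from any real system."""
    if sensor_faults == 0 and confidence > 0.9:
        return ControlMode.NORMAL
    if sensor_faults <= 1 and confidence > 0.5:
        return ControlMode.DEGRADED
    return ControlMode.MANUAL_ONLY

# One faulted sensor and middling confidence -> DEGRADED, i.e. the machine
# sheds authority *before* the human becomes the sole backstop.
print(select_mode(sensor_faults=1, confidence=0.7))
```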
 
...How he ended up in that circumstance is anyone's guess...


Kyle Plush turned around from his third-row seat, knelt down facing the back, then bent down into the trunk to reach for his tennis gear. The third-row seat then tipped backward, and the top of its seatback pinned his chest against the rear hatch. He was now upside down, with his legs free above the top of the third-row seat and his head at the bottom of the trunk:


[Attached image illustrating the position described above]

His mom didn't see him come home after tennis practice, so she called the police and traced the iPhone's position so they could find him, but by that time it was too late.

Many people want foldable seats but at what cost?
 
I think releasing a feature with known limitations is perfectly fine, and it's a standard practice across industries. However, most expect there to be some type of safeguard to prevent the operator from operating the machine outside of the feature's limitations. For example, fly-by-wire has alternate control laws that it falls back to when it encounters faults. Only with the very last backup is the pilot the sole safeguard of the airplane.

In the case of Tesla's Autopilot, the current safeguard appears to be the driver and the driver only. If the driver is not monitoring the car constantly, then the car can operate outside of its limitations. The question then is whether the driver should be the first and only safeguard, or whether there should be another layer or two of protection before a driver needs to intervene. We all know today's drivers are often distracted, so should Autopilot count on a potentially distracted driver to correct its mistakes when Autopilot is supposed to be there to help distracted drivers?

All L2 semiautonomous systems on the road are designed that way: the backup for the system making a bad decision is the driver.

Some systems are purposely dumbed down so as to make sure the driver doesn't become too complacent. Other systems are nothing more than lane keeping, which I don't really count.

Systems differ in how long the driver can go without holding the steering wheel, but I think most allow for the amount of time that transpired in this event.

There will be additional layers of protection for anything greater than Level 2, when sole responsibility will finally start to shift away from the human.

Now, that doesn't mean I don't agree with you that maybe the whole idea of L2 driving isn't such a good one.

I personally don't believe an L2 car should be allowed if it can't stop for stopped objects of a reasonable size. Making sure it stops if there is a stalled car in the way is an example of that expectation.
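
As a rough sketch of what that minimum expectation could look like in an L2 supervision loop (all thresholds and function names are invented for illustration, not any vendor's actual logic):

```python
HANDS_OFF_LIMIT_S = 30.0          # assumed nag timeout; varies by system
STOPPED_OBJECT_MIN_SIZE_M = 1.0   # "reasonable size" threshold, made up

def l2_supervision_step(hands_off_s, obstacle_ahead, obstacle_size_m, obstacle_speed_mps):
    """One tick of a toy L2 supervision loop: the driver is still the backup,
    but the system should also act on large stationary obstacles."""
    actions = []
    if obstacle_ahead and obstacle_speed_mps < 0.1 and obstacle_size_m >= STOPPED_OBJECT_MIN_SIZE_M:
        actions.append("brake_for_stationary_object")
    if hands_off_s > HANDS_OFF_LIMIT_S:
        actions.append("escalate_nag_then_slow_to_stop")
    return actions or ["continue_lane_keeping"]

# Stalled car ahead and the driver has been hands-off too long: both safeguards fire.
print(l2_supervision_step(hands_off_s=45, obstacle_ahead=True,
                          obstacle_size_m=2.0, obstacle_speed_mps=0.0))
```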
 
I think releasing a feature with known limitations is perfectly fine, and it's a standard practice across industries. However, most expect there to be some type of safeguard to prevent the operator from operating the machine outside of the feature's limitations.

I think there also is an issue of whether Tesla does a good enough job of explaining to users what the limitations are and what the appropriate uses are; especially since these seem to change frequently due to over-the-air updates. It's not really enough to just say "This may not work perfectly" and "Driver must pay attention at all times" and then list a whole bunch of very broad liability-oriented warnings in the guidebook. The fact that it is very common for avid Tesla drivers to argue on these boards about what is or is not an appropriate use really shows that Tesla isn't getting the message out. Frankly, I think this is intentional; Tesla wants it to seem to the world like Autopilot is super-capable, but then blame the driver for misuse any time a crash occurs when AP is turned on. Easier to do that if the instructions are nonspecific but include very broad warnings.
 
....Earlier today a 16-year-old kid suffocated in his car because somehow he got trapped in a rear folding passenger seat. How he ended up in that circumstance is anyone's guess.

The really troubling thing wasn't the event itself, but that the technology that would have saved him wasn't in place in his phone (an iPhone). But I think it will be soonish. He used Siri to call for help, but the police couldn't find him. The iPhone didn't have any technology to allow a 911 operator to find the caller quickly.

In some other forum somewhere there are likely the same kinds of arguments, where someone is talking about suing the vehicle manufacturer, and someone else is saying to sue Apple.

From what I read in the local Cincinnati Patch/Cincinnati.com and Time Magazine, the above is wrong on all kinds of fronts. It was after school, and he went to get his tennis equipment from behind the third-row seats in their Honda Odyssey minivan. The Cincinnati.com article (below) explains what happened. He apparently couldn't lift the seat to get it off him and didn't have access to his phone other than to ask Apple's Siri to call 911 for him, which he did by voice command a few times. 911 was able to pinpoint his location, but the police didn't see him, and the second 911 operator didn't tell the police that the kid was in distress, so they left the area not knowing the dire nature of the call.

Patch article - Tell Mom 'I Love Her If I Die', Teen Says As He Suffocates In Van
Cincinnati.com article with animation of how it happened - Chief Isaac: 'Something has gone terribly wrong,' dispatcher suspended
Time Magazine - Teen Who Suffocated in Minivan Seat Used Siri to Call 911 Twice. But Police Couldn't Find Him

Oh I see Tam covered this already. But yes, weird crap happens and we may never know exactly why the Tesla driver didn't take any evasive action when it was clear the car would crash into the barrier/divider and at that speed likely kill him.
 
From what I read in the local Cincinnati Patch/Cincinnati.com and Time Magazine, the above is wrong on all kinds of fronts.

How is it wrong on all kinds of fronts? I only count 3-4 ways it was wrong. :p

I said "it was anyone's guess" because I was only using it as an example of some of the weird crap that happens. It's not like I didn't know we'd have a pretty good idea of how it happened by the time I wrote about it, but it wasn't that relevant to what I was getting at. The point was being trapped, and suffocating is not something you'd predict when going for your tennis racket. That the seat wouldn't be latched, and you'd suddenly find yourself in that predicament.

I got the date of the event wrong. It happened on Tuesday rather than today (Thursday).

The kid made the phone call using Siri, and the cops couldn't find him. The reason, as I initially understood it, was that the location information wasn't as accurate as the Find My iPhone feature his mom used to pinpoint his exact location.

But, apparently I was wrong about that.

The location information supposedly was within a few feet of the van.

As Kyle Plush pleaded for help, why didn't officers find him?

So I'm not sure the new AML feature that Apple just released with iOS 11.3 would have helped. I was a bit wrong about this feature because I didn't realize AML isn't supported by 911 service in the States.
 
There have been several suggestions to rename Autopilot.

I have a suggestion: "A tipsy teenager that will hold your steering wheel for you".

Some samples, paraphrased or lightly altered from this thread.

"Recently drove from LA to Vegas and the tipsy teenager was able to steer for me almost all the way. It got confused near Vegas where the road is painted white"

"All tipsy teenagers on the road are by design that way. Where the backup for the teenager making a bad decision is the driver."

"Some day the tipsy teenager will reliably stop for stationary objects but for now, it is prudent to doubt their capability."

"I initially trusted my tipsy teenager too much. Off late, as I started doing machine learning at work, my trust in Tesla's tipsy teenager fell. "

"So if we need to be more vigilant using the tipsy teenager to hold the steering wheel than without, what nerdy tech gimmick did they sell us?!"

"The entire justification for having a tipsy teenager hold your steering wheel is in the fact that we can predict with pretty good accuracy what went wrong."

"I know that after a long trip where the tipsy teenager has been holding the wheel for me, I arrive much more relaxed and less fatigued".
 
Some of the factors leading to the fatality, drawn as a Swiss cheese model.

If you want to stop these accidents from happening, you need to do something in all three dimensions. The crash barrier is the last resort, but you want to stop the chain of events before that point. The big question in a risk assessment is: what is the probability of another crash of this kind?

Tesla is in a squeeze between their customers' wishes and their marketing, but I expect to see a speed-dependent nag coming, demanding torque on the wheel every 2 seconds at speeds above 50 mph. If AP2 were more trustworthy, we would not need that. The sad thing for those of us who have survived sloppy use of AP so far is that with that kind of nag, AP is not useful anymore (at those speeds).


[Attached image: Swiss cheese model of the crash factors]
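
For what it's worth, the Swiss cheese picture boils down to simple arithmetic: a fatality requires every layer to fail at once, so hardening any single layer cuts the overall probability. The numbers below are invented purely to show the calculation, not estimates for this crash:

```python
# Hypothetical per-trip failure probabilities for each defensive layer.
layers = {
    "lane markings mislead AP": 0.02,
    "AP follows bad lane into gore": 0.5,
    "driver not watching at that moment": 0.1,
    "crash attenuator not reset": 0.3,
}

def p_fatality(layer_probs):
    """All layers must fail together (independence assumed for illustration)."""
    p = 1.0
    for prob in layer_probs.values():
        p *= prob
    return p

baseline = p_fatality(layers)
# Fixing just one layer (e.g. the attenuator nearly always reset) scales the result down.
fixed = p_fatality({**layers, "crash attenuator not reset": 0.01})
print(f"baseline {baseline:.2e}, attenuator fixed {fixed:.2e}")
```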
 
When a company decides to be a party to an NTSB investigation (as opposed to just a source of information), the company is making a choice. It is opting to have direct access to investigative discussions as they occur, and in exchange it agrees not to make statements about the topic of the investigation until after the investigation is completed.

There's an important reason for this rule. In general there will be a number of parties to an investigation, each of which may have contributed in some way to the mishap. Think, for example, of an airliner accident, where (at a minimum) the airline, the airplane manufacturer, and the pilots union will all be parties. These parties all provide frank information to the NTSB during the investigation, and have access to the information provided by other parties. Thus, if they speak publicly before the report comes out, they are in a position to cherry-pick preliminary information provided within the investigation by the other parties. If parties are allowed to make such "leaks", no one will be willing to speak frankly during the investigation.

Tesla accepted a position where it would have access to this non-public preliminary investigation information, and agreed to the ground rule about no public statements. Then, while remaining a participant, it started making exactly the kind of statements that are the reason for the rule: statements that Tesla had no fault and the fault was entirely that of the driver (and Caltrans). The statements cherry-picked the available evidence. That's exactly what Tesla is not allowed to do: get access to the non-public investigative information as a party, and then start talking about the accident. If it had initially chosen just to provide information to NTSB, but not to be a party receiving preliminary information, then NTSB wouldn't be having a sh!t-fit about Tesla's statements. By putting itself in a position where it was receiving non-public information, Tesla took advantage of a non-level playing field when it spoke in public.

For a multiparty airline accident that makes sense. However, this is not that. This is a specific company's proprietary system that only they have the ability to interpret.

There are no other parties with information relevant to the investigation. There are no cockpit voice recorders, FAA maintenance records, radar data, or large debris fields to survey. One car, two secondary collisions, proprietary data logs. A totally one-sided data-stream situation in which Tesla gets access to zero other data by being a party.
 
Suggestion:
Tesla should conduct a thorough test of the road markings at the crash site under various conditions, such as different times of day/night, rain, etc...

Cadillac should also get their Super Cruise checked out as well.

Burning question:
Did Tesla's Autopilot fail to take action to avoid the imminent collision with the barrier? This is a safety feature Tesla owners deserve regardless of whether the driver failed to respond to the warnings from Autopilot. What if the driver is incapacitated at the time (a heart attack, for example)?

What is the point of the test? The anecdotal report from his wife was that it didn't always fail. If you get 5 passes in 5 tests, it means little.
AP is designed to follow lanes and cars. If the lanes it can see go the wrong way, it does not currently have the level of sophistication to deviate outside the lines.
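
As an aside, to put a number on why 5 clean passes mean little: with zero failures in n trials, an exact 95% upper confidence bound on the failure rate is 1 - 0.05^(1/n), which for n = 5 still allows a failure rate of about 45%. A quick check:

```python
def upper_bound_failure_rate(n_trials, confidence=0.95):
    """Exact one-sided upper bound on the failure probability after 0 failures in n trials."""
    return 1 - (1 - confidence) ** (1 / n_trials)

for n in (5, 50, 500):
    print(n, round(upper_bound_failure_rate(n), 3))
# 5 passes still leaves a plausible failure rate near 45%;
# you need hundreds of passes before the bound gets small.
```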

If the driver had a heart attack, then the car would theoretically stay in its lane until the hands-on-wheel warnings timed out, whereupon it would stop. That itself is an improvement over pretty much any other car, which would veer/drift without steering input until it hit something.

You can say that Tesla owners (and other drivers?) deserve full crash prevention, and that is what manufacturers are working on. But it is not there yet. What is available are systems that cover more cases than older cars could.

AP is not currently designed to swerve.
FCW is designed to help warn of a collision (see the armored truck crash).
AEB is designed to help reduce the severity of a crash (see the wrong-way Phoenix crash).

None of them are rated or advertised to handle 100% of cases.
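
One toy way to see that division of labor (the time-to-collision thresholds below are made up for illustration, not taken from any manufacturer's spec): FCW is tuned to warn early enough for the driver to react, while AEB intervenes late and mainly sheds speed.

```python
FCW_TTC_S = 2.5   # assumed forward-collision-warning threshold
AEB_TTC_S = 1.0   # assumed emergency-braking threshold

def assist_response(gap_m: float, closing_speed_mps: float):
    """Return which assist features fire for a closing obstacle (toy model)."""
    if closing_speed_mps <= 0:
        return []
    ttc = gap_m / closing_speed_mps
    fired = []
    if ttc < FCW_TTC_S:
        fired.append("FCW: warn driver")
    if ttc < AEB_TTC_S:
        fired.append("AEB: brake to reduce severity")
    return fired  # neither feature promises to prevent the crash

print(assist_response(gap_m=40, closing_speed_mps=30))  # ~1.3 s TTC -> warning only
```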
 
I think releasing a feature with known limitations is perfectly fine, and it's a standard practice across industries. However, most expect there to be some type of safeguard to prevent the operator from operating the machine outside of the feature's limitations. For example, fly-by-wire has alternate control laws that it falls back to when it encounters faults. Only with the very last backup is the pilot the sole safeguard of the airplane.

In the case of Tesla's Autopilot, the current safeguard appears to be the driver and the driver only. If the driver is not monitoring the car constantly, then the car can operate outside of its limitations. The question then is whether the driver should be the first and only safeguard, or whether there should be another layer or two of protection before a driver needs to intervene. We all know today's drivers are often distracted, so should Autopilot count on a potentially distracted driver to correct its mistakes when Autopilot is supposed to be there to help distracted drivers?

You are misrepresenting the situation. Tesla has driver assistance features, not driver replacement features. Any time you rely on the system to handle something for you, you are operating outside the system limitations.

AP does not count on the driver for anything, and the driver should not count on AP for anything. The feature set may prevent a crash or reduce its severity, but if there is a crash, the driver already failed (in at-fault cases). The driver should always be aware and ready to intervene.

Start with a normal car: the driver must steer, accelerate, brake, and be aware.
Add cruise: it does not remove the requirement for the driver to monitor speed for appropriateness.
Add FCW: even though it may warn you about a potential collision, that does not remove the requirement for the driver to be aware.
Add AEB: even though it may reduce the severity of an impact, that does not remove the requirement to brake.
Add lane following: it does not remove the requirement to control the car's heading.
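
A crude sketch of that layering, with invented arbitration rules rather than Tesla's actual ones: each assist feature can only add inputs, and the driver's own inputs always remain in force, which is why responsibility never transfers at L2.

```python
def blend_commands(driver_cmd, assist_cmd):
    """Toy arbitration: assist features only fill in when the driver is silent;
    explicit driver inputs are never overridden (illustrative, not any real stack)."""
    return {
        "steer": driver_cmd.get("steer", assist_cmd.get("steer", 0.0)),
        "brake": max(driver_cmd.get("brake", 0.0), assist_cmd.get("brake", 0.0)),
        "throttle": driver_cmd.get("throttle", assist_cmd.get("throttle", 0.0)),
    }

# Lane keeping suggests a small steering correction and AEB requests some braking,
# but the driver's harder brake input (0.8) is kept, and the assist steering only
# applies because the driver gave no steering input of their own.
print(blend_commands({"brake": 0.8}, {"steer": 0.05, "brake": 0.4}))
```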
 
For a multiparty airline accident that makes sense. However, this is not that. This is a specific company's proprietary system that only they have the ability to interpret.

There are no other parties with information relevant to the investigation. There are no cockpit voice recorders, FAA maintenance records, radar data, or large debris fields to survey. One car, two secondary collisions, proprietary data logs. A totally one-sided data-stream situation in which Tesla gets access to zero other data by being a party.

As a party, I believe that Tesla also gets access to things like the preliminary thinking of NTSB staff, the ideas of experts consulted by NTSB, and statements from folks like Caltrans, the responding fire department, and the driver's family. The rule also prevents Tesla from spinning that kind of non-public stuff.

More importantly, if Tesla didn't want to be restricted as to what it could say publicly, it shouldn't have signed on as a party. Or, while it was a party, it should have asked NTSB to sign off on Tesla making some sort of statement that was limited to a true safety warning or something that wouldn't compromise the investigation. But Tesla didn't do that. Instead it flagrantly broke its participation agreement and then, despite a stern warning from NTSB, followed up with an even more flagrant violation.
 
As a party, I believe that Tesla also gets access to things like the preliminary thinking of NTSB staff, the ideas of experts consulted by NTSB, and statements from folks like Caltrans and the driver's family. The rule also prevents Tesla from spinning that kind of non-public stuff.

Sure, but all the useful info from them is public. The family is talking to the media, and the barrier condition is already known. There's nothing further for Tesla, especially with the data logged on the car's system, which removes the need for crash recreation.

More importantly, if Tesla didn't want to be restricted as to what it could say publicly, it shouldn't have signed on as a party. Or, while it was a party, it should have asked NTSB to sign off on Tesla making some sort of statement that was limited to a true safety warning or something that wouldn't compromise the investigation. But Tesla didn't do that. Instead it flagrantly broke its participation agreement and then, despite a stern warning from NTSB, followed up with an even more flagrant violation.

I can understand your point, but also Tesla's. The agreement makes NTSB an information filter. If they had agreed to pass on more information, Tesla likely wouldn't have exited the agreement. However, NTSB was more of a wall than a door. It's an issue in contracts that not everything can be written, and you rely, going in, on a meeting of the minds. Again, for this situation I don't see Tesla releasing any of the other parties' information.
 
However, NTSB was more of a wall than a door. It's an issue in contracts that not everything can be written, and you rely, going in, on a meeting of the minds. Again, for this situation I don't see Tesla releasing any of the other parties' information.

It is well understood what NTSB's ground rules are. Basically, "No unauthorized statements. Especially not statements that cast blame on others or deflect blame from oneself." This isn't a case of Tesla innocently violating the rules, or sticking with the spirit of the rules. Tesla just flagrantly broke the rules, multiple times, and then had a hissy fit when NTSB kicked them off the investigation for breaking the rules.

I can understand why Tesla might have felt that the ability to participate in the investigation wasn't nearly as important to Tesla as the ability to publicly spin the accident. But it should have made a choice between the two, not tried to do both at the same time. The rule is that you can either be a party to an investigation or you can publicly spin the mishap being investigated. You can't do both.
 
There's nothing further for Tesla, especially with the data logged on the car's system, which removes the need for crash recreation.

I note that NTSB has possession of the car's onboard data recorder, not Tesla. And I believe that it is an ongoing point of annoyance for NTSB that Tesla won't provide NTSB and other authorities with devices/software that would allow the authorities to directly pull/analyze data from the onboard recorder.

No one should be super-comfortable with the fact that Tesla has sole control of the devices that read onboard recorders (and, frankly, all of the data that Tesla is scooping up over-the-air from the fleet). Tesla has a huge incentive to not properly read/relay information that would tend to show that Tesla was responsible and every incentive to present data in a manner that casts blame on the driver. I'm not saying that Tesla would falsify/spin/fudge such data, but even the fact that they have a huge incentive to do so (and ability to do so) is kind of scary. Also, the fact that NTSB is dependent on Tesla for reading crash data is problematic, because it means that even if Tesla chooses not to be a party to the investigation, it will still have information about what kind of data NTSB is seeking from the recorder and will also have access to the full downloaded data set. This wouldn't normally be true for a non-party.