
Model X Crash on US-101 (Mountain View, CA)

Don't kid yourself. The NTSB has subpoena power including the ability to compel Tesla employees to operate Tesla's decoding software in their presence in accordance with their requests. It's difficult to imagine many Tesla employees who would be willing to go to jail to protect Tesla's own self-interests in this investigation. So the truth will come out whether Elon likes it or not.
 
Nothing further for Tesla to do, especially with the data logged on the car's system, which removes the need for crash recreation.

I think you are over-interpreting the value of the logged data. Some real-world recreation is likely to be useful/necessary to determine how the sun might have impacted driver visibility as well as the exact positions/approaches of the car that cause it to misinterpret the lane lines and the change in pavement surface. Clearly it only has problems interpreting these markings under some conditions (or approaches, speeds, angles, etc) but not others.
 
  • Like
Reactions: Matias
Don't kid yourself. The NTSB has subpoena power including the ability to compel Tesla employees to operate Tesla's decoding software in their presence in accordance with their requests. It's difficult to imagine many Tesla employees who would be willing to go to jail to protect Tesla's own self-interests in this investigation. So the truth will come out whether Elon likes it or not.

I agree... The truth will come out. But I dislike the fact that NTSB is in any way dependent on Tesla for reading the recorder. It should be readable by investigators/trusted third parties.
 
Don't kid yourself. The NTSB has subpoena power including the ability to compel Tesla employees to operate Tesla's decoding software in their presence in accordance with their requests. It's difficult to imagine many Tesla employees who would be willing to go to jail to protect Tesla's own self-interests in this investigation. So the truth will come out whether Elon likes it or not.

Which truly would be the "nuclear option". If a subpoena were filed, it would be a matter of public record, and there is no way Tesla could spin it as being for public safety. (They have made some boneheaded decisions in the past, but that would be monumentally bad news).

What is possibly more interesting to me is that they are still operating within the guidelines on other incidents. It seems mad that they choose to be a party to some investigations but not others.
 
Which truly would be the "nuclear option". If a subpoena were filed, it would be a matter of public record, and there is no way Tesla could spin it as being for public safety. (They have made some boneheaded decisions in the past, but that would be monumentally bad news).

What is possibly more interesting to me is that they are still operating within the guidelines on other incidents. It seems mad that they choose to be a party to some investigations but not others.

Makes you wonder whether they perceive a difference in culpability here, doesn't it?
 
  • Informative
Reactions: sillydriver
If the driver is inattentive, it does not make it right for Autopilot to drive toward the wall, right?
I’m not trying to be a Tesla apologist... rather, I’m sympathetic to the state of the current technology. An accident like this is currently (unfortunately) difficult to avoid with technology alone. It is therefore essential that the driver remain engaged.

Are we such a nanny state that “you can’t have nice things” because we can’t be 100% certain you won’t abuse it?
 
  • Like
Reactions: bhzmark
It is well understood what NTSB's ground rules are. Basically, "No Unauthorized Statements. Especially not statements that cast blame on others or deflect blame from oneself" This isn't a case of Tesla innocently violating the rules, or sticking with the spirit of the rules. Tesla just flagrantly broke the rules, multiple times, and then had a hissy fit when NTSB kicked them off the investigation for breaking the rules.

What I'm speaking of is the party's expectations of what would be authorized to release. Tesla basically released raw data.

I can understand why Tesla might have felt that the ability to participate in the investigation wasn't nearly as important to Tesla as the ability to publicly spin the accident. But it should have made a choice between the two, not tried to do both at the same time. The rule is that you can either be a party to an investigation or you can publicly spin the mishap being investigated. You can't do both.

Sure, and Tesla gained nothing by being a party, so the line of thought of not signing up is a reasonable one.

And what exactly do you think Tesla's word "Auto" means in "AutoPilot"?? HINT: It doesn't mean car.

The problem is not what I think. The problem is people don't RTFM, place their hopes and expectations on AP, and then complain when it doesn't meet them. Tesla's Autopilot has more functionality than a boat's or an airplane's.

I note that NTSB has possession of the car's onboard data recorder, not Tesla. And I believe that it is an ongoing point of annoyance for NTSB that Tesla won't provide NTSB and other authorities with devices/software that would allow the authorities to directly pull/analyze data from the onboard recorder.

Possession of the HW is irrelevant once the data is copied off. No one outside of Tesla's AP team will be able to do anything with a raw data dump. Just like 99% of people can't do anything with a Windows blue screen or a Linux core dump.
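
To make that concrete: raw bytes only decode into meaningful fields if you already know the exact record layout the writer used. A toy sketch in Python (the layout and field names here are invented for illustration; Tesla's actual log format is proprietary and unknown outside the company):

```python
import struct

# Hypothetical fixed-size record layout -- invented for illustration only.
# <I = timestamp (ms), <f = speed (m/s), <f = steering angle (rad), <B = flags
RECORD = struct.Struct("<IffB")

def parse_log(blob: bytes):
    """Decode records from a raw dump. This only yields sensible values
    if the guessed layout matches the one the writer actually used."""
    for offset in range(0, len(blob) - RECORD.size + 1, RECORD.size):
        ts_ms, speed, steer, flags = RECORD.unpack_from(blob, offset)
        yield {"t_s": ts_ms / 1000.0, "speed": speed, "steer": steer, "flags": flags}

# Guess the field order or widths wrong and the same bytes still "decode" --
# into plausible-looking garbage. That is why a raw dump, by itself, tells
# an outside investigator very little without the vendor's schema and tools.
```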

No one should be super-comfortable with the fact that Tesla has sole control of the devices that read onboard recorders (and, frankly, all of the data that Tesla is scooping up over-the-air from the fleet). Tesla has a huge incentive to not properly read/relay information that would tend to show that Tesla was responsible and every incentive to present data in a manner that casts blame on the driver. I'm not saying that Tesla would falsify/spin/fudge such data, but even the fact that they have a huge incentive to do so (and ability to do so) is kind of scary. Also, the fact that NTSB is dependent on Tesla for reading crash data is problematic, because it means that even if Tesla chooses not to be a party to the investigation, it will still have information about what kind of data NTSB is seeking from the recorder and will also have access to the full downloaded data set. This wouldn't normally be true for a non-party.

Again, Tesla has the only people capable of parsing and interpreting the AP log data. (And maybe cut the unknown-intentions comments; it's akin to me saying "I'm not saying you work for a hedge fund, but people who work for hedge funds would post negative claims against Tesla. You are suggesting ulterior motives, so the shoe fits; I'm not saying you're wearing it, mind you, but you likely have feet..." That's not useful for discussion.)

I think you are over-interpreting the value of the logged data. Some real-world recreation is likely to be useful/necessary to determine how the sun might have impacted driver visibility as well as the exact positions/approaches of the car that cause it to misinterpret the lane lines and the change in pavement surface. Clearly it only has problems interpreting these markings under some conditions (or approaches, speeds, angles, etc) but not others.

First, neither of us knows what it logged. It could have full camera capture (which would have justified the 150-meter clear line of sight report). Second, people already did drive-bys at similar times, along with sun angle studies (upthread). But most relevant: why would that be useful? It is already known that the sun can mess with the cameras (as stated in the manual). It is already known (and stated in the manual) that the system is not 100% reliable. What new data is there to discover? Even if they created a scenario, it would never match the original to the level needed to mimic the exact SW response.

I agree... The truth will come out. But I dislike the fact that NTSB is in any way dependent on Tesla for reading the recorder. It should be readable by investigators/trusted third parties.

Your feelings are yours, but how could an NTSB employee develop the knowledge of the AP system well enough to independently interpret the data? If you are going to put Tesla outside the circle of trust, you also can't rely on them for tools to decode the data.
 
  • Like
  • Disagree
Reactions: MP3Mike and NerdUno
I’m not trying to be a Tesla apologist... rather, I’m sympathetic to the state of the current technology. An accident like this is currently (unfortunately) difficult to avoid with technology alone. It is therefore essential that the driver remain engaged.

Are we such a nanny state that “you can’t have nice things” because we can’t be 100% certain you won’t abuse it?
I don't have a problem with your idea of less-than-perfect, state-of-the-art technology. However, showing at least some sign of emergency stopping for imminent collision avoidance is of paramount importance before releasing this technology. I don't fault Autopilot for not being able to read the confusing lane markings. But I do think (attempted) collision avoidance is the first and foremost requirement for deployment of Autopilot. According to Tesla's statement, Autopilot was aware of the imminent collision (a warning 6 seconds prior to the collision). But that Tesla's AI software in Autopilot would let the car run straight into a concrete barrier is beyond my understanding. In this case, we still don't know the state of the driver's capacity to control his vehicle. The buck should stop at Autopilot's collision-avoidance AI, right? Is this safety requirement too much to ask for?
 
Wow @mongo, you now get to be the big Tesla apologist here... taking over the spot of @stopcrazypp, @JeffK, and @Reciprocity.

Even all three combined won't touch the level you are at.

Astonishing!

I'll have to work on that; I am not trying to be an apologist. Having worked in automotive, software, and radar systems, I'd like to think I have something to offer the collective group. I try to point out false data and assumptions, but I really do not want to be creating excuses for failures.

Thanks for the heads up! Please feel free to PM me in the future if I go off the rails.
 
  • Like
  • Funny
Reactions: Zythryn and NerdUno
I don't have a problem with your idea of less-than-perfect, state-of-the-art technology. However, showing at least some sign of emergency stopping for imminent collision avoidance is of paramount importance before releasing this technology. I don't fault Autopilot for not being able to read the confusing lane markings. But I do think (attempted) collision avoidance is the first and foremost requirement for deployment of Autopilot.

Cruise control was introduced before FCW/AEB were ever thought of. With the existence of radar systems, are you advocating that cruise should be banned without radar?
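
For what it's worth, the core of an FCW is conceptually simple: a time-to-collision check on the range data. A minimal sketch (the threshold and function names are illustrative, not any production system's actual logic):

```python
def fcw_should_warn(range_m: float, closing_speed_mps: float,
                    threshold_s: float = 2.5) -> bool:
    """Warn when time-to-collision (range / closing speed) drops below a
    threshold. closing_speed > 0 means we are gaining on the object."""
    if closing_speed_mps <= 0:   # opening or matching speed: no threat
        return False
    return range_m / closing_speed_mps < threshold_s

# A barrier 150 m ahead at ~31 m/s gives a TTC of ~4.8 s: no warning yet at
# a 2.5 s threshold, but one would fire ~78 m out if nothing changed.
print(fcw_should_warn(150, 31))  # False
print(fcw_should_warn(70, 31))   # True (TTC ~ 2.3 s)
```

The hard part is not this arithmetic; it is deciding which radar/camera returns count as a real object in your path, which is exactly where stationary objects cause trouble.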

According to Tesla's statement, Autopilot was aware of the imminent collision (a warning 6 seconds prior to the collision). But that Tesla's AI software in Autopilot would let the car run straight into a concrete barrier is beyond my understanding. In this case, we still don't know the state of the driver's capacity to control his vehicle. The buck should stop at Autopilot's collision-avoidance AI, right? Is this safety requirement too much to ask for?

Where are you getting that from? Tesla stated:
The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.
I have seen no mention of a collision warning, so if there is other data out there, I am very interested.
 
  • Like
Reactions: MP3Mike
I don't understand how people can say this. Did you not take the car out for a test drive with a sales person? I understood within my first visit to Tesla that Autopilot is not autopilot. The other big clue is that you have to pay an additional $4000 to upgrade to FSD (when available). If it was already autopilot what would be the need for an additional upgrade?
What if you already paid the additional $4,000 to upgrade to FSD?
 
So the truth will come out whether Elon likes it or not.

Funny, what "truth"?

This isn't "Model X-Files"

The system is pretty easy to understand: it follows lines. When lines are badly maintained, a driver who is paying attention will handle it fine.

Driver not paying attention at all = bad accident.

I'm not sure what people are expecting will come out of this beyond "conclusion: driver not paying attention"

That's the only truth out there.
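
To illustrate the "it follows lines" point above, here is a drastically simplified toy sketch (invented names and gains; this is not Tesla's actual control logic):

```python
def lane_center_steer(left_m, right_m, gain=0.5):
    """Steer toward the midpoint of the detected lane lines.
    left_m/right_m are lateral offsets of each line from the car (meters);
    None means that line was not detected."""
    if left_m is None and right_m is None:
        return None                 # no lines at all: alert / disengage
    if left_m is None:
        center = right_m - 1.8      # track one line, assume ~3.6 m lane
    elif right_m is None:
        center = left_m + 1.8
    else:
        center = (left_m + right_m) / 2.0
    return gain * center            # steering command toward that center

# At a gore point the lines diverge. If the system latches onto the wrong
# pair (say, the exit ramp's left line and the through lane's right line),
# the "center" it computes can point straight at the barrier between them:
print(lane_center_steer(-1.5, 2.5))  # 0.25 -> steers toward a bogus center
```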
 
I'm not sure what people are expecting will come out of this beyond "conclusion: driver not paying attention"

I am curious if the conclusion will drive Tesla to make changes to further attempt to mitigate people misusing the system. That would be the potential other outcome beyond just "the driver wasn't paying attention".

In the last "driver wasn't paying attention" AP fatality, we got the increased nag system we have today vs. the one from AP 7.0.
 
Cruise control was introduced before FCW/AEB were ever thought of. With the existence of radar systems, are you advocating that cruise should be banned without radar?



Where are you getting that from? Tesla stated:
I have seen no mention of a collision warning, so if there is other data out there, I am very interested.
Cruise control does not have AI; therefore, there's no problem using it as is.
That was the ONLY object that triggered multiple warnings. If a concrete barrier is not warning-worthy, can you think of anything that is?
Finally, would you drive a car that doesn't have brakes?
 
Cruise control does not have AI; therefore, there's no problem using it as is.
How does the human differentiation of the software type alter its function? They are both just microprocessors running code.

That was the ONLY object that triggered multiple warnings. If a concrete barrier is not warning-worthy, can you think of anything that is?
What warnings are you talking about?
The driver had received several visual and one audible hands-on warning earlier in the drive
is talking specifically about hand-detection warnings, which occurred at a different point in time than the crash.

Finally, would you drive a car that doesn't have brakes?
I have driven vehicles with severely reduced braking due to the NTSB NOT forcing GM to issue a recall on rust-prone brake lines. Also due to two other cases of failed lines/hoses and one case of a failed vacuum booster.
Why? What does that have to do with anything?
 
And maybe cut the unknown-intentions comments; it's akin to me saying "I'm not saying you work for a hedge fund, but people who work for hedge funds would post negative claims against Tesla. You are suggesting ulterior motives, so the shoe fits; I'm not saying you're wearing it, mind you, but you likely have feet..." That's not useful for discussion.

This isn't me saying "I don't trust Tesla" (in fact, I don't, but that's not relevant). This is me saying that an independent governmental safety investigation shouldn't ever be in a position where it must trust any one interested party. The whole point of such an investigation is to provide an unbiased set of expert eyes that can independently collect, review and interpret all relevant information and basic assumptions. I would be just as concerned about the process if Tesla were a company that I did trust. The point is that NTSB should accept assistance from parties but should never be absolutely dependent on such assistance.

Remember, the main purpose of this sort of investigation is not to allocate guilt, or liability, or financial damages. It is to have an impartial expert engineering investigatory organization look at a novel mishap, gather as much information about the mishap as possible, identify all of the factors that contributed to the mishap, and make recommendations for how those factors can be mitigated in the future to improve public safety.

It is likely that there will be lots of factors identified here, and recommendations made regarding semi-autonomous system designs and safety limitations; the types of instructions that need to be given to drivers of cars with semi-autonomous systems; the designs of highway interchanges and safety barriers; and methods for firefighters responding to electric car fires. These recommendations (and what amounts to a case study) will be available to, and taken into consideration by, safety regulators, car manufacturers, fleet operators, highway operators, and first responders. The point is not to "punish" Tesla or to blame Tesla; that's what the courts are for. The point is to let all industry participants learn the lessons of this accident and not repeat whatever mistakes might have been made here.

As someone who uses the highways, it is very important to me that the raw data needed for this process not be siloed behind a proprietary wall put up by one interested party. If carmakers are going to start collecting reams of data (especially on in-vehicle data recorders but also via wireless connections between the car and the manufacturer), they need to work with NTSB to make sure that NTSB (and probably independent experts/academics) have a means of directly accessing recorded data and a pathway to developing an independent ability to interpret/analyze that data.
 
...Is this safety requirement too much to ask for?

It is not too much to strive for, but the question is how soon that goal will be realized.

Again, as mentioned before, when I first got my Autopilot, the maximum speed at which I could activate it was 45 MPH on my CA-99 freeway, where the speed limit is 70 MPH. Was it too much for me to ask for Autopilot to work at a normal freeway speed? I was not asking for super speed here.

No matter what I demanded, that was its capability at the time. I had to wait for Tesla engineers to finish working on the speed increase.

Back to the car's inability to consistently avoid crashing into a stationary object:

Radar is a flawed technology that scientists have been working to perfect for a very long time.

It has a problem recognizing which objects are dangerous and which are not.

In World War II, Allied airplanes would release aluminum chaff, which rendered Germany's radars useless because very small, harmless pieces of aluminum were indistinguishable from very big, dangerous bombers.

Now, back to the modern day:

Why Tesla's Autopilot Can't See a Stopped Firetruck
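
The gist of that article: radar sees lots of stationary returns (signs, bridges, guardrails), so systems routinely filter out anything with near-zero ground speed to avoid constant false braking. A toy sketch of that clutter-rejection idea (names and thresholds invented; real automotive tracking is far more sophisticated):

```python
EGO_SPEED = 30.0  # m/s, our own speed over the ground

def is_stationary(closing_speed_mps: float, ego_speed: float = EGO_SPEED,
                  tol: float = 1.0) -> bool:
    """A return closing on us at ~our own speed has ~zero ground speed:
    part of the stationary world (bridge, sign... or a stopped firetruck)."""
    return abs(ego_speed - closing_speed_mps) < tol

returns = [
    {"obj": "car ahead doing 25 m/s", "closing": 5.0},
    {"obj": "overhead sign",          "closing": 30.0},
    {"obj": "stopped firetruck",      "closing": 30.0},  # indistinguishable
]

tracked = [r["obj"] for r in returns if not is_stationary(r["closing"])]
print(tracked)  # only the moving car survives; the firetruck is filtered
                # out along with the sign -- the crux of the problem
```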
 
Tesla basically released raw data.

What Tesla did was nothing like a release of raw data. The raw data is likely a huge set of measurements (maybe even camera/radar images), likely recorded at high frequency. That raw data would be a huge data file, not a three-paragraph statement to the press.
What Tesla released was (at best) a set of cherry-picked interpretations of the data (i.e., "the driver would have had ___ seconds to react" [without information about what assumptions/data underlie that statement or whether it takes into account things like sun glare] and "the Tesla gave __ warnings" [without information about when those warnings occurred]) and some conclusions about blame/cause.
What Tesla released was (at best) a set of cherry-picked interpretations of the data (ie "the driver would have had ___ seconds to react" [without information about what assumptions/data underly that statement or whether it takes into account things like sunglare] and "the Tesla gave __ warnings" [without information about when those warnings occurred]) and some conclusions about blame/cause.