WARNING: I rear-ended someone today while using Auto Pilot in my brand new P90D!

The FCWS fires off frequently. That's mostly due to my "spirited" driving style (it doesn't know I'll switch lanes before plowing into the a**hole doing 15 under in the hammer lane), but it also seems to pick up the occasional reflective sign or marker. Unlike the Forward Collision Warning system in the MS, there is no setting to adjust the sensitivity/timing of the notification. Any chance that setting would affect AEB?

I've had FCW go off _many_ times, even when set to late; I have never had AEB go off. I still contend that AEB will not prevent a collision, only reduce its severity, and thus will only activate when a collision is imminent.
 
If I'm being too blunt, I apologize, but I'd love to hear about the technical information that supports your singular answer.

I hope that it wasn't, but then again, hardware failures occur, and it would possibly be comforting to know (but obviously not from a litigation perspective).
If the hardware failed 100%, then NOTHING CHANGES AS TO DRIVER RESPONSIBILITY.
If the hardware prevents a collision, great, it managed to help prevent the driver from making a mistake, and improved the situation. If the hardware does not manage to prevent a collision, then the worst possible case is that the driver does exactly what they would have done without the hardware in place. Either way, the hardware didn't make anything worse, it only failed to make it better.

There's no technical information needed. The only way the AP could have any fault whatsoever is if it actively prevented the driver from doing what they should have done. Nobody has ever implied this to be the case.

Imagine you came up with a way to prevent 95% of traffic collisions with a fancy new device. Should you install it in as many cars as you can to get that huge life-saving advantage? Or should you not do it, because the 5% of people who weren't saved will blame you for not saving them, even though they're no worse off than they were before your device? This isn't fanciful thinking; this is basically the exact situation we're dealing with here. People want Tesla to take some responsibility, even though Tesla has been very clear about what the system can and can't do, and even though the driver of a vehicle equipped with the system cannot, by definition, be any worse off than without it.
 
Seems the new 7.1 Owner's Manual is explicit in stating Tesla's viewpoint.

Warning: Traffic-Aware Cruise Control is designed for your driving comfort and convenience and is not a collision warning or avoidance system. It is your responsibility to stay alert, drive safely, and be in control of the vehicle at all times. Never depend on Traffic-Aware Cruise Control to adequately slow down Model S. Always watch the road in front of you and be prepared to take corrective action at all times. Failure to do so can result in serious injury or death.
 

Written by lawyers for Tesla's protection in far-out corner cases, and it's self-contradictory.

If the OP is reporting accurately, this was NOT a far-out corner case.

While I've only read the first and last few dozen posts, I have an educational background in Human Factors Engineering, and also the viewpoint of having both a "Classic" and an AP Model S in the garage.

The OP is to be highly commended for pointing out what happened, and should not have been subject to the remarkable vitriol from many posters here, some of whom appear to not even be driving AP MS's. Here's a tip, especially for them: You're not in a position to understand the HF changes that occur in using AP, ESPECIALLY over time. Tesla cannot introduce an AP system that works 99.9% of the time. Or even 99.99% of the time. We need better than six sigma reliability here.

Random thoughts:

1. Of great concern, if the AP failed--which appears to be the case based on OP's comments--the system needs to have more aggressive/robust system monitoring and failure alerting.

2. Given the OP's and car's inexperience with AP, a more cautious approach would have had the distance set to far more than "2," and the guidance from the DS to set it at "2" wasn't good advice. Having said that, it is a legitimate setting and the OP was well within norms to use that setting.

3. The idea of AP is to enhance the driving experience. If it works 999 times in a row, bringing the car to a safe stop, but then fails to do so on the 1,000th stop, that is a MAJOR problem for Tesla, notwithstanding the legal disclaimers. Having to intensely monitor AP to catch the 1-in-1,000 time it won't stop the car is an absurd proposition (see the back-of-envelope math after this list).

4. Tesla should give the OP a P90D loaner, impound the accident car, have Tesla engineers carefully review logs and measure equipment mounting and connectivity, and then remove all the applicable AP components and bench test them until it finds out what went wrong.
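To put rough numbers on point 3 (my own back-of-envelope figuring, not anything from Tesla): if you assume, hypothetically, a 1-in-1,000 per-stop failure rate and treat stops as independent, the odds of eventually seeing a failure climb fast:

```python
# Back-of-envelope: probability of at least one failure-to-stop over
# N stops, assuming (hypothetically) independent stops and a 1-in-1,000
# per-stop failure rate. Illustrative only; not based on Tesla data.
p_fail = 1 / 1000

for n_stops in (100, 1000, 10000):
    p_at_least_one = 1 - (1 - p_fail) ** n_stops
    print(f"{n_stops:>6} stops -> P(>=1 failure) = {p_at_least_one:.1%}")

# ~9.5% after 100 stops, ~63.2% after 1,000, ~100.0% after 10,000
```

In other words, under those assumptions a daily commuter is more likely than not to hit the failure within a year of driving.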

Trust me when I tell you this, as it's based on personal experiences that I need not get into in such a public forum (they were from events in 2013, dozens of SW revisions ago): Tesla is far from perfect; best to find errors early and quickly.
 
The driver is 100% responsible for the car. End of story.

What the automation did or did not do is a secondary discussion.

Personally, I find TACC/Autopilot to be the best and defining feature of the car. I use it every day. Perhaps those of us who had TACC alone for months before Autopilot was released had an advantage in getting used to how TACC worked. There was a learning curve. Even today, I often override TACC and brake manually to stay within my comfort zone. I also always use setting 7 for TACC. It's my comfort zone, and it's my responsibility to monitor the automation.

I fully support the OP in trying to understand what happened in his case. We need to know, so we can all become better users of the technology in the Tesla.
 
I've had a couple of close calls similar to this. The car started slowing down, but too gradually to stop in time; I hit the brake just as the collision warning came on, and the car stopped safely. All was well in the end, but it was clearly a TACC error.

I must say, though, that since a couple of updates ago I haven't seen this happen. Still, I don't trust it and am always watching it warily.

1. The two-car follow distance is way too close at highway speeds, TACC or no TACC. It simply doesn't leave enough time to react (see the rough math below).
2. I have my collision warning set to "early," which has worked to perfection, giving me an early warning in the few circumstances when TACC is a bit aggressive with braking distance.
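Rough math behind point 1 (my own figures; I'm assuming the follow setting roughly corresponds to car lengths, which Tesla doesn't actually specify):

```python
# Time gap implied by a two-car-length following distance at highway
# speed. Assumes ~15 ft per car length; all figures are illustrative,
# not Tesla specifications.
CAR_LENGTH_FT = 15.0

def headway_seconds(gap_car_lengths: float, speed_mph: float) -> float:
    """Time, in seconds, to cover the gap to the car ahead."""
    speed_ft_per_s = speed_mph * 5280 / 3600
    return gap_car_lengths * CAR_LENGTH_FT / speed_ft_per_s

print(f"{headway_seconds(2, 65):.2f} s")  # ~0.31 s at 65 mph
# A commonly cited driver perception-reaction time is ~1.5 s, so a
# two-car gap at 65 mph leaves far less time than a human needs.
```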
 
SPECULATION: If AEB is designed to reduce the severity of an unavoidable collision, perhaps there is a speed threshold below which it will not activate. For example, if you are about to hit an object at 55 mph, it will activate such that you only collide at 25 mph. On the other hand, if it calculates you will hit at 5 mph, it does nothing.

The rationale is that a 5 mph impact is not particularly hazardous to the health of the occupants of the car.

I could imagine that a system like AEB is designed to err on the side of less intrusiveness. Otherwise, it could induce more collisions than it avoids.

PERSONAL ANECDOTES:

I have used AP in Chicago expressway congestion (the Eisenhower) regularly for the last 2+ months. I always have the unitless following distance set to "1" (above ~30 mph, I turn off Autopilot or monitor closely). Prior to this thread, I had been pretty trusting of AP, sometimes getting distracted enough with my phone to leave the car entirely to its own devices (in congestion, under ~30 mph).

I appreciate OP's sharing of his experience. Since this thread, I've made a concerted effort to be less distracted =).

AP has continued to be quite reliable for me in my usage. The only "chink in the armor" that I have detected is that like many in this thread, I have also observed that it is slow to transition to tracking the "correct" car when lane changes occur in front. It is by good fortune that I have yet to incur any collisions from this behavior.
 
That's exactly what it does. It does not work below 5 mph. This is from the Owner's Manual:

"When Automatic Emergency Braking has reduced the driving speed by 25 mph (40 km/h), the brakes are released. For example, if Automatic Emergency Braking applies braking when driving at 56 mph (90 km/h), it releases the brakes when the speed has been reduced to 31 mph (50 km/h).
Automatic Emergency Braking operates only when driving between 5 mph (8 km/h) and 85 mph (140 km/h).
Automatic Emergency Braking does not apply the brakes, or stops applying the brakes, in situations where you are taking action to avoid a potential collision. For example:
• You turn the steering wheel sharply.
• You press the accelerator pedal.
• You press and release the brake pedal.
• A vehicle, motorcycle, bicycle, or pedestrian, is no longer detected ahead."
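As a toy sketch of those documented rules (the 5-85 mph window and the 25 mph reduction come straight from the manual; the function itself is my own illustration, not Tesla code):

```python
# Toy model of the AEB behavior quoted above: operates only between
# 5 mph and 85 mph, and releases the brakes once it has shaved off
# 25 mph. (The manual's driver-override release conditions, such as
# sharp steering or pedal input, are omitted here.)
AEB_MIN_MPH = 5
AEB_MAX_MPH = 85
AEB_REDUCTION_MPH = 25

def aeb_release_speed(speed_mph: float):
    """Return the speed at which AEB releases the brakes, or None
    if AEB would not operate at the given speed at all."""
    if not (AEB_MIN_MPH <= speed_mph <= AEB_MAX_MPH):
        return None  # outside AEB's operating range
    return speed_mph - AEB_REDUCTION_MPH

print(aeb_release_speed(56))  # 31 -- matches the manual's example
print(aeb_release_speed(4))   # None -- below the 5 mph floor
```

Note what this implies: AEB is a severity reducer, not a collision avoider, exactly as speculated above.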
 
Because everyone knows that TACC is not 100% reliable, which is why Tesla has told people never to rely on it to stop the vehicle.

The much bigger question is why didn't the driver stop the car?

That's one question, yes. But many of us would like to know if there's something in the system (primarily software) that could have made this less likely. If there is, and it can be improved, that would be great. The lack of transparency is what's causing concern.
 
Because everyone knows that TACC is not 100% reliable, which is why Tesla has told people never to rely on it to stop the vehicle.

The much bigger question is why didn't the driver stop the car?

In situations like the OP's, where the car is following another car that slows at a normal rate, I've had TACC stop the car reliably thousands of times. Undoubtedly, the OP was not monitoring the car closely enough to realize that TACC was about to fail and to intervene, which he has acknowledged. That is no question at all. The question is obviously why TACC failed in this particular situation, and it appears we will not get that answer from Tesla. One would assume that any accident that took place under Autopilot would be of great interest to Tesla, and one is left to wonder whether they know more than they are telling the OP (perhaps in order to prevent any question of liability).

Beating the drum for the case that the OP should have detected and prevented the crash is a waste of time. Personally, I appreciate the heads-up from the OP and am monitoring TACC very closely now, like I did in the early days. One hopes that behind the scenes Tesla has taken note of the TACC failure and is attempting to prevent it from happening again. We all know about the challenges when a car pulls into our lane or when approaching stopped traffic at a high rate of speed, but I personally have never experienced anything close to the scenario described by the OP.
 
Because everyone knows that TACC is not 100% reliable, which is why Tesla has told people never to rely on it to stop the vehicle.

The much bigger question is why didn't the driver stop the car?

I want to reiterate here that I tried my damnedest to stop the car. As I've repeated numerous times in this thread (and you can choose to believe me or not), the system had been working flawlessly, and according to TM I didn't disconnect AP or TACC inadvertently. When I saw a collision was imminent, my (perhaps too slow?) reflexes took over and I slammed on the brakes. The alarm that went off was immaterial to the collision; it didn't help me or hurt me. I remember clearly the sickening feeling immediately preceding the collision.

At the time, I felt like the system had failed me. I was pissed at my car. It was only after reflecting on things that I concluded I was entirely at fault for being complacent, for assuming the car was essentially driving itself. I was inexperienced (and I guess stupid) about the possibility that the car might not stop.

I admit to feeling embarrassed that this happened to me (and seemingly no one else). If nothing else, perhaps this thread will alert people to be more careful and on guard.

Finally, as I watched yesterday's NFL playoff games (crazy), it seemed like there was a flood of commercials touting "emergency braking." I felt they were somehow mocking me. So should I conclude that my MS does not have this feature? In the BMW commercials, it seemed like this feature stops their cars on a dime. What's different with the MS?
 
+++1 to the Human Factors post above.

Beating the drum for the case that the OP should have detected and prevented the crash is a waste of time.
Yes, that point has been made by many people, and the OP has acknowledged he was at fault. I commend him for taking responsibility, though by his own admission his first reaction when speaking to the other driver involved was to disavow responsibility.
The point has also been made repeatedly that Tesla is not going to publicly provide a detailed analysis of the accident based on the data stream recorded by the car. Some people think they should; I disagree, and see no reason why Tesla would do that, for reasons I posted above. Of course Tesla will analyze the data internally, but we will not learn the results of that analysis.
 

This is true, and the only remedy left to get closure is a legal one, which doesn't make financial sense for a fender-bender like this.
 
I will make the casual observation that a lot of the folks in this thread who are jumping up and down about driver responsibility might find themselves in a very similar situation. I have real questions about how long it takes anyone to evaluate sensory inputs and make the decision to take over control from the automated system, especially in a circumstance where the vehicle performs properly 999 times out of 1,000. That's one of the reasons why, even if my vehicle had this system, I don't think I would use it. I prefer to *know* that I am in control, rather than wonder whether the machine is going to fail.
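Some rough numbers on that takeover delay (my own assumptions; the 1.5-second figure is a commonly cited driver perception-reaction time, not anything measured in a Tesla):

```python
# Distance a car covers while the driver recognizes that the
# automation is failing and begins to take over. All numbers are
# illustrative assumptions.
def takeover_distance_ft(speed_mph: float, reaction_s: float = 1.5) -> float:
    return speed_mph * 5280 / 3600 * reaction_s

for mph in (30, 45, 65):
    print(f"{mph} mph -> ~{takeover_distance_ft(mph):.0f} ft travelled")
# 30 mph -> ~66 ft, 45 mph -> ~99 ft, 65 mph -> ~143 ft
```

And that's before the car even begins to slow.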

I know everyone (including myself) tends to think of themselves as an above average driver. Statistically, of course, that's nonsense, and the few times I've driven competitively have been humbling in that regard.

TL;DR: We are all living in glass houses, driver-talent-wise. We should be careful about the stones we throw.
 
Because everyone knows that TACC is not 100% reliable, which is why Tesla has told people never to rely on it to stop the vehicle.

The much bigger question is why didn't the driver stop the car?

This is the most infuriating thread. The driver didn't stop the car because he thought AP/TACC/SOMETHING in this amazing car was going to stop it automatically, like it had 500 times before. OK? Can we stop pretending that is a question?

At the same time, the driver has accepted full responsibility for the accident. So can we stop acting as if he didn't?

All he wants to know (and all the clear-headed people in this thread want to know) is: what happened? Did he accidentally turn TACC off? Did a sensor fail? Is there a corner case where the algorithm fails, and what might that corner be? You'd think Tesla would want to know, because it would help them make a car where this would be less likely to happen (whether it was the driver's fault or the car's).

Maybe I can break it down another way: LEGALLY, it's the driver's fault. Done, end of story. SCIENTIFICALLY, we don't know what happened and would like to find out, because all of us rooting for Tesla to become more and more successful want to see this happening less and less in the future, regardless of the drivers' competence.

All the posters smugly saying "it's the driver's fault, end of story" are the worst kind of blind "you're holding it wrong" fanboys - not only are you not helping the company you're such a fan of get better, you're making forward progress that much more difficult.

So can we please have a New Year's Miracle and let the rest of the posts in this thread be about why the crash happened, and not about whose fault it was? Because the latter question has been answered 50 times by the OP himself.