Three recent autopilot/cruise control accidents


Papafox
Some media stories have been implying that Tesla's autopilot is at fault in three recent accidents involving Summon, autopilot, or Traffic-Aware Cruise Control (TACC). Although the software likely worked as designed in all three cases, a different conclusion can be drawn from each incident.

The first accident was the accidental Summon command, where the Tesla driver likely thought he was placing the car in park but inadvertently pushed the park button twice, placing the car in Summon's move-forward mode.
Solution: Tesla very quickly took the lesson from this incident and upgraded the software so that a press of "forward" or "reverse" on the touchscreen is needed to activate a Summon command set up by a double-press of the park button. This is likely a solid fix for a problem that would otherwise recur: it is just too easy to double-press the park button when you intended a single press, and the additional step of pushing a button on the touchscreen should be sufficient to prevent accidental activation of Summon.
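For the software-minded, here is a rough sketch of the kind of two-step confirmation this fix amounts to. Everything here (the class, the names, the half-second window) is my own invention for illustration; it is not Tesla's actual code.

```python
# Hypothetical sketch of a two-step confirmation gate for Summon.
# None of these names or values come from Tesla's software; the point is
# simply that a second, deliberate input prevents accidental activation.
import time

class SummonGate:
    DOUBLE_PRESS_WINDOW = 0.5  # seconds; assumed value

    def __init__(self):
        self.last_park_press = None
        self.armed = False  # double-press detected, awaiting confirmation

    def on_park_button(self, now: float):
        """A double-press of Park now only *arms* Summon."""
        if self.last_park_press is not None and \
                now - self.last_park_press < self.DOUBLE_PRESS_WINDOW:
            self.armed = True  # the old behavior would start moving here
        self.last_park_press = now

    def on_touchscreen(self, direction: str) -> bool:
        """Movement requires a separate, deliberate touchscreen press."""
        if self.armed and direction in ("forward", "reverse"):
            self.armed = False
            return True  # begin creeping in the chosen direction
        return False

gate = SummonGate()
t = time.monotonic()
gate.on_park_button(t)
gate.on_park_button(t + 0.2)           # accidental double-press: car stays put
assert not gate.on_touchscreen("")     # still no movement without a confirm
assert gate.on_touchscreen("forward")  # only an explicit press moves the car
```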

The second accident involved a driver who saw her Model S approaching a stopped vehicle too quickly and (according to Tesla) touched the brake, which turned off the autopilot system. By the time the driver realized a collision was imminent and braked hard, it was too late to stop.
Solutions: Two possible solutions could work together to avoid such an accident. The first would be refining the autopilot software to begin slowing sooner, in the fashion most drivers would, when slow or stopped traffic is detected ahead. This might be harder to implement quickly than imagined, however, because the autopilot's ability to determine which lane stopped traffic occupies degrades with distance. Perhaps accumulated data on the twists and turns of highways can be used in the future to better determine which vehicles ahead are truly in the driver's lane.
The second possible solution involves the warning sound made when the autopilot disengages. In an airliner, when the autopilot disengages, an obnoxious, loud warning sounds. The good news is that the pilot has an autopilot-warning cutout button conveniently located on the control wheel, and the sound can be silenced in less than a second. The challenge for Tesla is finding the best compromise between alerting the driver to an autopilot disconnect and maintaining some peace within the car during the fairly frequent disconnects that can occur on a highway with marginal markings. My suggestion is to give the driver some choice in the loudness of the autopilot-disconnect warning, provide a temporary cut in the music while the warning plays, and give the driver a convenient warning-silence button so that the warning need not sound for more than half a second. That silence button should be convenient (which means somewhere on the steering wheel) but implemented so that inadvertent silencing is unlikely.
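To make that suggestion concrete, here is a toy sketch of the alert behavior I am describing: driver-selectable loudness, a temporary cut in the music, and a silence button with a small hold requirement so a stray brush of the wheel controls can't mute a warning the driver never heard. All names and values are hypothetical.

```python
# Toy sketch of the proposed disconnect-warning behavior; every name and
# number here is invented for illustration, not Tesla's implementation.

class Media:
    def duck(self):   print("music ducked")
    def unduck(self): print("music restored")

class DisconnectAlert:
    MIN_HOLD = 0.2  # seconds; assumed debounce against accidental silencing

    def __init__(self, media: Media, volume: float = 0.8):
        self.media = media
        self.volume = max(0.0, min(1.0, volume))  # driver-selectable loudness
        self.sounding = False

    def on_autopilot_disconnect(self):
        self.media.duck()     # temporary cut in the music while the tone plays
        self.sounding = True  # (a warning tone would play at self.volume here)

    def on_silence_button(self, hold_time: float):
        if self.sounding and hold_time >= self.MIN_HOLD:
            self.sounding = False
            self.media.unduck()

alert = DisconnectAlert(Media(), volume=0.5)
alert.on_autopilot_disconnect()
alert.on_silence_button(hold_time=0.3)  # silenced in well under a second
```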

The most recent accident involves a driver who was operating on Traffic-Aware Cruise Control (TACC) and hit a van that was parked on the side of the road with a good portion of it extending into the traffic lane. The driver is quoted as asking: TACC worked fine a thousand times before and stopped as needed, so why didn't it work this time? The likely answer is that TACC saw the traffic the Tesla was following turn and pass safely by the van, and treated that traffic as the pertinent source of speed information right up until collision with the van became imminent; by then there was not enough time to warn the driver, or for the driver to react.
Solution: I would categorize this accident as a driver-education issue. The driver assumed that the Tesla would stop if an obstacle lay ahead, and the Tesla assumed that since the driver was in charge of steering (lane-keeping was turned off), the driver would avoid the van by swinging around it, as the traffic ahead had done. The traffic ahead could swing into the other lane because that lane was empty, but the Tesla driver was not so lucky with his traffic situation. As TACC and lane-keeping improve, perhaps TACC will evaluate traffic in the other lane to see whether a swing around an obstacle is truly possible. That is not the current state of the software, however, and in this case the driver made assumptions about TACC that weren't correct. The best near-term solution is driver education, so that drivers understand the limitations of TACC. I suspect that if the full autopilot, including lane-keeping, had been engaged, it would have recognized the obstacle ahead, and since the autopilot is not permitted to swing into the other lane on its own, the Tesla would have stopped in time. Thus, TACC and autopilot would likely have produced different outcomes, and drivers need to understand the responsibilities involved in steering the car with TACC engaged.
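To put rough numbers on how little time TACC had once the lead car swerved, here is a back-of-the-envelope sketch; every figure is invented for illustration.

```python
# Toy illustration of the failure mode: TACC keys on the moving lead car,
# and the parked van only becomes the tracked object once the lead swerves
# away, by which point time-to-collision is very short.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact at constant closing speed; inf if opening."""
    return gap_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

ego_speed = 25.0  # m/s, roughly 55 mph
van_gap = 40.0    # metres to the parked van when the lead car swerves away

# While following the lead car, the van is not the tracked target. The
# moment the lead swerves, the stationary van becomes the closing object:
ttc = time_to_collision(van_gap, ego_speed - 0.0)  # the van's speed is zero
print(f"TTC once the van becomes the target: {ttc:.1f} s")
# 1.6 s for warning, driver reaction, and braking combined -- not much.
```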

Summary: Tesla autopilot is still a beta version. Since we all love our gorgeous Teslas and don't want them dinged, we need to anticipate the marginal situations that may be at the limits of what autopilot can do, and manually prevent the vehicle from progressing so far into a situation that the outcome becomes questionable. Of the three accidents, I feel the most empathy for the driver in the Summon accident, because I could see myself in those same shoes. Fortunately, Tesla has provided a solid fix for that setup for an accident, and it need not happen again. Let's stay on our toes and avoid pushing our own Teslas into those marginal situations where the outcome is unknown at the current state of the product's development.
 
Good observations. Here's my concern and theory on autopilot, from my post in another thread here after using autopilot a lot on a loaner car in the past 4 days:

I think AP may create a problem with the human mind that I will coin "Phantom Autopilot". This happens when you use autopilot a lot and you just expect the car to be driving itself, even when AP is not engaged. I don't even think better sound warnings, different graphics, etc. will make much of a difference in combating this phenomenon. I think the human brain gravitates toward patterns and reliance, and once it finds new ones, it expects them to be there even when they are not. Well, at least my brain does. I found myself expecting the car to react even when AP was off, which makes no logical sense, but it did happen and it concerns me. Maybe it's just me?
 

Yes, Canuck, what you describe is really one of the biggest pitfalls of treating a beta product as if it were a full-fledged autonomous driving solution. I think the solution is never to put so much trust in the autopilot that you allow your mind to disengage. That way, should the autopilot ever disengage without your awareness, you will still catch and respond to any unsafe trend quickly. The game I play on autopilot is to anticipate the marginal situations and be prepared to take over. I only trust autopilot for half a second at a time, and as long as I stay engaged, it will do me no harm.

That said, I find driving on autopilot relieves a huge amount of the effort of driving. I am far less stressed at the end of a 400-mile trip on autopilot than I would be hand-driving. Keeping the mind engaged and thinking "what if" is far less tiring than making thousands of small corrections to speed and lane keeping.
 
There is nothing beta about TACC. Tesla's implementation of adaptive cruise control is roughly the same as that of a good number of other manufacturers. It has its strong points and its weaknesses relative to other implementations.

There isn't anything unique or different about the Tesla when it comes to two of the accidents. We knew before Tesla's Autopilot hit the market that radar-based adaptive cruise control systems couldn't see a stopped car (one of the accidents), and hitting the brakes is how you disengage adaptive cruise control (at least in North America).
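For anyone who hasn't dug into why that is: a stopped car closes at exactly your own speed, the same as every overhead sign and manhole cover, so classic radar ACC filters out returns with near-zero ground speed as clutter. A toy illustration follows; it is my own simplification of the standard explanation, not any manufacturer's actual logic.

```python
# Why classic radar-based ACC ignores stopped cars: a target's ground
# speed = ego speed + relative (Doppler) speed, and returns with roughly
# zero ground speed look identical to roadside clutter, so they're dropped.
EGO_SPEED = 30.0  # m/s

radar_returns = [
    {"name": "moving lead car", "relative_speed": -5.0},   # closing slowly
    {"name": "stopped car",     "relative_speed": -30.0},  # closing at ego speed
    {"name": "overhead sign",   "relative_speed": -30.0},  # also closing at ego speed
]

for target in radar_returns:
    ground_speed = EGO_SPEED + target["relative_speed"]
    tracked = abs(ground_speed) > 2.0  # assumed clutter-rejection threshold
    print(f"{target['name']:15s} ground speed {ground_speed:5.1f} m/s -> "
          f"{'tracked' if tracked else 'filtered as clutter'}")
```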

So the question I have is whether there is something about the implementation of TACC in the Tesla that makes it inherently more prone to user mistakes (which both of these were), or whether both of these accidents are typical of adaptive cruise control systems in general. I'm not sure, and I'm not sure where I would go to look for such a specific statistic.

I do feel strongly that adaptive cruise control systems (from all manufacturers) should not automatically disengage the braking element of adaptive cruise when the user brakes. Instead, the only ways to turn it off (once engaged) would be to use the control switch, or to brake and then accelerate. Once you've accelerated, you know it's fully disengaged. I don't have any issues with the current implementation because I typically use a TACC setting of 5, and at that setting it starts braking fairly far in advance, so there is quite a bit of time between when I know I have to take over and it becoming an emergency. The person in California was likely using a really small setting, because in California they love driving bumper to bumper at 80 mph while motorcycles lane-split at 100 mph. It's definitely a great proving ground for semi-autonomous/autonomous driving technologies, and once they legalize weed it's going to be an even better proving ground.
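In sketch form, the disengagement rule I'm proposing looks something like this; it is purely hypothetical, not how any shipping system is wired.

```python
# Proposed rule: braking alone only *pauses* ACC (its braking authority is
# retained); a full exit requires the control switch, or brake followed by
# accelerator. Hypothetical sketch, not any manufacturer's implementation.

class ACCState:
    ENGAGED, PAUSED, OFF = "engaged", "paused", "off"

    def __init__(self):
        self.state = self.ENGAGED

    def on_brake(self):
        if self.state == self.ENGAGED:
            self.state = self.PAUSED  # driver input overrides, but the
                                      # system can still brake for obstacles

    def on_accelerator(self):
        if self.state == self.PAUSED:
            self.state = self.OFF     # brake-then-accelerate = deliberate exit

    def on_switch(self):
        self.state = self.OFF         # an explicit control always wins

acc = ACCState()
acc.on_brake()
print(acc.state)  # "paused", not "off" -- a brake tap can't silently kill it
acc.on_accelerator()
print(acc.state)  # "off" -- the driver has clearly taken over
```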

I don't want large, obnoxious sounds when turning off adaptive cruise control. But I do acknowledge that the switch-off needs to be more transitional, since the user is moving from being pretty far removed from the driving task to suddenly being thrust back into it.

Most of all, I'd like a warning a couple of minutes before approaching a massive slowdown. When I drive I'm usually aware of one coming, since I look for it on the navigation and in Waze (in the browser). But it would be nice if the car told me on the instrument cluster that a slowdown was approaching.
 
Sr4wrxttcs, how have you arrived at the conclusion that Tesla's radar cannot see a stopped vehicle? My experience doesn't jibe with this explanation.

You mention a good point about providing some braking protection even if TACC is deactivated. Again, the tough call for braking is the orientation of the obstacle ahead. At one extreme, if the obstacle is so far to the side that it can be avoided with minimal maneuvering, then aggressive braking would likely cause more accidents than it avoids. At the other, if the obstacle is directly ahead of the Tesla and clearly blocking the lane, aggressive braking would be prudent. The tough call is those situations where the obstacle extends into the side of the lane and the driver's ability to avoid it is in question. I would think those situations are still a work in progress and will be for some time, and that the van partially extending into the lane fell into this grey area as far as TACC was concerned, because the transition from tracking the previous vehicle to providing a collision warning for the van happened with little time to spare.
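Here is that grey area in toy form. The thresholds are entirely my own invention, and the real difficulty is exactly that nobody yet knows where those lines should be drawn.

```python
# Toy decision rule: how hard to brake depends on how much of the obstacle
# intrudes into the lane. The thresholds are invented for illustration.

def brake_response(lane_overlap: float) -> str:
    """lane_overlap: fraction of the obstacle's width inside our lane, 0..1."""
    if lane_overlap < 0.15:
        return "no action"    # easily avoided; hard braking here would cause
                              # more rear-end accidents than it prevents
    if lane_overlap > 0.85:
        return "brake hard"   # clearly blocking the lane
    return "warn driver"      # the grey zone: can the driver swing around it?

for overlap in (0.05, 0.40, 1.00):
    print(f"overlap {overlap:.0%}: {brake_response(overlap)}")
```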
 
I have adaptive cruise in my current car and I use it extensively. However, it does have its limits. If the car I'm following at speed swerves to avoid a stopped car, the cruise is not able to stop in time and issues a warning beep that I should take over. I view the adaptive cruise as a "driver assist" feature. Even though Autopilot is taking this to a new level, I still think that I will view the tech as a driver assist feature only.

Hence my thought that "Autopilot" is probably misnamed and the name should be reserved for level 4 autonomy. I get that Tesla's version is currently the best in the business and I completely understand the marketing involved. But I do worry that these systems as currently implemented can lull a driver into a false sense of security.

Long and short...I plan on keeping at least one hand on the wheel as recommended.
 
How useful is it to draw parallels from aviation to AP? I'm only an interested amateur, but it seems relevant to me. In aviation automatic systems have contributed to reducing accident rates overall, while at the same time contributing to a small number of tragic accidents. Manufacturers and regulators take those accidents very seriously, which ultimately improves the next generation of automated systems. As a result it's much safer today to travel by air than by road, despite the enormous difference in speeds and potential worst case scenarios.

I think this passage from Improving the Continued Airworthiness of Civil Aircraft doesn't have to be stretched much to apply to AP, and sets the right tone:

Automated features of flight control systems can improve situational awareness by reducing crew workload. However, automated actions that compensate for unusual flight conditions or equipment malfunctions can reduce situational awareness if the automated system masks the presence of abnormalities or does not clearly indicate what actions it is taking in response.

Aircraft must be designed so that, for all situations the flight crew can reasonably be expected to encounter, it will have the data it needs in an easily recognizable form that facilitates proper decision making. Furthermore, the aircraft should be designed to help the flight crew carry out necessary tasks, especially in emergencies when things are not as expected and safety depends on quick and correct actions by the flight crew.​
 

Good points. I think aviation is the most useful analogy when considering Tesla autopilot issues because autoflight has been used for so long that it has evolved significantly, though sometimes after very painful events. No doubt there are cases of an autopilot, or some function of an autopilot, tripping off and an accident resulting because the crew did not notice. The Eastern Air Lines Flight 401 crash into the Everglades in 1972 comes to mind, as the plane slowly descended under control all the way to impact. There's no reason for the automotive industry to repeat the mistakes of aviation. If the designers of Tesla's autopilot can draw parallels with aviation events, they can build safeguards into this new technology before the need for them has been demonstrated through bent metal.
 
Good points. I think aviation is the most useful analogy when considering Tesla autopilot issues because autoflight has been used for so long that it has evolved significantly....

How interesting that people think aircraft autopilot is an analogy for Tesla Autopilot, yet completely miss that a pilot is required to have many hours of training, real and simulated, on every instrument in the cockpit, while a Tesla driver (or the driver of any other brand) simply needs a current license, which at one time required only a few very simple questions and a ten-minute ride with a DMV examiner. There's the problem. No brain required. A one-hour quick talk from a DS does NOT make you qualified. Most (my daughter included) don't even remember what the DS said.
 
Accident #2 is interesting because I've seen TACC, and the ACC in my BMW, behave very conservatively in that sort of situation, often slowing abruptly when a car, typically stopped or slowly making a left turn, has any portion of itself in my lane. I can see how this driver expected the car to stop, and from the video I saw, it hit the parked car pretty softly and was going pretty slowly. That one is a bit of a mystery.

Accident #3 was pure pilot error. I think that driver instinctively touched the brake pedal, failed to realize that TACC was now disabled, and allowed the car to carry on too long; essentially the "Phantom Autopilot" phenomenon Canuck suggested.

Overall, I think we're going to see more of these accidents. When auto-steering is engaged, the driver has to sit back a bit; we can't grab the wheel every time the car wanders a little. But stepping back means we have to draw a line for when we will re-engage and take control, and that line will be in different places for different drivers. For those who give it too long a leash, there are going to be collisions. At the same time, there will probably be a few "saves" where the human driver might not have reacted quickly enough and the AP does.

There is no question in my mind that Tesla has overplayed the feature by calling it "autopilot". On my boat, engaging the autopilot can absolutely be safer than steering manually. It is much easier for a human to get disoriented, read instruments incorrectly, and do the wrong thing than it is for a bunch of sensors. I would rather have the autopilot steer the boat in bad weather at night than a human. There are many stories of injuries from accidental jibes (an unintended change of direction that causes the boom to swing across the boat) at night in tough conditions. But if I tell the boat's AP to steer a relative angle to the wind, it will hold a course relative to the wind better than 99.9% of human helmsmen. It will never get confused about where the boat is in relation to the wind.
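For contrast, here is roughly the entire control problem my boat's autopilot has to solve, as a bare-bones proportional sketch (the gain and limits are invented numbers). The point is how small and well-posed this problem is compared to driving a car.

```python
# Bare-bones wind-angle hold: steer the rudder in proportion to the error
# between the target and measured apparent wind angle. Gain and rudder
# limit are invented numbers for illustration.

def wind_hold_step(target_angle: float, measured_angle: float,
                   gain: float = 0.5, max_rudder: float = 25.0) -> float:
    """Return a rudder command in degrees to hold the apparent wind angle."""
    error = target_angle - measured_angle
    return max(-max_rudder, min(max_rudder, gain * error))

# A gust shifts the apparent wind 10 degrees; the pilot corrects instantly,
# with no disorientation and no misread instruments.
print(wind_hold_step(target_angle=40.0, measured_angle=50.0))  # -5.0
```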

My experience with the Tesla AP is mostly the opposite: if the conditions get dodgy, it's not to be trusted, and I quickly dump the AP and drive the car myself. In the right situations it's very reliable and really makes for some relaxing driving; in others it just needs to be shut off.

Make no mistake, the software is in beta and we are beta testers. I believe it will always be in beta on our cars and taking the AP to the next level will require better sensors, better processors, and better software, i.e., new hardware and software. I just hope Tesla will give us an upgrade path to buy and install the hardware.
 
The thing that bothers me most is the fact that so many people do not understand the concept of beta, or the difference between driver assistance and autonomy, so we all risk losing what we have now. :(

Every Tesla owner had to acknowledge understanding that autopilot is still beta before using it, but I think most are used to signing off on things like that without really comprehending the consequences. I don't know the answer, because there are folks here complaining about the nags and limitations that have been added since it was introduced. Perhaps a mandatory class on using autopilot should be required before a Tesla owner is even allowed to turn it on.

Although I "get it", it took a few uneasy moments, some twiddling with settings (I settled on a following distance of 7 and 5 mph over the limit), and learning when to turn it on and off. Still, I never let my guard down. It is a new way of driving, and I don't want to go back. Autopilot is a feature that improves the driving experience where it would normally be either boring or stressful.
 
I wish they wouldn't have called the feature "autopilot", as many people confuse that phrase with autonomous driving. I use the feature quite a bit when it's appropriate. The best way to understand what this feature is, is to go back to the P85D intro, at about 11:40 of the video, where Musk says it's not autonomy, it's "active safety".

When you think of it this way, you see that people who just take their hands off the wheel and stop paying attention are not only ignoring the fact that this is a beta system, they're ignoring that it was simply never intended to be "autopilot". I know there's a lot more marketing pizzazz in the term "autopilot", but people need to see this as active safety and use it as such.
 
I'm noticing creep when I put it in reverse that didn't seem to be there before, and more active rear corner sensors. Freeway auto-steer was flawless, though it does tend to work really well in the 880 corridor around the Tesla plant. It's actually behaved very well on most freeways over the past few days, and I'm wondering how much it's using all the mapping and driving data. I also use auto-steer a lot on Route 238 (Mission Blvd) and it works well, but in busy traffic I watch it closely. I've auto-steered Niles Canyon (State Route 84) but had to intervene on the tight-radius turns near the Fremont-side overpasses. I was amazed it did that well. I need to try again with the new firmware and see if it's getting better.
 
Has anyone else noticed that on the most recent update, Autopilot is aware sooner of cars entering the lane you are traveling in? The result is a more gradual slowdown and earlier warning to cars that may end up in your path.

Yes, same observation, and I am deliberating on whether it is combined with a greater follow distance. The van crash may have made the engineers want to expand the gap; that's my theory. I'm concerned about being cut off more frequently now.

Going to go a bit OT, but one way to become a better performance driver is to "see your speed" so you can judge brake zones on a track or an autocross course. Some people seem to be born with good depth perception, or they learn how to combine it with their sense of speed. Others don't. How do you create AP for both of these people?
 
Autopilot is a great name for people who understand what autopilot means in an airplane. Unfortunately, a lot of the general public seems to think autopilot = autonomous.

I don't see Tesla changing the name now. Eventually Tesla will get to an autonomous car, so the name will probably stick.
 
From your lips to God's ears. If they don't, people will just keep having wrecks in AP 1.0 cars. Some will die. I'm hoping Tesla sees this and feels a responsibility to save those lives.

I was hoping the S60s going into the CPO program were a taste of things to come. Supercharging was unlocked after trade-in. In other words, as cars go into CPO, they would be retrofitted with AP 2.0 to be re-sold.
 
I believe it will always be in beta on our cars and taking the AP to the next level will require better sensors, better processors, and better software, i.e., new hardware and software. I just hope Tesla will give us an upgrade path to buy and install the hardware.
I rate the chances of that as essentially zero. Whatever the next generation of AP sensors turns out to be, there will be significantly more and different sensors than are currently on the cars, which means a different wiring harness and different external body parts for mounting the new sensors.

When AP V1 debuted in September 2014, Tesla did not offer an upgrade path for cars built before that date.