
Tesla Releases Data on Utah Autopilot Crash

Last week, a woman in Utah crashed her Autopilot-enabled Tesla Model S into the back of a parked fire truck at 60 mph. The car was totaled, but the woman escaped with only a broken ankle.

During the investigation of the crash, the woman admitted that she was looking at her phone at the time of the accident. The crash is under investigation by local law enforcement and by the National Highway Traffic Safety Administration.

Tesla agreed to cooperate with investigators and on Wednesday, the South Jordan Police Department shared details from data recovered on the car’s computer.

Technicians from Tesla successfully recovered the data from the vehicle. According to Tesla's report, shared in a press release from the police department, the vehicle indicated:



The driver engaged Autosteer and Traffic Aware Cruise Control on multiple occasions during this drive cycle. She repeatedly cancelled and then re-engaged these features, and regularly adjusted the vehicle's cruising speed.

Drivers are repeatedly advised Autopilot features do not make Tesla vehicles "autonomous" and that the driver absolutely must remain vigilant with their eyes on the road, hands on the wheel and they must be prepared to take any and all action necessary to avoid hazards on the road.

The vehicle registered more than a dozen instances of her hands being off the steering wheel in this drive cycle. On two such occasions, she had her hands off the wheel for more than one minute each time and her hands came back on only after a visual alert was provided. Each time she put her hands back on the wheel, she took them back off the wheel after a few seconds.

About 1 minute and 22 seconds before the crash, she re-enabled Autosteer and Cruise Control, and then, within two seconds, took her hands off the steering wheel again. She did not touch the steering wheel for the next 80 seconds until the crash happened; this is consistent with her admission that she was looking at her phone at the time.

The vehicle was traveling at about 60 mph when the crash happened. This is the speed the driver selected.

The driver manually pressed the vehicle brake pedal fractions of a second prior to the crash.

Contrary to the proper use of Autopilot, the driver did not pay attention to the road at all times, did not keep her hands on the steering wheel, and she used it on a street with no center median and with stoplight controlled intersections.



Police said the driver of the Tesla was issued a traffic citation for failure to keep proper lookout under South Jordan City municipal code 10.28.030 (traffic infraction).

“As a reminder for drivers of semi-autonomous vehicles, it is the driver’s responsibility to stay alert, drive safely, and be in control of the vehicle at all times,” the release said. “Tesla makes it clear that drivers should always watch the road in front of them and be prepared to take corrective actions. Failure to do so can result in serious injury or death.”

NHTSA continues to conduct its own review of this incident.

 
I thought AP1 did not have a radar?

Yes. It is located in the lower front grille, where rain, snow, ice, and, in this case, a moth like to hang around.

 
So 60 mph on an undivided highway with stop lights, on autopilot. "I'm in a big hurry, but I need to do stuff on my phone so I sure am glad autopilot will take care of driving for me..." (NOT)

But still: why did no braking occur if emergency braking was enabled?
 
Take a look at the antenna pattern on page 10 of http://www.ti.com/lit/ug/tidudq6/tidudq6.pdf. You will see that the vertical resolution is about 35 degrees and horizontally it is about 80 degrees. At a distance of 60 feet, this gives you a pixel that is 36 feet tall and 84 feet wide. That is a whole lot bigger than a fire truck (thus no "vertical step change" in that case; quite a gradual blob, in fact), and it would certainly encompass any overhead signs or signal poles. That's at 60 feet, and of course the pixel grows in direct proportion to the distance. At 60 MPH you travel 60 feet in 0.682 seconds, and the stopping distance for the Model S at 60 MPH is 118 feet (maximum braking), so to avoid this collision you must completely recognize the danger at least 118 feet out. At that distance the pixels are 71 feet by 165 feet.

Consider another common scenario: stopped cars ahead in the lane to the right and to the left but none directly in front. Scan your radar across this (say 100 feet ahead) and you will conclude all lanes are blocked. Slam on the brakes?

Hopefully you can see the challenges with false alarms for stationary objects as far as radar goes, because the world is full of stationary objects, the probability of very large metal ones in the antenna side lobes is quite high, and the main lobe is also necessarily quite large. It is a fact that clutter is a problem for all radars. Moving objects, which can be distinguished by Doppler shift, are much, much easier because the clutter goes away. Cameras have the potential to solve this, and lidar certainly can. Tesla's camera system is good at recognizing most stopped vehicles, though far from 100% today, and it is improving.
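
For anyone who wants to check those numbers, here is a quick back-of-the-envelope sketch. The beamwidths are taken from the TI reference design linked above, so this is illustrative only, not a claim about Tesla's actual radar:

```python
import math

# Beamwidths from the TI reference design linked above (illustrative
# numbers, not necessarily Tesla's actual hardware).
V_BEAM_DEG = 35.0   # vertical beamwidth, degrees
H_BEAM_DEG = 80.0   # horizontal beamwidth, degrees

def footprint_ft(distance_ft, beam_deg):
    """Approximate the beam footprint as an arc: size ~= distance * angle (radians)."""
    return distance_ft * math.radians(beam_deg)

for d in (60.0, 118.0):
    print(f"at {d:.0f} ft: ~{footprint_ft(d, V_BEAM_DEG):.0f} ft tall x "
          f"~{footprint_ft(d, H_BEAM_DEG):.0f} ft wide")
# at 60 ft:  ~37 ft tall x ~84 ft wide   (the post's ~36 x 84, within rounding)
# at 118 ft: ~72 ft tall x ~165 ft wide  (the post's ~71 x 165)

speed_ftps = 60 * 5280 / 3600   # 60 mph = 88 ft/s
print(f"60 ft is covered in {60 / speed_ftps:.3f} s at 60 mph")  # 0.682 s
```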
That is describing the field of view of the radar. The multiple beam scanning happens within that field of view.

This is a good overview of how these systems work.
AWR1243 sensor: Highly integrated 76–81-GHz radar front-end for emerging ADAS applications (Analog and Mixed-Signal, spyy003) - TI.com
Resolution can be about 1 degree horizontally and 14 degrees vertically depending on the design. At 40 m the horizontal resolution is about 70 cm.

I have no idea what the actual hardware in Teslas is, but this is typical of automotive radar.
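
As a sanity check on that figure (the 1-degree and 14-degree beamwidths are the ones quoted above, not measured Tesla values), cross-range resolution is roughly range times angular resolution in radians:

```python
import math

range_m = 40.0
for beam_deg in (1.0, 14.0):    # horizontal and vertical figures quoted above
    res_m = range_m * math.radians(beam_deg)
    print(f"{beam_deg:g} deg beam at {range_m:g} m -> ~{res_m:.2f} m across")
# 1 deg  -> ~0.70 m, matching the ~70 cm quoted
# 14 deg -> ~9.77 m vertically at the same range
```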
 
Amazing complaints here. This is an L2 level feature, not L3, L4, or L5. If the algorithm did not have any false positives or false negatives, it wouldn't be only L2.

The fact that the sensors' and algorithms' accuracy requires a tradeoff between false positive and false negative events is not surprising at all. If you want to argue that L2 should not be legal because people will violate proper and safe usage, fine; that is a regulatory matter, not a Tesla matter.
 
I know, but given the number of cars on AP that ran into stationary trucks recently, obviously they're not there yet (I don't buy any BS that Tesla cannot release the ability to recognize a parked fire truck because of regulatory approvals).

Well, they obviously need the ability to tell the difference for FSD (when it gets released). I don’t know where they are at in terms of releasing this “ability” for EAP.
 
Well, they obviously need the ability to tell the difference for FSD. I don’t know where they are at in terms of releasing this “ability” for EAP.
Maybe there should be a big warning whenever one enables auto-pilot stating that the car can't tell a soda can apart from a fire-truck, hence it will drive into parked trucks, or concrete medians, when using AP.
 
why did no braking occur if emergency braking was enabled?

I'm no expert, and would appreciate correction of anything I get wrong, but my understanding is:

AEB is always "engaged" ...

AP cannot see a stationary object, so if the Tesla was following another car, and that car moved to another lane (e.g. that driver having seen the fire truck blocking the lane), that would then "reveal" the stationary fire truck to Tesla AP, but AP would do nothing about it (AP not being capable of telling the difference between a stationary fire truck and any other road furniture).

If AEB did react, that would not happen "until an impact was inevitable", and as such it is not intended to avoid the accident, but rather to lessen the impact. That may have happened (and the driver then took over and braked themselves - which I also understand to be necessary, as AEB will not carry on braking [to a stop / below a certain speed], so that "final" braking is required to be done by the driver).

Whether we can deduce that AEB did not work, or did not start braking soon enough, given the damage to the vehicle, I don't know, but if the driver was distracted / Texting (as they have admitted) then maybe the only reason they looked up, and started braking, was because AEB (or maybe even AP) was already slowing the car dramatically.

If, OTOH, a Tesla was following a fire truck, on AP, and the fire truck THEN slowed to a stop, then AP would do likewise.
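
To illustrate the "only when an impact is inevitable" behavior described above, here is a toy time-to-collision sketch. Every name and threshold in it is hypothetical; this is not Tesla's algorithm, just the shape of the logic as I understand it:

```python
TTC_TRIGGER_S = 1.0   # hypothetical "impact is inevitable" threshold, seconds

def aeb_should_brake(range_m, closing_speed_mps):
    """Brake only when time-to-collision falls below the trigger threshold."""
    if closing_speed_mps <= 0:
        return False                      # opening or matched speed: no threat
    return range_m / closing_speed_mps < TTC_TRIGGER_S

ego = 26.8                                # ~60 mph in m/s
print(aeb_should_brake(80.0, ego))        # False: TTC ~3 s, no braking yet
print(aeb_should_brake(25.0, ego))        # True: TTC ~0.9 s, brake to lessen impact
# Note: even firing this late, stopping from 60 mph needs ~36 m (118 ft),
# so the crash is mitigated, not avoided - consistent with the post above.
```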
 
Tesla's AEB can begin braking at speeds up to 90 mph. Euro NCAP's AEB Inter-Urban test calls for a max speed of 50 mph, and I think NHTSA will follow that maximum. I found that Euro NCAP had listed a 2014 Tesla Model S, but at that time AEB wasn't available, so it wasn't tested. NHTSA did once study AEB performance (with a 2015 Model S), but it only tested up to 45 mph and the result wasn't perfect. Besides, two thirds of the tests were run with Autopilot on, and at <45 mph Autopilot alone can slow down and stop the car effectively without invoking AEB (AP's max relative tracking speed is 50 mph).

Who has tested, or will test and certify, a 90 mph AEB? Consumer Reports, or the owners?
 
This warning is from Page 75 in the Model S Owner's Manual...

"Warning: Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles or objects, especially in situations when you are driving over 50 mph (80 km/h) and in situations where a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you. Always pay attention to the road ahead and stay prepared to take immediate corrective action. Depending on Traffic-Aware Cruise Control to avoid a collision can result in serious injury or death. In addition, Traffic-Aware Cruise Control may react to vehicles or objects that either do not exist or are not in the lane of travel, causing Model S to slow down unnecessarily or inappropriately."
I highlighted in RED the parts I have a question about.

My experience is that since 2018.10.4 the ability to STOP for stationary vehicles has greatly improved (works 99% of the time). WHY? I have the same hardware, so of course it must be a software improvement; RADAR should not have anything to do with it. So assuming this car was on 10.4 or later, what else would have prevented the car from stopping? SPEED? The car is reported to have been going > 50 mph (reported as 60 mph).

I know that 10.4 greatly improved stopping when driving < 50 mph (my case is mostly < 45 mph). Is the problem with doing the same at 60 mph (the case in this accident) related to RADAR/camera or CPU SPEED? Those of you who feel the problem is with the hardware (RADAR and/or camera): are you saying it is only a problem at > 50 mph, and that if this driver had been going, say, 45 mph the car would probably have stopped?
 
Maybe there should be a big warning whenever one enables auto-pilot stating that the car can't tell a soda can apart from a fire-truck, hence it will drive into parked trucks, or concrete medians, when using AP.

I get the "false positive" argument with radar, but Tesla has a bunch of cameras, too. I think a fifth grader could probably write the code to match a picture of a firetruck with something that looks the same that's standing still while the car is approaching it at a high rate of speed. If they can draw reindeer on the dashboard, surely someone at Tesla has the skill set to figure this out.
 
I get the "false positive" argument with radar, but Tesla has a bunch of cameras, too. I think a fifth grader could probably write the code to match a picture of a firetruck with something that looks the same that's standing still while the car is approaching it at a high rate of speed. If they can draw reindeer on the dashboard, surely someone at Tesla has the skill set to figure this out.
While on the surface I can see how such an argument would appear to make sense, it is completely flawed. It's like saying a toddler can walk, climb, jump, so it should be trivial to make a robot do the same. It's not. If you really think you could create such a vision system, do it, you will be really rich when you sell it.

My point was different, by the way: it was that Tesla should give a plain warning UNTIL they can get their system to work.
 
So 60 mph on an undivided highway with stop lights, on autopilot. "I'm in a big hurry, but I need to do stuff on my phone so I sure am glad autopilot will take care of driving for me..." (NOT)

But still: why did no braking occur if emergency braking was enabled?
Read the manuals of other systems. You see the same issue. From the 2018 Infiniti Q70 owner’s manual:

The radar sensor will not detect the following objects:
- Stationary and slow moving vehicles

The Doppler shift for the road is the same as for the stopped vehicle. It is a problem of missed detections vs. false positives. Imagine if, every time you approached a hill, the car started to brake automatically.
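
A rough calculation shows why. Note the 77 GHz carrier is just a typical automotive radar frequency, an assumption rather than a spec for any particular car:

```python
C = 3.0e8           # speed of light, m/s
F_CARRIER = 77e9    # typical automotive radar carrier, Hz (assumed)

def doppler_hz(closing_speed_mps):
    """Two-way Doppler shift: f_d = 2 * v / wavelength."""
    return 2 * closing_speed_mps / (C / F_CARRIER)

ego = 26.8                        # ~60 mph in m/s
print(doppler_hz(ego))            # road surface / signs / hills: ~13.8 kHz
print(doppler_hz(ego))            # stopped fire truck: the exact same ~13.8 kHz
print(doppler_hz(ego - 25.0))     # lead car doing ~56 mph: ~0.9 kHz, trivially separable
```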
 
Maybe there should be a big warning whenever one enables auto-pilot stating that the car can't tell a soda can apart from a fire-truck, hence it will drive into parked trucks, or concrete medians, when using AP.

If that’s the case, the same should be said about all other manufacturers. Others have worse-ranked systems and more false positives. We hear about the few Tesla crashes, yet we are totally oblivious to the crashes of other cars.

Guide to Automatic Emergency Braking

Besides...what good would another warning label do if no one reads the warnings already glaringly obvious when enabling AP?

Sorry to say, but no matter what is done, there will always be some idiot exploiting the system. If you want to run a business that builds something like this while guaranteeing an absolute zero percent chance of failure, good luck with that.
 
While on the surface I can see how such an argument would appear to make sense, it is completely flawed. It's like saying a toddler can walk, climb, jump, so it should be trivial to make a robot do the same. It's not. If you really think you could create such a vision system, do it, you will be really rich when you sell it.

It's your analogy that's problematic. Nobody was comparing the human brain to a robot. I was suggesting the use of a camera to supplement the limited capability of Tesla's radar. You are aware of facial recognition software, correct? One would think it might be easier to pick out a stopped fire truck or other vehicle using a camera. The code probably already has been written. If you then match that against the possibly false positive produced by the radar, the decision tree for Tesla's emergency braking algorithm should be fairly obvious: fire truck, stop; hilltop, keep on truckin'.
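
For what it's worth, the cross-check being proposed here amounts to something like the sketch below. All names are hypothetical, and the replies that follow explain why the hard part, the camera classifier itself, is nothing like fifth-grade code:

```python
def brake_decision(radar_sees_stationary_object, camera_confirms_vehicle):
    """Naive fusion: use the camera to confirm or veto a stationary radar return."""
    if radar_sees_stationary_object and camera_confirms_vehicle:
        return "stop"                # fire truck in the lane: brake
    return "keep on truckin'"        # hilltop, overhead sign, empty lane: carry on

print(brake_decision(True, True))    # "stop"
print(brake_decision(True, False))   # "keep on truckin'"
```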
 
You are aware of facial recognition software, correct?
Ah yes, facial recognition is a solved problem.
One would think it might be easier to pick out a stopped fire truck or other vehicle using a camera.
One might think so, although my bet is that one is less and less likely to think so the more one's professional expertise in real-time image processing increases. And as someone points out upthread, once the system can do this with extremely high accuracy, it'll be most of the way to the unicorn of full self driving. But so far, no unicorn.
 
Well, we will then have to respectfully disagree, or at least agree that there are just as many differences between flying around the middle of Texas and flying near Atlanta as there are between driving in North Dakota and driving in NYC. If you think you have minutes to avoid a mid-air collision with a small plane that isn't on Air Traffic Control's radar, I would highly advise you rethink your see-and-avoid strategy. This really applies anywhere, even if it is less likely in less populous areas.

I have flown planes and helicopters for nearly 15 years near military operations areas, tour helicopters, banner towers, and general aviation (including lots of student aviation), and while my non-flying pilot can tune radios and not always be looking outside, the flying pilot should have hands on the controls and eyes outside at all times. Not doing so can lead (and has led) to disastrous consequences. I also fly single pilot, and that means I use autopilot to the maximum extent possible, which is also something Autopilot in a Tesla is great at... helping to maintain lanes if/when you need to glance away momentarily... NOT reading Facebook or texting on your phone.