
Another tragic fatality with a semi in Florida. This time a Model 3

Lex Fridman's work at MIT showed that Tesla drivers are more attentive while on Autopilot

This would be me, though not just more attentive - more nervous, maybe? I find that the car still does not anticipate the way I would want it to (the way I try to do). It still sees cars in front of it that are not there (cue the dancing cars), it cannot see (based on what it shows on the screen) as far forward as I can, and it does not yet bring the same experience I have to decision making - e.g. I see a car with a driver looking down (phone? food?) and automatically mark that car as something to keep an extra eye on.

To me, it still comes down to this: it is not yet a self-driving car, and inattentive drivers, or misunderstanding of the naming (Autopilot, etc.), or both, will probably continue to lead to more tragedies down the road.

I will love the day when I can have the car take me cross country with little or no input from me, but I'm also pessimistic it will happen 'soon'. Paying attention to your own car and all your surroundings is still the safest path, *I think*, at this time; the software and other devices are aids, not replacements - the driver must take the task of driving seriously. Sadly this seems to escape too many drivers, and did so long before Autopilot; people still engage in ill-advised behavior while driving, clever cars or not.

All due sympathies to those who have experienced recent losses. Sadly, every day Route 17 near me sees a number of possibly avoidable accidents, some quite serious indeed, but aside from showing up in the Route 17 traffic Twitter feed they are all too 'normal' an occurrence and don't show up much outside the general San Jose/Santa Cruz area news, if that.
 
While Tesla works to refine its self-driving capabilities, it is also working in parallel to reduce the risks caused by driver error when the driver is in control of the vehicle.

We have seen several videos posted recently demonstrating this capability during lane-drift situations (the Tesla moves slightly to the right or left to avoid a collision with a vehicle entering its lane), regardless of whether or not autosteering is enabled or configured. The new lane assist features are also an example of this. There is an almost unlimited number of scenarios in which Tesla could enhance this kind of functionality.

If multiple sensors identify an obstacle (thereby reducing the chance of a false positive), why should the car, for example, allow 100% forward acceleration and full power to the wheels? This is common when the operator confuses the accelerator with the brake - one such case here: Tesla Model 3 Crashes Into Dry Cleaners, Does Major Damage: Video, where a Tesla smashed through a storefront.
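
To make the idea concrete, here is a rough sketch (Python, with invented names and thresholds - this is not Tesla's actual control code) of how corroborated detections from multiple sensors could cap the torque delivered even when the pedal is floored. The clamp only engages when independent sensors agree, so a single sensor's false positive would not interfere with normal driving.

```python
# Illustrative sketch only -- all names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str          # e.g. "camera", "radar", "ultrasonic"
    obstacle_ahead: bool # obstacle detected in the immediate path
    distance_m: float    # estimated distance to the obstacle

MAX_TORQUE_FRACTION = 0.2   # cap on drive torque when an obstacle is corroborated
CONFIRM_SOURCES = 2         # require at least two independent sensors to agree
NEAR_FIELD_M = 5.0          # only clamp when the obstacle is very close

def limit_accel_command(pedal_request: float, readings: list[SensorReading]) -> float:
    """Clamp the driver's accelerator request if multiple sensors agree an
    obstacle sits directly ahead at close range (e.g. a storefront wall)."""
    corroborating = {
        r.source for r in readings
        if r.obstacle_ahead and r.distance_m < NEAR_FIELD_M
    }
    if len(corroborating) >= CONFIRM_SOURCES:
        return min(pedal_request, MAX_TORQUE_FRACTION)
    return pedal_request

# Example: pedal floored (1.0) while camera and ultrasonics both see a wall ~2 m ahead
readings = [
    SensorReading("camera", True, 2.0),
    SensorReading("ultrasonic", True, 2.1),
    SensorReading("radar", False, 0.0),
]
print(limit_accel_command(1.0, readings))  # -> 0.2, not full power
```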

One of the most challenging areas we have to deal with is what happens when a user does, in fact, avert their attention for some time. We know that there is a delay as a user re-orients themselves back to an observant state. The outcome when a user must react to a dangerous situation while also re-orienting themselves can be fatal.

There is also the problem of 'two drivers' behind the wheel who may be in conflict with one another. As far as steering is concerned, the driver has primary control of the vehicle, and if they override a decision by Autopilot, they may be increasing risk rather than decreasing it. There have not been many documented cases like this yet, but I often wonder whether I might override Autopilot at a moment when it is making a sudden steering change for crash avoidance. I have become trained, in a sense, to favor my judgment over Autopilot's, but in the future its decisions may be better than my own.
 
Two other thoughts come to mind as they relate to the future of autonomous driving. The first would be a potential near-term mitigation of the problem in this case, and one that seems self-evident.

Although I do not have any specific information regarding the state of the sensors during this accident, I suspect that one of the reasons Tesla has not implemented braking in this scenario is false positive detection (detecting objects crossing the lane that are not in fact there) or false negative detection (not detecting obstructions with enough confidence to justify braking). In that case, why not throw up an alarm on the console to bring the user's attention to the obstacle, in the same way it does when approaching a vehicle with too much speed?
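
One way to square low-confidence detections with driver alerts is a tiered response, where the bar for warning the driver is much lower than the bar for automatic braking. A toy sketch with made-up thresholds (again, not anything taken from Tesla's software):

```python
# Hypothetical tiered response: warn at a lower detection confidence than
# the threshold required for automatic braking at highway speed.
WARN_CONFIDENCE = 0.5   # enough to flash/chime a console alert
BRAKE_CONFIDENCE = 0.9  # much higher bar before braking at speed

def respond_to_detection(confidence: float) -> str:
    """Map an obstacle-detection confidence to an action."""
    if confidence >= BRAKE_CONFIDENCE:
        return "brake"          # false positives here are costly, so the bar is high
    if confidence >= WARN_CONFIDENCE:
        return "alert_driver"   # cheap to issue; draws attention back to the road
    return "no_action"

print(respond_to_detection(0.7))  # -> "alert_driver"
```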

In the long term, Tesla could work with smart-city initiatives to correlate data from its onboard vehicle sensors with data from stationary sensors on roadways. Think of the advantages that would emerge if Tesla were also able to leverage data from sensors in the surrounding area. A simple example would be at intersections, where stationary sensors could identify whether or not an intersection is clear, what the actual signal state is, and adjust for traffic flow conditions based on time of day or other factors. A Tesla smart-city initiative might include things like smart sensors for traffic signals, solar and energy storage, as well as roadway off-loading (into Boring tunnels).
 
1. Musk's intensely relaxed reaction to the current situation indicates he will be content to accept this wastage rate of customers as an acceptable sacrifice; indeed he seems to argue, on the basis of the [I would say tendentiously manipulated] statistics he has released, that it is a necessary one to power his march towards the inevitable triumph of FSD.

2. However, it is by no means assured when, or even if, the envisioned >=L4 FSD will arrive, whereas it is becoming clear that, all other things remaining equal (e.g. no news of sensors improved over HW2.5 in the recently announced Model Y), the toll exacted by AP will in all probability continue to climb proportionately with the rapidly expanding fleet.

3. Whether the promised HW3 computer update can somehow compensate for the apparent sensor deficiency is an open question; the reasons not to be overly optimistic about the outcome are already outlined above.

4. Further, it would be prudent business practice to presume that regulators inspired/captured by a cartel of establishment Tesla competitors will never accept Musk's mere argumentum ad statisticum as demonstrating sufficient AP/FSD safety, but will, quite rightly in the public interest, insist that AVs to be approved for >=L3 SAE must first pass a phalanx of standard safety tests, such as those being developed by Thatcham, and across the gamut of available speeds.

5. But before ever getting to that stage (around Jan 2023 by my reckoning), a proper legal test case and/or regulatory investigation following another fatality may intervene to force a recall for a sensor upgrade or the deactivation of existing L2 features, such as AP engagement at any speed above that from which it can safely stop for a cut-out revealing a stationary obstacle just beyond the minimal braking distance (a rough sketch of such a speed cap follows this list).

6. I do not see a plan from Tesla to mitigate the risk/liability in points 4 and 5.

7. Notwithstanding all the happy-talk, AP remains a glorified cruise control, and those who keep this fact to the fore are more likely to survive its use long enough to see the contrary demonstrated.
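
As for the speed cap in point 5, it follows from basic stopping-distance kinematics: given a reliable detection range, an assumed reaction delay, and an assumed deceleration, one can compute the highest speed from which the car could still stop. A rough illustration (every number here is an assumption, not a Tesla spec or measured figure):

```python
# Rough illustration of the point-5 speed cap: the highest speed from which the
# car can still stop within its reliable detection range. All inputs assumed.
import math

def max_safe_speed(detection_range_m: float, reaction_s: float, decel_mps2: float) -> float:
    """Solve v*t + v^2/(2a) = d for v (stopping distance equals detection range)."""
    a, t, d = decel_mps2, reaction_s, detection_range_m
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

v = max_safe_speed(detection_range_m=100.0, reaction_s=1.0, decel_mps2=7.0)
print(f"{v:.1f} m/s = {v * 2.237:.0f} mph")  # roughly 31 m/s, about 69 mph, with these assumptions
```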

PS: I would appreciate it if those disagreeing would have a genuine stab at raising cogent counter-points before issuing their down-vote. It's the better way to advance a discussion, thanks.


I’d also like to take issue with the obscene notion that Tesla is flippant about Autopilot deaths.

I’ve worked for a large automaker and when a safety issue comes up it hits hard because you have one of these products yourself. But in addition, your family and friends and coworkers disproportionately drive products from the same company. The people you love and see every day and their families can be dramatically impacted if something goes wrong. If you unintentionally had a hand in what went wrong it would be a lifetime of pain and sadness to know you had a part in someone losing their life.

To imply that the people developing this system are in any way relaxed about deaths using their products is simply revolting.

1. It is not an obscene notion but one based on my outsider's observation of Tesla's effective inaction over three years towards presenting a robust solution to this treacherous perception gap, which still allows AP to drive into stationary objects at high speed.

2. I've also worked (as a sub-contractor) for numerous major automakers and can attest from rich and hair-raising personal experience that they have highly variable attitudes towards safety, whether as regards the plant's machinery itself or the QC of the resultant product. In my estimation, on the latter aspect, Tesla does less than impress.

3. I specifically mentioned Musk's apparently cavalier attitude but did not mean to suggest that this transfers to anyone else involved. In fact, I can well imagine that e.g. the AP team members who advocated for an IR cam-based DAMS and were overruled must feel pretty sick about the accumulating results.

4. I hope as much as anyone that HW3/FSD will finally resolve these problems, but that does not negate the fact that shortcuts have been taken in getting there.
 
We need to wait for the final report, but there are a few things to note in what they have said so far:

1) He activated AutoPilot 10 seconds before the accident, going 68 mph, which was 13 mph over the limit. Presumably he didn't see a semi in the road when he engaged the system.

2) At that speed he would have covered about 300 yards in ten seconds, and would have needed about 5 seconds to react to the event and bring the car to a stop (a rough check of these numbers follows this list). Thus he and/or AutoPilot would have needed to see the truck within a few seconds of activation to stop in time.

3) We don't know when during this time the truck pulled into the road, or at what point it would have been visible to AutoPilot or an attentive driver. The fact that the car was speeding may have caused the truck driver to misjudge the time he had to clear the intersection, or to fail to see the oncoming car, assuming he could see a ways down the road.

4) If the trailer had had side protection to prevent cars from going underneath, neither this nor the earlier similar accident would likely have been fatal, yet we refuse to require this, as other countries do, in order to save costs for the industry. Side protection might also have made it clearer to AutoPilot that the lane was blocked. The 101 accident should also not have been fatal if the crash barrier had been replaced following an earlier accident. Better to avoid accidents, but these accidents didn't need to be fatal.

5) It took a series of errors for these fatal accidents to occur, and prevention of any one of them could have saved a life. There will always be accidents, and no system can prevent all of them. At this point everyone should realize that driver assist systems need to be monitored, and if you aren't willing or able to do that then you shouldn't use them. Tesla, by allowing its system to be used in more situations, puts more onus on drivers to use it wisely and monitor it properly, but also provides more utility to more drivers. Perhaps Tesla could add a mode letting someone select a geofenced, speed-limited, ... version of ADAS that is similar to other systems, which would reduce the chance of mistakes and fatalities, but I wonder how many drivers would prefer that system to the current one. No one is forced to use AutoPilot, and right now they shouldn't unless they are willing to monitor it properly.

6) Perhaps Tesla and other automakers should make short instructional in-car videos that explain how to use the system and its limitations. New owners would be required to watch the video, and it would be available for any driver to view before using the car. These systems are complicated and each is somewhat unique. Not all drivers have educated themselves about them as much as many of us here, so I would be in favor of Tesla and other companies doing more to help drivers educate themselves.
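
As referenced in point 2, here is a back-of-the-envelope check of those figures. The reaction time and deceleration are assumed values, not data from the accident report:

```python
# Rough check of the distance and stopping-time figures in point 2.
MPH_TO_MPS = 0.44704
v = 68 * MPH_TO_MPS          # ~30.4 m/s

distance_10s = v * 10        # ~304 m, i.e. roughly 330 yards covered in 10 seconds

reaction_s = 1.5             # assumed perception/reaction delay
decel = 7.0                  # assumed hard-braking deceleration, m/s^2
stop_time = reaction_s + v / decel               # ~5.8 s to react and stop
stop_dist = v * reaction_s + v**2 / (2 * decel)  # ~112 m (~120 yards)

print(round(distance_10s), round(stop_time, 1), round(stop_dist))
```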
 
I have not read through this entire thread, but I don't see discussions on HW3 or IIHS, so here it goes.

About 300 side strikes of tractor-trailers end in fatalities every year (first column in the chart below). Of the roughly 900 over the last 3 years, I have heard of 2. That's about 0.2%, and that 0.2% is Teslas on AP (a quick check of these figures follows the table).

IIHS has a cheap retrofit solution that can reduce this. I am counting on HW3 fixing this for Tesla because of the 3D mapping it can do that HW2.5 cannot (it can't do stoplights/stop signs/intersection turns either). So for Teslas with HW3+ I think the solution is coming, but what about everyone else? That is what the story should be.

IIHS tests show benefits of side underride guards for semitrailers

Passenger vehicle occupant deaths in 2-vehicle crashes with tractor-trailers

Year | Passenger vehicle strikes side of tractor-trailer | Passenger vehicle strikes rear of tractor-trailer | All crashes with tractor-trailers
2005 | 441 | 258 | 1,932
2006 | 394 | 260 | 1,853
2007 | 417 | 218 | 1,771
2008 | 290 | 180 | 1,526
2009 | 269 | 174 | 1,237
2010 | 319 | 181 | 1,417
2011 | 246 | 189 | 1,362
2012 | 306 | 216 | 1,376
2013 | 274 | 213 | 1,377
2014 | 308 | 220 | 1,409
2015 | 301 | 292 | 1,542
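
A quick sanity check of the figures I quoted above, using the side-strike column of the IIHS table:

```python
# Verify the "about 300 per year" and "0.2%" figures quoted above.
side_strikes = [441, 394, 417, 290, 269, 319, 246, 306, 274, 308, 301]  # 2005-2015

print(sum(side_strikes) / len(side_strikes))  # ~324 per year, i.e. roughly 300

# 2 known AP-involved side-underride fatalities out of ~900 over 3 years
print(2 / (3 * 300) * 100)  # ~0.22%, the "0.2%" quoted above
```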

Has anyone taken their 3 out and recorded what the car interprets while approaching a trailer from the side? Not full-on AP, just driving toward it.
 
4. I hope as much as anyone that HW3/FSD will finally resolve these problems, but that does not negate the fact that shortcuts have been taken in getting there.

Tesla certainly is making a multi-front gamble here. Elon has readily stated that creating a test environment close enough to physical reality is more of a challenge than developing FSD itself. His position, therefore, is to make use of real-world data, using real-world drivers, while setting the expectation that L2 autonomy (and presumably the risk of users misusing Tesla's L2 system) will be short-lived (less than one year on his clock, according to his current estimates). He may even logically take the position that the number of deaths incurred during this process will be considerably less than the number of deaths that can be avoided if his accelerated program is permitted to continue unabated. What is unclear is whether or not regulators in the regions where Tesla sells will continue to agree to this. China, as an example, has been known to allow for some measure of development without regulation, but even they have limits. At the other end of the spectrum, Europe is a highly regulated marketplace and has spent far more time thinking about and developing policies to assure safety. I will not be shocked if Tesla's program gets frozen in one of their markets before we see the next milestone towards Tesla's FSD.
 
About 300 side strikes of tractor-trailers end in fatalities every year (first column in chart). Of the roughly 900 over the last 3 years, I have heard of 2. That's about 0.2%, and that 0.2% is Teslas on AP.

IIHS has a cheap retrofit solution that can reduce this.

It is unlikely that, without regulation, there will be any meaningful change to tractor-trailer deployments. It may be that the statistics are considered "too low" to justify an enforcement action. There are plenty of regulatory bodies that review these kinds of issues, and as far as I know, there has been no change.
 
Tesla certainly is making a multi-front gamble here. Elon has readily stated that creating a test environment close enough to physical reality is more of a challenge than developing FSD itself. His position, therefore, is to make use of real-world data, using real-world drivers, while setting the expectation that L2 autonomy (and presumably the risk of users misusing Tesla's L2 system) will be short-lived (less than one year on his clock, according to his current estimates). He may even logically take the position that the number of deaths incurred during this process will be considerably less than the number of deaths that can be avoided if his accelerated program is permitted to continue unabated. What is unclear is whether or not regulators in the regions where Tesla sells will continue to agree to this. China, as an example, has been known to allow for some measure of development without regulation, but even they have limits. At the other end of the spectrum, Europe is a highly regulated marketplace and has spent far more time thinking about and developing policies to assure safety. I will not be shocked if Tesla's program gets frozen in one of their markets before we see the next milestone towards Tesla's FSD.

Yes, Europe is the place this is most likely to happen and it seems Tesla have already begun to pre-emptively tighten up releases over here in order to remain on the right side of regulatory compliance: https://electrek.co/2019/05/17/tesla-nerfs-autopilot-europe-regulations/

The question arises as to whether they will in future need to apply for a new EU type-approval for each software update, which would cramp their speed but should help quality.
 
Yes, Europe is the place this is most likely to happen and it seems Tesla have already begun to pre-emptively tighten up releases over here in order to remain on the right side of regulatory compliance: Tesla nerfs Autopilot in Europe due to new regulations

The question arises as to whether they will in future need to apply for a new EU type-approval for each software update, which would cramp their speed but should help quality.

It is difficult to imagine what third-party analysis and approval may entail for software updates that occur as frequently as they do with Tesla. Assuming an accelerated ninety-day approval process were to emerge, this would reduce the number of Tesla feature updates and may even reduce the rate of bug-fix updates. Knowing the way regulators work, they'd want to test each Model independently, and if Tesla ends up branching their NN between HW3 and earlier hardware, it will only add more complexity. Because software updates have been shown to change the behavior of Tesla's fleet dramatically, it would follow that these would need to be accounted for during testing.

What is also unclear to me is whether or not Tesla could game the system with a limited number of test criteria - either by passing the test effortlessly because of the limited number of possibilities, or by identifying the test itself and reacting accordingly (there is no practical difference between the two).

There is a long trail of auto manufacturers who've changed the way the vehicle reacts under test conditions, and so far as I know, none of their CEOs have survived the challenge. Tesla must be extremely cautious here.
 
you mean like this?

green on Twitter


If this is not a classic case of AI identification failure, I don't know what is! In a sense, the truck could be interpreted as a stationary overpass - which is perfectly reasonable, so long as it also identifies its clearance as being under a meter in height. :(


[Screenshot attached: Screen Shot 2019-05-19 at 12.06.31 PM.png]
 

It is difficult to imagine what third-party analysis and approval may entail for software updates that occur as frequently as they do with Tesla. Assuming an accelerated ninety-day approval process were to emerge, this would reduce the number of Tesla feature updates and may even reduce the rate of bug-fix updates. Knowing the way regulators work, they'd want to test each Model independently, and if Tesla ends up branching their NN between HW3 and earlier hardware, it will only add more complexity. Because software updates have been shown to change the behavior of Tesla's fleet dramatically, it would follow that these would need to be accounted for during testing.

What is also unclear to me is whether or not Tesla could game the system with a limited number of test criteria - either by passing the test effortlessly because of the limited number of possibilities, or by identifying the test itself and reacting accordingly (there is no practical difference between the two).

There is a long trail of auto manufacturers who've changed the way the vehicle reacts under test conditions, and so far as I know, none of their CEOs have survived the challenge. Tesla must be extremely cautious here.

AFAICT Tesla's AP has probably never been rigorously tested by EU regulators, as the standards are still being defined:
Why don't European Model 3s have Autopilot?

And another related article explains that the same arcane, buried regs have apparently delayed Audi's scheduled launch of its A8 L3 Traffic Jam Pilot:
Tesla Limits Autopilot In Europe Due To New UN/ECE Regulations
 
*sigh* Late to the conversation here, so apologies if this is just redundant. Stopped in to see what actual Tesla owners are saying about this. I hate that this happened, and my sincere condolences go out to that person and his family.
Headlines scream three deaths, and people who have no clue how it works, or its benefits and limitations, are enraged that anyone would want a "self-driving car".
I drive a lot and use it extensively. It has helped me so much, and I'd be really bummed if some kind of disabling came as a result of what's essentially a glitch that happened at a bad time, where the user also erred. It would be all too easy to limit AP to only being active on certain highways. Those cross-highway roads are terrible and require full attention for this exact reason.
 
If this is not a classic case of AI identification failure, I don't know what is! In a sense, the truck could be interpreted as a stationary overpass - which is perfectly reasonable, so long as it also identifies its clearance as being under a meter in height. :(


View attachment 409542

1. It's not exactly taken as an overhead structure when it is actually correctly labelled as "Truck". With that much info at this stage, the system should be smart enough never to try to go under it.

2. It's equally significant that the label beneath the bounding box reads "No Rad Sig", even at the close range and slow speed of VG's vehicle, which reinforces my feeling that at high speed, as in Banner's case, the radar data would similarly show nothing.

3. Thus it seems label #1 was arrived at by processing of vision inputs only, and in Banner's case either the visual recognition simply failed entirely, or maybe it could not work fast enough on tapped-out HW2.5 at 68 mph to produce any useful alert before the collision. In either case the radar was apparently useless as a redundant sensor.
 
you mean like this?

green on Twitter
So we know the camera sees the trailer. The question is: are they using the radar data as the primary decision point for braking events on a highway while driving at speed, and discarding the camera data? That would explain this behavior. It would also explain why previous versions didn't stop for vehicles parked at traffic lights. The system was using mainly radar for braking events, and it would need to be damn sure of an obstacle before initiating a full stop at highway speed. I don't think radar alone is good enough to use with confidence in this situation. Thankfully, camera data seems to play a more prominent role in these decisions with the most recent EAP software. I'm obviously speculating here, based on my personal experience and the various threads I've read.
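
To illustrate the speculation, here is a toy sketch of the kind of fusion rule I'm describing: hard braking at highway speed requires radar corroboration, so a crossing trailer that returns no radar signature ("No Rad Sig") never triggers a stop even though the camera has labelled it. Every threshold and name here is invented for illustration, not taken from Tesla's software:

```python
# Purely speculative sketch of radar-gated braking at highway speed.
HIGHWAY_SPEED_MPS = 25.0  # ~56 mph; above this, treat as "highway speed"

def should_brake(speed_mps: float, camera_sees_obstacle: bool, radar_sees_obstacle: bool) -> bool:
    if speed_mps >= HIGHWAY_SPEED_MPS:
        # High speed: require radar to confirm before a full braking event,
        # because a camera-only false positive would mean phantom braking at 68 mph.
        return camera_sees_obstacle and radar_sees_obstacle
    # Low speed: camera alone is enough (the cost of a false stop is small).
    return camera_sees_obstacle or radar_sees_obstacle

# Crossing trailer: camera labels it "Truck", radar returns nothing.
print(should_brake(30.4, camera_sees_obstacle=True, radar_sees_obstacle=False))  # -> False
```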
 
If this is not a classic case of AI identification failure, I don't know what is! In a sense, the truck could be interpreted as a stationary overpass - which is perfectly reasonable, so long as it also identifies its clearance as being under a meter in height. :(
This is trickier than it seems, since it could be an underpass that the road is dipping under.

 