Another tragic fatality with a semi in Florida. This time a Model 3

Or, better stated, it should never be permitted to even think about driving through a bounding box.

Thus nothing under a bounding box should ever be painted green as a driveable space for the path-planner to consider choosing as a route.

Keep in mind we don't know what the AP computer would actually do with the data fed to it.

The labeling/drawing data just shows you what the neural network detects.

There is clearly a weakness in the trailer detection, as it fails to get the boundary right. Because the boundary is wrong, the display simply shows what the semantic segmentation network is indicating. That kind of network will simply recognize the road and paint it green (as it is a driving surface). The neural network itself isn't going to know you're not actually supposed to drive there.
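
If the rule proposed above were enforced at the mask level, it could be as simple as vetoing every segmentation pixel that falls inside a detected bounding box. A minimal Python/NumPy sketch, with entirely hypothetical mask and box structures (we obviously have no access to Tesla's internals):

```python
import numpy as np

def mask_drivable_under_boxes(drivable, boxes):
    """Zero out 'drivable' pixels inside any detected bounding box.

    drivable : HxW bool array from a (hypothetical) segmentation head,
               True where the network painted the surface green.
    boxes    : list of (x1, y1, x2, y2) pixel boxes from the detector.

    Not Tesla's code -- just a sketch of the rule proposed above:
    nothing under a bounding box may remain drivable.
    """
    out = drivable.copy()
    for x1, y1, x2, y2 in boxes:
        out[y1:y2, x1:x2] = False   # veto the whole box footprint
    return out

# Toy example: a 6x8 frame, everything drivable, one trailer box.
drivable = np.ones((6, 8), dtype=bool)
trailer = [(2, 1, 7, 4)]            # (x1, y1, x2, y2)
safe = mask_drivable_under_boxes(drivable, trailer)
print(safe.sum(), "of", drivable.sum(), "pixels remain drivable")  # 33 of 48
```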

Here is what I see the data showing.

Radar -> Shows it as being stopped (it's going across so that's expected)
FishEye -> Somewhat sporadic neural network detection, but the asterisk is on so it knows it's in the way
Main -> Detected, but the asterisk doesn't show until much later on when he's next to it. Why it's on then I don't know.
Narrow -> Detected, but the asterisk doesn't show until it's in front of him (as he's turning).

So how does the AP computer deal with conflicting info? I dunno.
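
For what it's worth, here is one conservative arbitration rule it *could* use, sketched in Python. The Detection structure and the any-sensor-wins policy are pure assumptions on my part, not anything confirmed about the real planner:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    source: str        # "radar", "fisheye", "main", "narrow"
    seen: bool         # object detected at all
    in_path: bool      # the "asterisk": flagged as in our way

def must_brake(detections):
    """One conservative way to resolve conflicting sensors:
    any single in-path flag wins. Purely illustrative -- we do
    not know how the real AP planner arbitrates."""
    return any(d.seen and d.in_path for d in detections)

# The frame described above: radar reports a 'stopped' target,
# fisheye flags it in the way, main/narrow see it but no asterisk yet.
frame = [
    Detection("radar",   seen=True, in_path=False),
    Detection("fisheye", seen=True, in_path=True),
    Detection("main",    seen=True, in_path=False),
    Detection("narrow",  seen=True, in_path=False),
]
print(must_brake(frame))  # True: fisheye's asterisk alone should suffice
```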
 
Radar -> Shows it as being stopped

That may be true where @verygreen is stopped at a light and the truck passes directly in front of him, but Banner would presumably have been closing at high speed on a stationary side-on bounding box, if the vision system picked up the truck at all, as I imagine it must have.

So how would that box be marked except as "stopped" across many consecutive frames, especially since the radar can scarcely detect immobile objects at speeds >50 mph (according to the Tesla manual)?
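
To put numbers on how tight that leaves things, a back-of-envelope in Python; the 68 mph closing speed, the 150 m detection range, and the 0.6 g braking figure are all assumptions for illustration only:

```python
# Back-of-envelope: how little time a vision-only save leaves.
mph_to_ms = 0.44704
v = 68 * mph_to_ms              # ~30.4 m/s closing speed (assumed)
detect_range = 150.0            # m, assumed vision detection range
brake_decel = 0.6 * 9.81        # m/s^2, assumed firm but non-emergency braking

time_to_impact = detect_range / v
stopping_dist = v**2 / (2 * brake_decel)
print(f"time to impact : {time_to_impact:.1f} s")   # ~4.9 s
print(f"stopping dist  : {stopping_dist:.0f} m")    # ~78 m
# Plenty of margin on paper -- but only if the planner is allowed
# to act on the camera detection despite the 'stopped' radar label.
```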


@verygreen, what software version did you use to make this video?
 
Or, better stated, it should never be permitted to even think about driving through a bounding box.

Thus nothing under a bounding box should ever be painted green as a driveable space for the path-planner to consider choosing as a route.

With this evidence it now seems more plausible that, in Banner's case, the system wrongly concluded "keep going, no brakes required": the lane lines would have been tracked (predicted) straight ahead under the perpendicular trailer; a considerable distance beneath it may have been painted green as driveable space in the crucial timeframe; the radar returned no reading because the obstacle was essentially stationary; and the bounding box from the vision system (presuming it was recognised) was not considered a sufficient impediment.
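
That hypothesised chain can be written out as a toy decision function. To be clear, this is my reconstruction of the failure mode described above, not Tesla's actual logic; every name in it is invented:

```python
def plan_speed(lane_straight, drivable_ahead, radar_target, vision_box):
    """Hypothesised (not actual) decision chain: each condition
    alone looks harmless, and together they output 'keep going'."""
    if not lane_straight:
        return "slow"               # lanes tracked straight: skipped
    if not drivable_ahead:
        return "brake"              # space under trailer painted green: skipped
    if radar_target and radar_target["moving"]:
        return "brake"              # stationary => radar return discarded: skipped
    if vision_box and vision_box["treated_as_impediment"]:
        return "brake"              # box not weighted as a blocker: skipped
    return "keep going, no brakes required"

print(plan_speed(
    lane_straight=True,
    drivable_ahead=True,
    radar_target={"moving": False},
    vision_box={"treated_as_impediment": False},
))
```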

This certainly seems like negligent system design/programming to me and IMHO would support a suit by his dependents against Tesla for their contribution to his death.
I believe you are missing the point. This car did not drive under it; it was stopped by an aware driver. The driver was following Tesla's instructions to remain aware. I would also like to point out that this was captured on video, yet we are only given a still photo, easily manipulated, and who knows what the next frame showed.
 
I believe you are missing the point.

Fascinating!

This car did not drive under it; it was stopped by an aware driver. The driver was following Tesla's instructions to remain aware.

Amazing perspicacity!

I would also like to point out that this was captured on video, yet we are only given a still photo, easily manipulated, and who knows what the next frame showed.

The video is linked above so please check what happened before and after the still.

Also consider that perhaps you are the one being easily manipulated [by cognitive dissonance] here.
 
I'm not fully up to speed with the thread.
I will add though that I've seen more on-board Tesla video than I care to count. There is so much chiming going on that I can totally imagine losing track of the alerts, or simply not hearing one. Just like people are able to miss super-easy highway exits on voice-assisted navigation.
It doesn't change the fundamentals of course: if you don't pay attention to traffic, there will be a stationary fire truck or a crossing semi.

I hope never to have to deal with traffic situations like that one. Cross traffic where 80 mph is the norm? Insanity!

Has someone already addressed what the highest EAP cruise speed on that stretch might be?
 
The other side: if the system did result in 50 deaths in the first week, it would still be lower than the average daily fatalities on US highways. I know this sounds harsh, in that these are people's lives; however, it just isn't going to be possible to prevent all accidents. We instinctively focus on the risk side of the risk/reward relationship. I am not making feeble excuses: the system today can kill people, especially when used improperly.

While 4 people dying is a tragedy, if they were in fact not paying attention when they should have been, they are a small part of the total number of people who will die from the very same thing on highways this very day. The radar, the vision system, and the automatic emergency braking that many cars have will not keep these people from dying, but that doesn't mean their deaths are the fault of the manufacturers of the vehicles or the CEOs of the companies.

You may be hinting at something that I think we can agree on. I would agree that a level 3 system, defined as one where the driver has to remain ready to take over at a moment's notice, is very difficult for humans to deal with. We have seen several instances in aviation where automation has cut out because of bad input data and the pilots were unable to comprehend and correct the situation before a tragedy occurred. Level 3 seems like the next step we will soon arrive at and, as you point out, this may be inherently dangerous. Time will tell how difficult it will be to cross the gap until the system is actually safer than an average human. I am still hopeful that I will live to see this occur, perhaps even with my current car. You seem to have already concluded that it is not possible with a Model 3. Time may prove you correct, but I don't believe that time has come.

As I see it 3 design problems contribute to Tesla's unacceptably high AP fatality rate:
1. A convenience-first rather than safety-first approach
The AP system is built up from disjointed simple elements, LKA and ACC, and stopping for stationary objects has traditionally never been part of the latter, yet AP is allowed to operate at speeds well beyond what Tesla recommends as safe (50mph in manual).

2. Sensor spectrum gaps possibly exacerbated by compute overload/input clipping
Tesla already admitted they reduce camera resolution and frame-rates to limit HW2.5 CPU load, but is radar sensor data also being similarly limited, and if so can this account for its apparent inadequacy to detect stationary obstacles in planned path at high speed?
I don't know, but perhaps not permanently squandering ~5% of those scarce compute resources [not to mention man-years of engineering talent] on craptastic AI auto-wipers would actually have been smarter than the purblind vanity of seeking to be hailed as the 32D genius who saved $1/vehicle on an industry-proven rain-sensor?

3. A cheap-skate HMI design inadequate to compensate for 1&2
Tesla's steering-wheel torque sensor is notoriously inadequate for gauging the level of driver attention to the road. It can be fooled by a large orange, small water-bottle or AP-Buddy jammed in the spokes, and regularly fails to detect two hands in balance on the wheel.
The assumption that "something is detected hanging on wheel" necessarily implies "human eyes are focussed on road ahead" is fundamentally unsafe (a toy version of this check is sketched at the end of this post), and with upcoming AV regulation it is likely [in Europe anyhow] to be ruled an insufficient Driver Attentiveness Monitoring System [DAMS] for L3, and quite probably even for L2 in the city, where close encounters with vulnerable pedestrians and cyclists will be frequent, unlike on the highway where AP has mostly been used to date.
Something like the Cadillac Super Cruise IR face-tracker completely solves this problem, but Musk unfortunately cheaped out at the design stage.

4. By way of analogy, consider designing a machine-tool safety window to keep an operator's hands separated from powered actuators; the standard is double-interlock foolproofing, e.g. unbridgeable mechanical and electrical systems operating independently so that limbs and moving parts can never occupy the same space at once.

5. Would it be acceptable in any industrial workplace to have one of those interlocks failing 50% of the time, exposing the worker to being maimed? No; upon discovery the union would rightly shut down production until that crap was rendered secure!

6. In the case of AP the independent safeties are radar and optical sensors and apparently neither are foolproof, but at least they should hopefully fail under non-overlapping scenarios, otherwise one can quickly end up Huanged, i.e. fused into a highway gore-point @75mph.
The radar (some claim) by design cannot detect stationary objects at high speed, precariously reducing the system under that scenario to reliance on one optical sensor only (see the redundancy arithmetic at the end of this post).

7. Tesla's AP system is presented with many varying scenarios, which in general it handles very well. Except for one tiny thing... massive obstacles suddenly presenting as parked in the planned path when moving at high speed. This rare scenario, which may happen once a year for the average driver, it handles in the worst imaginable fashion: by totally ignoring it. That failure is all too easy for the driver to discount, since the accumulated experience of all other encounters trains the driver [who naturally wishes to believe he has gotten his money's worth and not just been ripped off] to extend credulity that the system will safely & competently handle mundane driving tasks.

8. Trying to build ever more complexity (FSD) on this fundamentally unsound basis (unsafe AP) is engineering malpractice, like adding several high-speed robots to interact behind the defective safety window.

9. Trying to do so by relying on one of the interlocks (optical) never being fooled, in order to save bringing the other (radar) up to standard, is reckless [i.e. recognising the risk but proceeding to run it at another's expense].

10. In the aftermath feeble deflections like "that guy could have died from smoking anyhow" will be no defence to a charge of corporate gross negligence manslaughter for each person that dies mangled in the machine after being pulled in through the incompetently designed "safety" window, or Huanged into a solid stationary obstacle by an incompetently designed AP/FSD.

11. Tesla has so far skated through by the skin of its teeth in these AP-fatality cases but now urgently needs to get its *sugar* wired before it ends up getting crucified in court.
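
To illustrate point 3, here is a toy version of a torque-threshold "hands-on" check in Python; the 0.3 Nm threshold is invented, but the failure mode is exactly the one described:

```python
def hands_on_wheel(torque_nm, threshold_nm=0.3):
    """Naive torque-threshold check; the 0.3 Nm figure is invented.
    Any steady twist on the rim passes, however it is produced."""
    return abs(torque_nm) >= threshold_nm

print(hands_on_wheel(0.5))   # True  -- an orange wedged in the spokes
print(hands_on_wheel(0.02))  # False -- two attentive hands in balance
```

And for points 5-6, the redundancy arithmetic: using the "interlock failing 50% of the time" from point 5 plus an assumed 10% optical miss rate, independent failures multiply, while fully overlapping failures leave the pair no better than vision alone:

```python
# Toy redundancy arithmetic (rates invented for illustration).
p_radar_miss  = 0.50   # stationary object at speed, per point 5
p_vision_miss = 0.10   # assumed optical miss rate

# If the two fail independently on different scenarios,
# both must miss at once:
print(f"independent: {p_radar_miss * p_vision_miss:.0%} both miss")  # 5%

# If radar's blind spots entirely cover vision's (overlapping
# failures), the pair is no better than vision alone:
print(f"overlapping: {p_vision_miss:.0%} both miss")                 # 10%
```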
 
As I see it 3 design problems contribute to Tesla's unacceptably high AP fatality rate

Can you let us know in which universe 2 total deaths, ever, of people using AP correctly (and this isn't one of them; he was using it someplace the manual explicitly says not to use it) is "unacceptably high"... unless you're going for "any death ever is too many", which is... a bit out there.
 
Sorry but if you are going to kick off a lengthy post with a statement that has no data to back it up, I'm not going to bother to read it.

The data is that this incident [pending confirmation] is the 4th or probably 5th fatality in a Tesla where use of AP is a contributing factor:

1. Gao Yaning, † 20 January 2016, Handan, Hebei, China
into road-sweeper on motorway, AP confirmed.

2. Joshua Brown, † 7 May 2016, Williston, Florida, USA
into truck crossing dual-carriageway, AP confirmed.

3. Walter Huang, † 23 March 2018, Mountain View, California, USA
into collapsed gore-point, AP confirmed.

4. Reinhold Röhr, † 10 May 2018, Bellinzona, Ticino, Switzerland
into divider at Autobahn construction zone, AP suspected likely but neither officially confirmed nor disproven as car plus driver were cremated in situ.

5. Jeremy Banner, † 1 March 2019, Delray Beach, Florida, USA
into truck crossing dual-carriageway, AP atm unconfirmed but appearing highly probable.

6. That's without mentioning the many more incidents of AP into stationary obstacles in planned path with fortunately non-fatal outcomes, whence sprang the moniker Firetruck Super-Destruction mode [a.k.a. ironic FSD].

7. Despite this sorry record, Tesla has apparently done nothing effective to address the contributory design flaws in AP in over 3 years since the first fatality.

8. My contention is that any car company serious about customer safety would not only have hustled to preempt the inevitable lawsuits by eliminating the design flaws identified here but would also actively collaborate with competent independent testing institutes like Thatcham Research to vividly demonstrate how their product has been made as safe as it can possibly be.

9. Instead we see studied silence from Tesla while in official Euro NCAP tests of mid-October 2018 [on late v8.1 sw AFAICT] a Tesla Model S on HW2.5 was still failing the cut-out test at 80 km/h, though [big whoop!] the FCW did this time sound off.

10. Seeking shelter behind technically true statements such as "the deceased customer's hands were not detected on the steering wheel in the final 6 seconds as our vehicle automatically accelerated him into the massive stationary obstacle in its planned path" is at this late stage a deeply pathetic self-indictment which only serves to highlight the multi-level failures in Tesla's AP, including the chronic inability of its apex management to accept any responsibility for same.

11. If it continues in the current vein, Tesla's pushed luck will surely expire when the following circumstances coincide:
A) An innocent third-party who has not agreed to any AP/FSD beta-testing is killed.
B) Their surviving next-of-kin cannot be bought off with an out-of-court settlement and NDA.
C) The Tesla does not auto-cremate and the logged data makes it to court.
D) The Tesla driver dies and the vehicle is proven to have been operating under AP/FSD on an approved highway at the time.
E) Ambitious, able and well-resourced lawyers are engaged by the plaintiff.

12. With a rapidly expanding AP fleet driven to an uncertain extent [10%?] by those liable to take literally Musk's recent ridiculous but dangerous claim that "we already have FSD on the highway", this will probably happen sooner rather than later.

13. Sticking one's head in the proverbial sand cannot prevent but only hasten the day of reckoning, whether at a personal level for those of us using AP or for the company itself.
 
Excellent post, well said.

Indeed, with EAP's tendency to miss semis in profile and smash into stationary fire trucks, it's actually baffling good fortune that it has not yet ended innocent third parties' lives. We can hardly attribute that to wonderful programming or strong machine learning. It's just amazing, with so many EAP miles logged.
How can a Tesla overlook a firetruck placed squarely in its lane yet not sweep up women pushing strollers? Perhaps the answer is that pedestrians and cyclists actually mind their surroundings better than EAP does. Sure, it's been programmed to identify and avoid soft targets, but avoiding hard targets was surely programmed first, EAP technically being a highway-only tool, especially at first?
It's reasonably well established, despite the lack of active monitoring (which IMO could be seen as criminal negligence), that some EAP drivers pay less attention to traffic than others. It's their human nature being triggered: they outsource responsibility to the car, assuming it can deal with anything that might come up, based on it rarely missing a beat in regular traffic. That these sorts of accidents happen at all points to how often the Tesla approaching you will have a driver who is not aware of you standing, walking, riding or driving there.

There are some things structurally wrong with the way EAP and FSD are set up to operate. Driver attention monitoring may not have seemed worth it as a temporary feature; they were about to achieve Level 5 anyway, right? Surely the EAP crashes could have been prevented had the car buzzed the driver to pay attention and take over, as clearly they weren't doing so?

Fans may argue that EAP drives better than humans, but that's only true for drivers paying attention as they should. The moment they look away, fire trucks and lane dividers show up. And as EAP gets more comfortable around the neighborhood, there will be toddlers on bikes, women with strollers, obviously drunk (or wind-affected) cyclists that need a bit more room than usual, etc. I doubt that if I pretended to be drunk, barely looking forward, riding my bike one-handed and swerving more than is typical, the car would identify me as a cyclist needing more space than usual.

Does EAP already make an emergency stop when a ball rolls onto the road? We will not have Level 5 before it does.
I used to be a kid. I was struck by a cyclist once when I had just leaped into the road. The cyclist could not avoid me. But had I been chasing a marble, they may well have had time to brake.
 
Can you let us know in which universe 2 total deaths, ever, of people using AP correctly (and this isn't one of them- he was using it someplace the manual explicitly says not to use it) is "unacceptably high".... unless you're going for "any death ever is too many" which is.... a bit out there.

Unsure as to which two cases you refer to there, where drivers are considered to have killed themselves by using AP "the right way", but I would argue it really doesn't help to create this distinction: to err is human, so collectively we can be relied upon in practice to eventually find every failure mode the system does not actively prevent.

So, yes, IMHO 5 deaths so far, with no hint of a solution 3 years after the first, is definitely 4 too many. That is also an eminently reasonable position, which a jury will be easily persuaded to adopt once the question of Tesla's contributory negligence for the wrongful death of an innocent third party is fully contested before it.
 
There are bound to have been accidents involving Teslas, and possibly third parties, that were not reported but were caused by a driver not paying attention and taking EAP for granted. Who wants their kids playing outside when a Tesla on EAP/FSD is approaching, whose driver may or may not be paying attention?
 
The data is that this incident [pending confirmation] is the 4th or probably 5th fatality in a Tesla where use of AP is a contributing factor: [...]

That's it? 5 deaths in over a billion miles? Of course we want no deaths, and we want the situation with semis and other stationary objects fixed, but look at the average deaths per mile driven and compare it with 5 per billion. That's amazing.
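
For anyone who wants the division done, a quick sanity check in Python; the 5 deaths and billion miles are simply the figures traded in this thread, and the ~1.1 per 100 million miles US baseline is the commonly cited NHTSA order of magnitude:

```python
# Rough rate comparison using the numbers claimed in this thread.
ap_deaths, ap_miles = 5, 1.0e9
ap_rate = ap_deaths / ap_miles * 1e8     # per 100 million miles
us_rate = 1.1                            # per 100 million miles (approx. NHTSA)
print(f"AP (claimed): {ap_rate:.2f} per 100M miles")   # 0.50
print(f"US average  : {us_rate:.2f} per 100M miles")   # 1.10
# Caveat: AP miles skew toward easy highway driving with an
# attentive-driver backstop, so the two rates are not like-for-like.
```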

In addition, Autopilot is NOT just adaptive cruise control + lane-keep assist. The videos show it: the Autopilot system has a whole vision system meant to recognize and categorize objects. Adaptive cruise control just detects objects in front of you, and many implementations use radar alone. Lane-keep assist recognizes lines. Autopilot recognizes much more than that.
 
How many lives has it saved? I'd hate to see the unacceptably high save rate get destroyed.

1. Point taken, but historically this is more or less impossible to know, though Tesla could maybe help itself by publishing (with owners' consent) the training-data video of such near-misses as it may have stored where that case could be made. Failing that, with the widespread availability of Teslacam in recent updates, the evidence should soon build up anyhow going forward.

2. However, it is also reasonable to think that AP "saves" prevail in the lower half of the speed spectrum: it succeeds at preventing many fender-benders in dense traffic at <=30 mph, where the radar and/or optics do seem to reliably detect stopped traffic ahead. At the same time it exacerbates the likelihood of a fatal smash at 80 mph into something stationary in the planned path, due to the combination of sensor inadequacy and an otherwise attentive driver having been lulled into a false sense of security which tempts him into texting during the crucial moments.
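
A one-liner makes that asymmetry concrete; the 30 and 80 mph figures are just the illustrative speeds used above:

```python
# Impact energy grows with the square of speed, so an 80 mph hit
# is no mere scaled-up 30 mph fender-bender.
def energy_ratio(v_high_mph, v_low_mph):
    return (v_high_mph / v_low_mph) ** 2

print(f"{energy_ratio(80, 30):.1f}x")   # ~7.1x the impact energy
```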
 