
Florida Autopilot crash Oct 2018

Should be easy to do. Let's say that a camera records at 60 FPS and processes all of those frames flawlessly.

50 mph = 22.352 m/s, so 22.352 / 60 ≈ 0.37 meters per frame. It usually takes 2-3 frames to judge whether you are approaching something, so at that speed you should be able to figure it out within roughly a meter of travel.
You know your local speed, you know whether the object grows (approaches) or shrinks (goes away) in view. This is already demonstrated in videos using their AP hardware:
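For what it's worth, the arithmetic above can be sanity-checked with a short script. The 60 FPS figure is the post's assumption; the "tau" looming heuristic (apparent size divided by its growth rate) is a standard toy model from the vision literature, not anything Tesla is known to use:

```python
# Per-frame displacement at highway speed, plus a toy "looming" check:
# an object's apparent size grows as the car closes on it, and the
# growth rate gives a rough time-to-contact estimate (tau = theta / d_theta).

MPH_TO_MS = 0.44704

def meters_per_frame(speed_mph: float, fps: float = 60.0) -> float:
    """Distance the car travels between consecutive camera frames."""
    return speed_mph * MPH_TO_MS / fps

def time_to_contact(size_prev: float, size_curr: float, frame_dt: float) -> float:
    """Estimate time-to-contact (seconds) from the growth of an object's
    apparent size between two frames (classic 'tau' looming heuristic)."""
    growth_rate = (size_curr - size_prev) / (size_prev * frame_dt)
    return 1.0 / growth_rate

step = meters_per_frame(50)           # ~0.37 m per frame at 50 mph
# Object 50 m ahead, 2 m wide: apparent size ~ width / distance.
d0, d1 = 50.0, 50.0 - step
tau = time_to_contact(2.0 / d0, 2.0 / d1, 1.0 / 60.0)
print(f"{step:.2f} m/frame, ~{tau:.1f} s to contact")
```

The tau estimate (~2.2 s for an object 50 m out at 50 mph) agrees with the simple distance/speed calculation, which is the point: in principle two frames of size growth are enough to flag an approaching obstacle.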

Why the heck don't they switch it on???

A trash bag flies onto the freeway. Your car slams on the brakes. A semi rear-ends you, a 75-car pileup ensues, and 4 people die in the resulting maelstrom.
Tesla is working on it, but it's not reliable enough yet.
 
Yah, but in the realm of "things a car shouldn't panic stop for" it fits. Phantom objects such as shadows or even oil stains are issues related to stationary object detection.

Then the object recognition deep learning implementation is crap. You can clearly see in the YouTube video that it effortlessly detects everything as it should. Funny how it can detect and brake for shadows, which are stationary, but not vehicles, which are easily identifiable.
 
Since you asked ...


1. Disagree: Tesla has not had mixed messages on current capability. The media has.


This implies you would agree that Tesla has not been the most clear and explicit on its AP capabilities and inherent dangers in previous hw/sw iterations?

Historically, and even today, I think they have done and continue to do a comparatively poor job of this. E.g. if I lend out my car, nothing in it will actively warn the unfamiliar driver of the several specific treacheries of AP while he is using it, easily lulling him into a potentially fatal overestimation of its capabilities. This lacuna contributed to how the fatality in China occurred, and it is reported that Tesla's Chinese sales language describing the car as "self-driving" was, subsequent to the ensuing lawsuit, severely toned down:
https://jalopnik.com/two-years-on-a-father-is-still-fighting-tesla-over-aut-1823189786
AP nag times have also been continually shortened, and not for no reason.


2. Disagree: first fatality was not a firetruck


Correct, it was into a slow-moving street-sweeper in the left lane of a Chinese motorway, 300m after a leading vehicle had moved aside. That case was for all practical purposes identical to the scenario in Tesla's user manual description of its radar inadequacy: "Traffic-Aware Cruise Control can not detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead." This is what I for shorthand call Firetruck Super-Destruction mode, in mocking reference to the much-vaunted FSD [full self-driving] this same fatally inadequate hardware is somehow eventually supposed to support.
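For readers wondering why the manual is worded that way: the excerpt describes a well-known generic limitation of automotive radar, namely that a stopped vehicle has the same Doppler signature as roadside clutter. A purely illustrative sketch of that filtering logic (the function and thresholds are invented for this post, not Tesla's code):

```python
# Toy illustration of why radar-only cruise control struggles with stopped
# vehicles: returns whose closing speed matches the ego speed are
# indistinguishable from stationary roadside clutter (guardrails, signs),
# so a naive tracker drops them all. Generic sketch, not any vendor's code.

def filter_returns(returns, ego_speed, tol=1.0):
    """Keep only returns that appear to be moving relative to the ground.

    Each return is (range_m, closing_speed_ms, label). A stationary object
    closes at exactly ego_speed -- the same signature as a guardrail."""
    return [r for r in returns if abs(r[1] - ego_speed) > tol]

ego = 35.8  # ~80 mph in m/s
returns = [
    (120.0, 35.8, "guardrail"),          # stationary clutter
    (80.0, 35.8, "stopped firetruck"),   # stationary hazard -- same signature!
    (60.0, 10.0, "lead car"),            # moving traffic ahead
]
tracked = filter_returns(returns, ego)
print([r[2] for r in tracked])  # only the moving lead car survives
```

The stopped firetruck is discarded along with the guardrail, which is exactly the cut-out scenario the manual warns about: once the moving lead car changes lanes, nothing trackable remains in front.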


3. Disagree: no such mode

Yes, that's more sarcasm, I'm afraid, to accentuate the fact that Tesla has effectively declared, by its now 3 year-old refusal to fix the problem, that Firetruck Super-Destruction mode "Is not a bug, it's a feature!". Neither chiding the bereaved/maimed with Tweets of "Yeah, you're holding it wrong!" nor slathering them with a mess of unverifiable statistics about how tremendously safe AP actually is, is an effective solution.


4. Disagree that reason the "stationary object after occluding vehicle changes lane" issue has not been solved is the lack of lawsuit.

Why then do you think Musk/Tesla has not been supremely motivated to solve [or even much discuss] this pressing embarrassment in the past 3 years?
I for one can't avoid thinking that the lack of any appreciable consequences, legal or in sales, is the major factor. Hence it has been allowed to slide in the hopes that the magic of AI will, at some point in the glowing promised future, cure all ills.


5. Disagree that it is fundamentally unsafe. If it is, cars with zero lane keeping are super-duper-fundamentally unsafe.

Lane-keeping is not really the issue and whattaboutery on the failings of other manufacturers' systems is no consolation either.

The major problem is that Tesla vehicles on Autopilot can fail to slow down or even squeak an audible warning when pile-driving you at 80mph into a stationary traffic jam on the highway. The system is fundamentally unsafe because at >50mph the current radar hardware apparently cannot distinguish between the arse end of an artic stopped in your lane and the iron guardrails along the roadside, so this data is simply ignored whereas processing of the camera data to detect/prevent the hazard is yet to be implemented or made operational. Hence a literal couple of seconds' inattention at precisely the wrong moment can equal an instant death sentence.
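To put rough numbers on "a couple of seconds' inattention" at 80 mph (assuming a dry-road emergency deceleration of about 8 m/s², a textbook figure rather than a measured Tesla value):

```python
# Distance covered while not looking, plus braking distance once the driver
# reacts. Figures are back-of-the-envelope, not from any crash report.

MPH_TO_MS = 0.44704

def blind_distance(speed_mph: float, inattention_s: float) -> float:
    """Meters travelled during the inattention interval."""
    return speed_mph * MPH_TO_MS * inattention_s

def braking_distance(speed_mph: float, decel: float = 8.0) -> float:
    """Meters needed to brake to a stop: v^2 / (2a)."""
    v = speed_mph * MPH_TO_MS
    return v * v / (2 * decel)

v = 80
total = blind_distance(v, 2.0) + braking_distance(v)
print(f"{blind_distance(v, 2.0):.0f} m blind + "
      f"{braking_distance(v):.0f} m braking = {total:.0f} m")
```

Roughly 150 m of road is consumed by a two-second glance away plus a full emergency stop, which is why a stationary obstacle first noticed a few car lengths ahead is unrecoverable.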

How can safe Highway L3 AP (never mind FSD) be supported on this hardware? IMHO it is impossible.

Even ardent Tesla fans have no problem admitting there is a severe disconnect between the CEO's hype and reality:
Elon Promises Safety Upgrades After Model 3 Suffers Severe Crash Using Autopilot | CleanTechnica


6. Disagree that the Atari features are lipstick on a pig.

It is at least a distraction from much more important outstanding work which has not been done in the last 3 years, as outlined above.


7. Disagree: The lawsuit is frivolous. Driver was not paying attention, not following the owner's manual, and speeding by 10-15 MPH. Not the car's fault, not Tesla's fault.

Whereas I agree this driver committed numerous faults, Tesla cannot simply dismiss its responsibility to recognise human nature and eliminate design defects which tend to lull drivers into over-reliance, before these contribute to further unnecessary deaths.

To that end here are a couple of recent scientific studies on the difficulties of maintaining driver engagement while using partially-automated driver assistance systems such as, and including, Tesla's AP:
https://www.researchgate.net/public..._bad_idea_Observations_from_an_on-road_study?
The Challenges of Partially Automated Driving

It arguably becomes at least partially Tesla's fault when they continue to ignore valid empirical evidence that their AP system, when combined with the average human, is in certain circumstances less safe than the same driver operating the vehicle without its alleged "assistance". The time for skating around liability by the skin of Musk's teeth is, I feel, soon coming to an end and that will actually be a good thing for us all.

However, if prudently managed, Tesla would pre-empt this looming development by designing a safer system for the interim and probably prolonged period it will realistically require to reach higher levels of autonomy [e.g. by implementing active driver-focus monitoring via facial analysis, like Cadillac's Super Cruise] and retrofitting that to its current vehicles before being forced to do so by regulators/law.
 
Then the object recognition deep learning implementation is crap. You can clearly see in the YouTube video that it effortlessly detects everything as it should. Funny how it can detect and brake for shadows, which are stationary, but not vehicles, which are easily identifiable.

Yep, there is definitely something very wrong in this software implementation when at 90mph it habitually applies emergency braking for literal immobile shadows on the road surface (i.e. something which cannot be detected by radar or ultrasonic sensors, only camera) but refuses to so much as lift its hoof off the gas when seeing a stationary traffic jam looming up dead ahead at the same speed.
 
Are you sure this video is real, or is it another of their promotional videos (like the FSD video, which was a marketing bit)?

Amazing to me to think that "they have all this, why don't they turn it on"... folks here should apply for the head of engineering job at Tesla, so they can inject their genius at this obvious massive oversight by the team there. Or maybe we can eavesdrop on the exec meetings where they all laugh and say "Yeah, we could turn it on, but let's not just to piss everyone off".

Yeah, that's it.

Likewise, someone else who indicated they couldn't understand some of Tesla's limitations, because they themselves could write the FSD code, because "pathing code is easy" if given all the objects accurately. ha.. yeah, they have been working for years on perfecting A* algo for pathing - lol. It is the object detection itself using only camera images that is extremely difficult, and where the neural nets are involved, and you sorta have to get it right -- mistakes can be costly (like phantom braking in freeway speed traffic).
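For context on the "pathing code is easy" jab: a minimal A* really is only a few dozen lines, which is exactly the point being made — the hard part is producing the obstacle map that feeds it. A generic grid sketch:

```python
# Minimal A* on a grid -- supporting the point that the planning side is the
# comparatively easy part. Perception has to supply the obstacle map.
import heapq

def astar(grid, start, goal):
    """grid: list of strings, '#' = obstacle. Returns path length or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]
    best = {start: 0}
    while open_set:
        _, g, (r, c) = heapq.heappop(open_set)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best.get((nr, nc), float('inf')):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = ["....",
        ".##.",
        "...."]
print(astar(grid, (0, 0), (2, 3)))  # 5
```

Twenty-odd lines routes around obstacles just fine; the unsolved engineering problem is knowing, from camera pixels alone, where the `#` cells are.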

I am guessing they are trying their hardest to be as awesome as they can with the hardware that is practical at this level. The real good stuff (Lidar, etc.) is far more awesome, and simply not practical yet. Fact is, Tesla is leaps and bounds ahead of any other commercially available driving assist option.

I am pretty grateful for all they are doing, they have transformed my commute, and I know I have to keep paying attention to the road.

Are you sure this video is real, or is it another of their promotional videos (like the FSD video, which was a marketing bit)?

The video is from someone buying an AP unit off ebay and running it.

Likewise, someone else who indicated they couldn't understand some of Tesla's limitations, because they themselves could write the FSD code, because "pathing code is easy" if given all the objects accurately. ha.. yeah, they have been working for years on perfecting A* algo for pathing - lol. It is the object detection itself using only camera images that is extremely difficult, and where the neural nets are involved, and you sorta have to get it right -- mistakes can be costly (like phantom braking in freeway speed traffic).

I am guessing they are trying their hardest to be as awesome as they can with the hardware that is practical at this level. The real good stuff (Lidar, etc.) is far more awesome, and simply not practical yet. Fact is, Tesla is leaps and bounds ahead of any other commercially available driving assist option.

I am pretty grateful for all they are doing, they have transformed my commute, and I know I have to keep paying attention to the road.

It's fine to be a total fanboy, but maybe asking some questions wouldn't hurt. Yes I still find it strange that it would ram into stationary objects. It means that they haven't enabled that code yet, maybe it will overreact and make it brake a lot, who knows. But again, why does it brake for shadows, and not a car in front of you which it already can label?
 
That video is pretty cool.

Regarding your question...
What do you consider to be possible answers to that question?
I can only imagine it is:
a) they purposefully have not enabled it to spite us
b) they are incompetent
c) or maybe... this stuff is actually hard and they don't yet have a safe way to do it

The only thing I am a fanboy of is logic, and I am struggling to find what logical point you are actually making. Why do you find the current behavior strange? That would imply you know enough about how to build such a system to make such a judgement. Maybe you do, I'd be interested, because I sure don't.

The video is from someone buying an AP unit off ebay and running it.



It's fine to be a total fanboy, but maybe asking some questions wouldn't hurt. Yes I still find it strange that it would ram into stationary objects. It means that they haven't enabled that code yet, maybe it will overreact and make it brake a lot, who knows. But again, why does it brake for shadows, and not a car in front of you which it already can label?
 
 
The only thing I am a fanboy of is logic, and I am struggling to find what logical point you are actually making. Why do you find the current behavior strange? That would imply you know enough about how to build such a system to make such a judgement. Maybe you do, I'd be interested, because I sure don't.

I have worked on computer vision, yes: detecting objects, storing their signatures in various light settings, and tracking them. It's not exactly trivial, in fact it's very hard, but it's possible using a bunch of different detection methods like SIFT etc. So yeah, I think I know what I'm talking about when I find it strange.
 
A trash bag flies onto the freeway. Your car slams on the brakes. A semi rear-ends you, a 75-car pileup ensues, and 4 people die in the resulting maelstrom.
Tesla is working on it, but it's not reliable enough yet.

This classic has always been an absurd excuse for Tesla taking no effective action in 3 years to eliminate or ameliorate Firetruck Super-Destruction mode, during which period it has contributed to several further fatalities and injuries.

In a trashbag case the driver can simply press the accelerator while uttering a fervent oath, same as in the instances of phantom-braking which still frequently occur, at least up through all of v8.1.

I sincerely doubt AP users would actually complain about this rare additional discomfort if it would translate to any improvement in the handling of the true-but-ignored-positive case carrying an enhanced likelihood of wreckage.

We just have to make so bold as to think that the time for cover-stories and lame excuses is over, that Tesla customers should not continue to be carelessly wasted like so many guinea-pigs to avoid embarrassing those who somehow feel no shame for a shoddily cobbled-together and inherently dangerous product.
 
Paragraphs 1 and 2 disagree with each other:

You claim driver interaction is sufficient to mitigate phantom braking side effects:

In a trashbag case the driver can simply press the accelerator while uttering a fervent oath, same as in the instances of phantom-braking which still frequently occur, at least up through all of v8.1.

Yet, it is Tesla's fault people hit stopped objects?

This classic has always been an absurd excuse for Tesla taking no effective action in 3 years to eliminate or ameliorate Firetruck Super-Destruction mode, during which period it has contributed to several further fatalities and injuries.

What happened to, borrowing your own words, "In a stopped vehicle case the driver can simply press the brake while uttering a fervent oath".

Disagree paragraph 3
People do complain about phantom braking.

Disagree paragraph 4:
Driver is responsible for their safety and the safety of others. Tesla does more to protect the driver from themself and others than most other modern cars and pretty much all previous cars. The fact that there are still improvements to be made does not change that.

Further, you have no data to support that they are not working to improve stopped vehicle detection (or that previous versions have). With the new safety report, we should get more clarity on how many events it does detect. And you continue with your acknowledged made-up mode name, in bold, no less.
Yes, that's more sarcasm
 
Paragraphs 1 and 2 disagree with each other:
You claim driver interaction is sufficient to mitigate phantom braking side effects:
Yet, it is Tesla's fault people hit stopped objects?


I honestly don't see the discrepancy.

In the first case the driver is rudely alarmed without cause and feels compelled to hit the gas to mitigate being gratuitously rear-ended, which, while neither sufficient nor satisfactory, is generally a minimal risk, not a life-in-your-hands experience.

By contrast, in the latter case, yes, the ensemble of Tesla's AP system contributes significantly [in my estimation ~30% currently] to the risk of a step-aside stopped object collision at high speed. Naturally if the driver is attentive and expecting treacherous behaviour he will intervene immediately by hopping on the brake, as in my own encounter, and no harm results. However, due to some well-known vagaries of human nature, this self-salvation can all too easily fail to materialise. For instance, there is no reason to suppose that Walter Huang was anything other than an average AP user with everything to live for who just struck upon extreme misfortune in the timing of briefly taking his eyes off the road ahead one morning and was instantly taxed with the ultimate penalty. I certainly imagine him to have been much less of a sinner in this respect than Florida's Mr Hudson.

Anyhow the larger point which you seem to have missed is that I'm saying it would be a worthwhile tradeoff to suffer the relatively trivial irritation of more frequent phantom braking if it would reduce the treacherous risk of cruising at full speed with no warning of any kind into massive stopped objects, under the circumstance where one is being naturally lulled into relaxing attention from the driving task. The paradox is that for some weird reason no "phantom" braking occurs the only time it is in fact needed, i.e. before piling into the rear of stationary traffic at high speed, although AFAICT the same camera system has primacy in both cases.
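The tradeoff being argued for here is the classic detection-threshold dilemma: fewer missed stationary hazards at the cost of more phantom-braking events. A toy illustration, with entirely invented confidence scores standing in for whatever the real system computes:

```python
# Lowering the confidence threshold for braking catches more real stopped
# vehicles (fewer misses) at the price of more phantom-braking events.
# All scores below are made up purely to show the shape of the tradeoff.

def rates(threshold, hazard_scores, clutter_scores):
    """Fraction of real hazards braked for, and of clutter falsely braked for."""
    tp = sum(s >= threshold for s in hazard_scores) / len(hazard_scores)
    fp = sum(s >= threshold for s in clutter_scores) / len(clutter_scores)
    return tp, fp

hazards = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]   # real stopped vehicles
clutter = [0.5, 0.4, 0.3, 0.3, 0.2, 0.1]   # shadows, bridges, signs

for t in (0.85, 0.55, 0.35):
    tp, fp = rates(t, hazards, clutter)
    print(f"threshold {t}: brakes for {tp:.0%} of hazards, {fp:.0%} phantom rate")
```

Dropping the threshold from 0.85 to 0.35 takes hazard coverage from a small fraction to all of them, while the phantom-braking rate climbs from zero to a third of clutter sightings, which is the tradeoff the post argues would be worth accepting.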

IMHO if Tesla persists in its current course it is only a question of short time until a jury will agree with me on some percentage of contributory negligence in such cases, meaning the company already faces a severe liability it should be urgently acting to eliminate, which entails a little more work than just to pray 'n' spray another mess of worthless statistics over the next unnecessary scene of carnage.

For example, an infrared driver attentiveness sensor system atop the steering column, as in the Cadillac CT6, could be implemented and retrofitted into the existing AP fleet fairly economically, positively transforming Tesla's reputation for AP safety, compared to the tremendous damage liable to be otherwise incurred. Even if truly giving nary a *sugar* about the clients' safety, this should be a pragmatic business decision which will pay for itself several-fold.


Disagree paragraph 3
People do complain about phantom braking.


On this we agree entirely: my point there has already been restated above.


Disagree paragraph 4:
Driver is responsible for their safety and the safety of others. Tesla does more to protect the driver from themself and others than most other modern cars and pretty much all previous cars. The fact that there are still improvements to be made does not change that.


Notwithstanding all of which, Tesla is still quite likely to be held responsible for their arguable contributory negligence in allowing the foreseeable severe risk to users to persist when it could easily have been remedied, if not by the above suggested hardware modification then at the very cheapest by flipping a bit to deactivate AP at speeds above 50mph, the point at which the radar sensor becomes unreliable.


Further, you have no data to support that they are not working to improve stopped vehicle detection (or that previous versions have).

I think the fact that the system has not improved in almost 3 years since the first fatality ( Gao Yaning, † 20 January 2016, Handan, Hebei, China ) is a lot of data demonstrating that Tesla has been intensely relaxed about what it was content to consider "someone else's stupid problem", namely the users of its product who were and are being needlessly endangered. Once this fatal phenomenon and a range of simple fixes to avoid it have been brought to Tesla's attention, yet they continue to do nothing, the adverb must become "recklessly" and the potential legal implications that much more severe, i.e. consideration of criminal liability arises.


With the new safety report, we should get more clarity on how many events it does detect.

These Quarterly Reports are of no real help, either for informing us or for reducing Tesla's legal exposure, as the AP accident data released will remain unverifiable and come from a naturally biased source with every interest in twisting it to suit its own purposes. One obvious defect is that Tesla happily conflates all types of accidents as having the same importance, loudly crowing over the alleged 40% overall reduction. This headline could well disguise the nasty possibility that granular analysis of the data would reveal that fatal or severe accidents due to relatively rare step-aside stopped-object collisions at high speeds are actually increased fivefold by the use of AP, while the much more frequent but relatively trivial fender shunts in slow-moving traffic, where the radar sensor actually is worth a damn, are reduced by 90%. Result: an amazing 40% reduction in accidents, but with more fatalities/maimings than without the "assistance" of AP. Now who could sell "advanced" cars by admitting anything even remotely like that?
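The mix-shift worry is easy to demonstrate with arithmetic. All the numbers below are hypothetical, chosen only to reproduce the 90%-reduction / 5x-increase / 40%-headline combination described above:

```python
# Worked example: an overall accident reduction can coexist with an
# increase in the rare-but-severe category. Rates are events per million
# miles, and are invented purely to show the arithmetic.

minor_without, minor_with = 8.8, 0.88    # low-speed shunts: cut 90%
severe_without, severe_with = 1.0, 5.0   # high-speed stopped-object: up 5x

total_without = minor_without + severe_without   # 9.8
total_with = minor_with + severe_with            # 5.88
reduction = 1 - total_with / total_without
print(f"overall accidents down {reduction:.0%}, "
      f"but severe accidents up {severe_with / severe_without:.0f}x")
```

An aggregate headline of "accidents down 40%" is arithmetically consistent with the severe category getting five times worse, which is why granular data, not a single blended rate, would be needed to settle the question either way.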


And you continue with your acknowledged made up mode name, in bold none the less.

Yes, and the more I think about it the more I feel justified in doing exactly that, as I believe AP/FSD is only worth doing if it is done right. Sadly, however, Tesla is presently failing to lead or even keep pace with the competition on user safety, an attitude I strongly care to adjust whether the self-righteous Mr Musk likes it or not.

But regardless, I do sincerely thank you for this [for me at least] stimulating exchange!
 

Phantom braking requires hitting the accelerator immediately due to a totally unforeseen event.
A stopped object requires hitting the brakes due to a visible, foreseeable hazard.
For comparison, consider the following distance on a highway versus the distance an object would be from you if the car immediately in front veered to miss it (the veering itself should be the trigger to take action, so add at least that car length).

In the phantom braking case, the car is taking an action with potentially bad consequences.
In the stopped object case, the car is not taking an action (the default of most cars).
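Putting illustrative numbers on that comparison (a 2-second following gap at 70 mph, 1 s reaction time, 8 m/s² braking; all assumed figures, not from any study):

```python
# "Lead car veers, stopped object revealed" scenario: with a 2-second
# following gap, even an alert driver cannot fully stop in the space
# available -- they can only scrub off speed before impact.

MPH_TO_MS = 0.44704

def impact_speed(speed_mph, gap_s=2.0, reaction_s=1.0, decel=8.0):
    """Speed (m/s) remaining on reaching a hazard revealed at the
    following-gap distance; 0.0 means the car stopped in time."""
    v = speed_mph * MPH_TO_MS
    braking_room = v * gap_s - v * reaction_s   # gap minus reaction travel
    v_sq = v * v - 2 * decel * max(braking_room, 0.0)
    return max(v_sq, 0.0) ** 0.5

v_impact = impact_speed(70)
print(f"impact at ~{v_impact / MPH_TO_MS:.0f} mph")
```

Under these assumptions the driver still hits the obstacle at roughly 49 mph, which supports the point that the veer itself has to be treated as the warning, with every extra car length of early reaction mattering a great deal.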

I get the false sense of confidence/complacency angle, but that has been true since the days of cruise control.

Tesla's data compared their trigger level, "accidents as well as near misses", to the NHTSA stat for "accidents". Even if all the Tesla data points were accidents, they were doing better than the general population. Regarding severity, they are working on getting that data also:
Given the degree to which accidents can vary in severity and circumstance, we’ve started an additional initiative to create a more complete picture of safety by gathering serious injury data from our customers following an accident. While we have long maintained the practice of calling our customers whenever our system detects a crash in order to see whether they need emergency assistance, we now also use these calls to understand if they sustained an injury in the crash, and if they have feedback on our current safety system. This will help us continue to improve our system and understand the rate of serious injuries over time.

An IIHS study appears to show AP did not increase the personal injury rate and did decrease collision rates:
In this limited analysis, HLDI found that the frequency of claims filed under PDL, BI, MedPay and PIP didn't change once Autopilot was enabled, but the frequency of collision claims fell by 13 percent.
so the "fewer accidents but more severe" narrative may not be accurate.
 
to OPRCE: You clearly put a lot of effort into writing your lengthy posts, which are well constructed in some ways. Some problems, however, are much more difficult than they seem. For example, some problems that may seem relatively simple turn out to be "NP-hard". It seems highly unlikely that the persistence of this problem is due to anything other than a surprisingly profound difficulty in solving it and implementing the solution. To my knowledge, no one, not Waymo or Google or Cadillac, has solved this problem. I think we can assume that Tesla has far more data than anyone on this and is highly motivated.
 
Merriam-Webster defines "common sense" as "sound and prudent judgment based on a simple perception of the situation or facts." So, yes, I would argue that this is a desirable quality in a juror.
I would prefer a juror have more than a simple perception of the situation or facts.

Edit: It is probably true that I will frequently be disappointed in this regard.
 

Yep, there is definitely something very wrong in this software implementation when at 90mph it habitually applies emergency braking for literal immobile shadows on the road surface (i.e. something which cannot be detected by radar or ultrasonic sensors, only camera) but refuses to so much as lift its hoof off the gas when seeing a stationary traffic jam looming up dead ahead at the same speed.

So when you say "habitually", you have access to fleet-wide stats?
