
Accident while on EAP...

More complete context...

"We replicated that scenario for AEB testing, with a lead vehicle making a late lane change as it closed in on the parked balloon car. No car in our test could avoid a collision beyond 30 mph, and as we neared that upper limit, the Tesla and the Subaru provided no warning or braking."

"NHTSA's stationary-vehicle AEB test is performed at a single speed, 25 mph, and it only requires that the vehicle scrub off 9.8 mph before impact. In our testing, all the cars easily cleared that low bar, but one model stood far above the rest. The Subaru Impreza—the least expensive car of the four, with a stereo-camera system that eschews the usual radar sensor—still prevented a collision at 45 mph, a higher speed than any other car here, before it nosed into the stationary inflatable target."

"In our stationary-vehicle test, the Impreza's first run at 50 mph resulted in the hardest hit of the day, punting the inflatable target at 30 mph. It was only on the second attempt that the Subaru's EyeSight system impressively trimmed the speed to just 12 mph before the collision."

Eh, their graphic paints a slightly different picture.

[Graphic: automated emergency braking systems test 1, "closing in"]
 
It's always great practice when preserving data integrity to throw out the outliers! Too bad it doesn't work that way in real life when you pile into someone at 30mph. Probably made the graphic designer's job easier.

Wasn't clear from the article if they tested the other vehicles at 50mph.

EDIT: Sounds like they didn't? "The balloon car is firm enough that we only ran our test cases until we found the speed at which an impact was unavoidable"

Still, 30mph is 2.8x better than a 50mph collision, roughly speaking. 12mph would be better though.
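For what it's worth, the ~2.8x figure comes from impact energy scaling with the square of speed; a quick back-of-the-envelope check (plain Python, nothing Tesla-specific):

    # Kinetic energy scales with the square of speed, so the relative severity
    # of two impact speeds is roughly (v1 / v2)^2.
    def impact_energy_ratio(v_high_mph, v_low_mph):
        return (v_high_mph / v_low_mph) ** 2

    print(impact_energy_ratio(50, 30))  # ~2.78 -> the "2.8x" figure
    print(impact_energy_ratio(50, 12))  # ~17.4 -> why trimming to 12 mph is far better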

EDIT: One odd thing is that in the target-switching test, they clearly show in the associated video the Tesla Model S braking but still hitting the target (for the 25mph initial speed? No idea), yet this event is not logged in their graphics (they show a complete failure at 25mph and a complete success at 20mph). Another odd thing is that in their fourth test graphic (approaching slower traffic), the Subaru data is not consistent with their description of the results. Basically, their fourth graphic is all messed up and not correctly copy-edited (the annotation on collision speed is inconsistent with graphics 1 and 2). See next post.
 
As far as I know, Tesla has never paid out in such situations, nor ever "admitted fault" - out of the lawyerly fear that it would set an un-affordable precedent that could wipe out the company. And The Answer to what we are all asking here, namely "what exactly happened" in this and in other accidents, will, for years to come, likely be hidden behind the "proprietary" and "beta" pleas. I'd be pleasantly surprised if Tesla told us, unless it was to call it pilot error. Would it help to know for sure?

Now, I use my Model 3 in NavAP and AP a lot, on freeways, highways, and city boulevards. Sure, I take a reasonably defensive approach. But honestly, how I feel doing it depends mostly on a rising or falling wave of trust or faith. Some days I happily entrust my life to it. On other days, I'm nervous, as I consider the full consequences of error. And to remind us, we have the two recent Boeing crashes.

As a software and hardware developer, I know how buggy all code is, and how it's always pushed out before it's fully tested. Because it's impossible. There isn't enough time to test every possible combination of events. So I know it will fail spectacularly on occasion. But I'm also very impressed by how well it works most of the time. So I'm torn. And I know that MY own driving is pretty fair but will fail big time now and then. I'm not sure if "rather better than a human" is comforting.

This thread is extremely interesting in revealing, between the lines, how emotionally difficult this whole robot car business is. Some will unequivocally blame the system. Others the operator. But we're not really sure. Anybody here have similar thoughts?
 
Upon exiting, Navigate on Autopilot will decrease the speed down to zero miles per hour if there's no intervention from the driver.

NoA does give some audio cues as it exits the freeway, so that might help wake up the driver.

The driver then has to press the accelerator in order to switch to Autopilot.

Speed/brake-wise, NoA does not help in this case because the car was still in the middle of the ramp and not yet at the GPS-designated zero-mph location (the end of the ramp).
That is incorrect. I have been using NoA for a while now, and the car does take the exit lane at exit speed and automatically picks up the next highway's speed as it merges onto it. But if the other road is a local street, it gradually slows down until we react and set the speed again. Even in that case AP stays active and monitors the cars ahead, slowing down if the car in front slows down.

So something definitely went wrong here. Also, at the very least AEB should have kicked in, and since the speed was not too high, AEB should have stopped the car.

The only exception is if the driver's foot was slightly pressing the accelerator pedal.
 
...hidden behind the "proprietary" and "beta" pleas...

There's nothing to hide:

Why Tesla's Autopilot Can't See a Stopped Firetruck

The flawed system here is radar, which has been around since 1934, roughly the past 85 years.

In each of those 85 years, researchers have hoped that success would be within reach soon.

That's why some impatient researchers thought: OK, "soon" is a good thing, but in the meantime we need something while we wait.

That something is LIDAR, to supplement the flawed radar.

Tesla just doesn't believe in LIDAR because it expects to have a better Tesla Vision soon.

So, Tesla's beta is really a beta, and there's no hiding that fact. The Owner's Guide repeatedly warns not to rely on it.

That's because a more perfected Tesla Vision is not here yet, and neither is LIDAR in a Tesla!
 
There is an event data recorder (EDR, or black box) in the Model 3. If the airbags didn't deploy, I'm not sure it would have triggered. You would think that it should. Tesla would know for sure. If it did, it would store the state of the car for the last 10 seconds (maybe a bit longer) before the accident.
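To illustrate what that kind of recorder does (purely a toy sketch; the field names, sample rate and trigger are invented, not Tesla's actual EDR format), it's essentially a ring buffer of recent vehicle state that gets frozen when a trigger such as an airbag deployment fires:

    from collections import deque

    # Toy event data recorder: keep ~10 s of samples in a ring buffer and
    # freeze a copy when a trigger fires. Fields and rates are made up for
    # illustration and are not Tesla's format.
    SAMPLE_HZ = 10
    WINDOW_SECONDS = 10

    class ToyEDR:
        def __init__(self):
            self.buffer = deque(maxlen=SAMPLE_HZ * WINDOW_SECONDS)
            self.frozen = None

        def sample(self, speed_mph, accel_pedal_pct, brake_applied, ap_engaged):
            self.buffer.append({
                "speed_mph": speed_mph,
                "accel_pedal_pct": accel_pedal_pct,
                "brake_applied": brake_applied,
                "ap_engaged": ap_engaged,
            })

        def trigger(self):
            # Snapshot the last WINDOW_SECONDS of state at the moment of the event.
            self.frozen = list(self.buffer)
            return self.frozen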

Tesla has a webpage about how to get data from the EDR:
Event Data Recorder

My wife and I are dealing with a front-end collision as well. It happened at the end of January, and I wasn't in the car at the time. No injuries, thankfully, and no airbag deployment. The car was delivered to a Tesla authorized body shop about 7 weeks ago. I would like to know what the EDR says, if there is anything on it. It sounds like it would be helpful to the OP as well.

Tesla will only pull this data for you if they are requested to do so through legal channels. Otherwise, find someone with the cables and follow the instructions on that website.
 
The driver may have overridden AEB or Autopilot with the accelerator. But no one will know until the data is pulled from Tesla.

I think what Tesla needs is an "Autopilot Disagrees" indicator or chime to let the driver know that the system is being overridden by the driver's actions.
 
Guys - we've been talking about inadequacies of AEB for years over on the Model S forums.

1) Autopilot DOES NOT DISENGAGE ITSELF, EVER. If the driver ignores the nags, it will either a) bring the car to a complete stop with the flashers on (you can see this on YouTube in 1,000 videos) or b) run forever if it thinks your hand is on the wheel. Autopilot NEVER disengages itself, which is the inherent difference between it and other systems in other cars.
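To put that behavior in pseudo-logic form (my own paraphrase of what's described above, with invented timings, not anything from Tesla's code):

    # Paraphrase of the behavior described above: Autopilot keeps driving as
    # long as it senses your hands; if nags go unanswered it escalates and
    # eventually brings the car to a stop with hazards on. It never simply
    # hands back control on its own. All thresholds are invented.
    def autopilot_step(hands_detected: bool, seconds_ignoring_nags: float) -> str:
        if hands_detected:
            return "keep_driving"               # (b) runs indefinitely with hands sensed
        if seconds_ignoring_nags < 15:
            return "visual_nag"
        if seconds_ignoring_nags < 30:
            return "audible_nag"
        return "slow_to_stop_with_hazards"      # (a) the only way it ends on its own

    print(autopilot_step(True, 0))    # keep_driving
    print(autopilot_step(False, 45))  # slow_to_stop_with_hazards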

Only one thing could have happened - autopilot was on when he hit the car and Tesla Vision failed to see the vehicle. There are no guarantees with AEB.

There is one exception to this, which I encountered this week on 2019.8.3, and that is if and when the Autopilot system either crashes or there is a sudden reduction in radar visibility. I've posted about this in another thread this week. Had my hands not been on the wheel while the car was taking its curve, I would likely have had a collision. Granted, this is a rare condition, but it has now happened while I was driving. There was no safe disengagement; it occurred suddenly. The Autopilot computer, TACC and Autosteer were not available for 10 minutes after the event, during which time the display showed no road conditions, vehicles or pedestrians around the car. It was as if I was alone on an empty road with no lane markers, cars or people around me. This made me really appreciate why we are still at Level 2 autonomy, and drew parallels to what can happen when there are non-redundant systems (single points of failure) like a single Autopilot computer or radar.
 
You're telling me that while the driver was asleep, he PERFECTLY pressed the accelerator to keep the vehicle at EXACTLY the same speed he was travelling at before? That seems highly improbable.

I am well aware of the "Accelerator pressed, cruise will not brake" warning. I use it all the time when getting onto a highway.

My concern/issue with you is that you are making statements based on no factual evidence. "So the driver had the pedal pressed enough to sustain ~30mph": you don't know that. None of us do.

My statements are based on the evidence provided thus far. You may have a theory, but please then say "it's my belief that X, Y, Z happened" rather than saying "the driver did this".

Nope, it doesn't have to be exact. Pedal input maps to a power command, not a set MPH, so barely touching the pedal would be enough to disable braking and regen, and the car would still coast right in at the same speed. There's actually a wide range of pedal positions that would reproduce the scenario in the video. Again, if you want proof, try it in your own car; that's all the evidence you need. If the driver only had it pressed 1 cm, TACC would still speed up over that amount no problem, and even follow cars moving at speed while adjusting well above that 1 cm range, but it would not have the ability to stop. 3 cm would do the same. If I were to guess, anything in the 0.1-5 cm range could do it. It doesn't even have to be close to perfect; just go try it, it's not a theory. Next time you're on an on-ramp, set your TACC to 60+ mph with the accelerator barely pressed and watch what happens. Then move it in the 1-5 cm range once you're at speed. You are only overriding the speed floor input.

Saying "AP failed" is a theory with no evidence. AEB was overridden as well, which happens whenever the accelerator is pressed. AEB is an independent system from TACC/AP, and saying they both happened to fail at once just isn't plausible when we know the driver was asleep and we know how the car behaves with any amount of pedal deflection while AP is on.
 
I have personally experienced multiple occasions where AP has wanted to rear-end the vehicle in front of me, sometimes even accelerating to do so (without me touching the accelerator).

Also, I don't recall saying "AP failed", so please don't quote me on something I didn't say. I did say "From the evidence I can see at this time, this looks like a failure of the AP system", but I did not make a definitive statement.
 
I don't think the glare had anything to do with the frontal collision, because it was not in front.

It might be a good time to install a dash cam with audio as well, because it can capture audio cues.

In the past, AP has not been very good when approaching an obstacle that's off to one side within the lane rather than in the very center, even at very slow speeds.

Your scenario also adds that it's a curved road, and the radar might have been trained for a straight path.

It's an imperfect system, and hopefully it will be able to deal with these scenarios some day.

EAP sucks around overpasses. I'm pretty sure it ignores radar data as false positives or something. There have been many accidents around overpasses while driving on AP.
 
Just to clarify in advance, this accident is completely my fault, as I dozed off behind the wheel after a long day at work and very little sleep the night before, and it's my responsibility to maintain control of the car at all times. But I am concerned that the car didn't stop in this situation as advertised in the EAP features.

I was on a single-lane highway interchange in relatively high traffic, so speeds were slow; I'm guessing we were below 30 MPH since the airbags didn't deploy. In the attached videos, you can see that my Model 3 didn't veer in either direction to avoid the accident, and it obviously didn't slow down. I think the glare from the sun might have played a role in not recognizing the brake lights of the Forester in front of me, but I would think that the radar sensors would have picked up the rapidly closing distance between the car in front of me and my car.

I emailed Tesla, but haven't received a response yet. I am planning on taking the car in on Monday to get it repaired and I'm hoping they can download the data to see what might have happened.

My intention in sharing is that many of you will be more attentive when using EAP and don't be a dummy like me and fall asleep behind the wheel.

IMG_6296 - Streamable
IMG_6297 - Streamable
IMG_6295 - Streamable
Thank you.

I learned the hard way too, around 1992. Luckily, I hit another car going at almost identical velocity, so it was a light bump with almost no damage. Thank God, because the freeway was almost empty, that was the only other car on the freeway, and if I hadn't hit it, the next thing on the freeway was a couple-hundred-foot drop off the edge (this was before guardrails were installed on I-280). Basically, my read on how tired I was turned out to be very wrong.

Ever since that moment, I decided to interpret how tired I am much more conservatively, with much more paranoia. Now, at my current age, as soon as I feel the least bit tired, I immediately start looking for the next exit. If I can't find an exit, I'll look for a safe place to pull as far off the road as possible. I almost always find an exit, but there have been one or two exceptions in the last decade. I will immediately set my iPhone for a 20-minute power nap by pushing the Siri button and saying "Siri, set alarm for 20 minutes." Then I will put the car into sleeping mode and start to rest. If I have coffee available, I do a full pre-coffee power nap: I drink the coffee right before the nap, so when the power nap is done, the coffee starts taking effect. Then I will walk around outside the car for a bit and replan my trip.

I compare these two things:

- Failing to make it on time to whatever I was planning to go to.
- Dying.

I always choose failing to make it on time to whatever I was planning to go to. I don't even consider an option of "try to play with fate and get there on time". It just isn't worth it.

Thank you for your experience, too. It's obviously very serious. Thank God you are OK!

When available, I prefer Hampton Inn, although I had a nice stay at a Home2 Suites recently. That is how I solve being tired about 15% of the time, and it is very nice. I usually end up just planning to not put myself in a situation where I have long days. Sometimes that means staying home. When I have both time and money available, it means racking up some more HHonors points, which I am always happy to do.

The worst is being tired and hitting I-880 during rush hour. I end up pulling off into Newark or Fremont and looking for someplace non-ghetto to pull over. There really aren't good nap hotels around there. It's best to just not be around I-880 in the first place.
 
Never really understood this. Insurance is there for when we screw up. If I cause an accident I’d rather admit fault and have my rates go up than weasel out of it and potentially have the innocent party see increased rates.

I have a story about this and why it's in your (and obviously the insurance company's) best interest not to assume the fault is yours. Now, there are going to be cases where there's no doubt at ALL that it's your fault, blah blah blah. But seriously, it's best to not say anything and let insurance handle it. Now, on to the story!

Early one morning, I was driving to work and got behind an SUV at a stoplight. The road we were stopped for is one of two that run alongside the freeway: there's the westbound one-way road, then the overpass, then the eastbound one-way road. We were stopped before the westbound one-way road. The eastbound traffic light turned green (note: ours was still red), and the SUV took off and got t-boned by a truck. We all got out, made sure everyone was OK, got back in our vehicles, and pulled over to wait for the police. When the police got there and took my statement, the driver of the truck was shocked to learn that he was not at fault. You see, when he looked up after hitting the SUV, all he saw was a red light, and he assumed he had run it. He had even admitted fault to the police before the cop got to me.

Hopefully that wasn't too confusing; I'm a horrible storyteller. But yeah, don't admit fault. You might not actually have been at fault; let the lawyers do their thing and protect you (and the insurance company, as this is NOT altruistic of them).
 
There is one exception to this, which I encountered this week on 2019.8.3, and that is if and when the Autopilot system either crashes or there is a sudden reduction in radar visibility... There was no safe disengagement; it occurred suddenly. The Autopilot computer, TACC and Autosteer were not available for 10 minutes after the event...

Are you saying you didn't get the red hands and urgent beeping of the typical "take over immediately" prompt that time, or suggesting the OP slept through that?

The system failure/radar visibility events are abrupt, but I can't imagine anyone sleeping through the alerts that have always accompanied it in my experience.
 
For those of you who may be unfamiliar and are reading this, there are two independent systems in play here: the first is the Autopilot (computer vision) system, aka HW2.5 on my Model 3; the second operates more basic functions like climate control, entertainment and the sensor display (the MCU). In my case, it was the computer vision (Autopilot) system that failed abruptly. It was the first time I've seen this occur in six months of driving, whereas I've had many failures of the MCU.

To answer Saghost's question: fortunately, the MCU monitored the performance of the Autopilot computer and did warn me to take over immediately, but it was not a graceful disengagement of Autopilot like you might encounter if the system stops receiving input from its driver. Because it required immediate reactive steering to keep the vehicle in the lane (it was navigating a curve at the time and the steering wheel began returning to its neutral position), it certainly got my attention, perhaps more than the audible and visual warnings on the display.

Architecturally, I'm very curious to know where Tesla is going concerning redundancy of its systems. In aviation, we often have triple redundancy of critical systems. In the work NASA did with the Space Shuttle, they added two additional layers of redundancy, for a total of five.

As long as we as drivers are responsible for taking over in the event of a single Autopilot failure, we will continue to be at the lower levels of driving automation. Adding secondary computer vision systems (including redundant hardware and software) and additional sensors is likely needed here, especially if the goal is to improve well beyond the safety of human drivers.
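As a side note on why that redundancy matters, here's a generic majority-vote sketch of the pattern aviation and the Shuttle used (purely illustrative, not a description of anything Tesla has built): with three or more independent sources you can out-vote a single failed one, which is exactly what a single camera/radar/computer chain cannot do:

    from collections import Counter

    # Generic N-way redundancy with majority voting; purely illustrative.
    def majority_vote(readings):
        """Return the value reported by a strict majority of sources, else None."""
        value, count = Counter(readings).most_common(1)[0]
        return value if count > len(readings) / 2 else None

    print(majority_vote(["clear", "clear", "obstacle"]))   # 'clear' -> one bad sensor is out-voted
    print(majority_vote(["clear", "obstacle", "unknown"])) # None -> no agreement, fail safe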

We are just at the onset of building automation around computer vision models, and one of the most difficult challenges we face is making a driving decision when there is limited or conflicting data. It is a truly fascinating field, and anyone interested in learning more about its evolution may want to watch the Stanford University CS231n Winter 2016 course, Convolutional Neural Networks for Visual Recognition, taught by Andrej Karpathy (now head of AI and Autopilot at Tesla), on YouTube.

Syllabus: Syllabus | CS 231N
 
It'll be an interesting subject to sort out, certainly. There are quite a few opportunities to create redundancy in sensing and path management within the current hardware architecture, I think.

The triple camera that handles the most important parts is somewhat redundant - you can probably do a decent job without any one of those cameras by relying on the other two, though without the telephoto you'll have limited range, and there are small areas that the normal and B-pillar combination will still miss without the fisheye.

For path management, HD mapping and precision GPS as a second source is an obvious choice, using either maps developed by the cars themselves as they drive routes or maps purchased from another source (self-built maps have advantages in terms of updates for construction and the like).

One thing Tesla told us about a while back that we haven't heard much about recently is radar mapping. In an effort to further protect against the stopped-car problem, they were planning to have the cars identify suspicious stationary returns like overhead signs, with locations and intensity and as much other data as the car could figure out, and then upload them to the mothership to get added to a whitelist, with the plan of having the car download tiles and ignore whitelisted returns.

If that project is still going, and if they have enough objects and resolution, it could also become an additional reference source: I see that object, so I must be right here.
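To sketch how that whitelist lookup might work in principle (my own guess at the shape of it, with made-up tile sizes and match radius, not Tesla's implementation): the world is carved into tiles, known stationary returns are stored per tile, and a new stationary return is only escalated if it doesn't match one on the list:

    import math

    # Illustrative-only sketch of a radar "whitelist" lookup: known stationary
    # returns (overhead signs, bridges) are stored per map tile, and a fresh
    # stationary return is ignored if it closely matches a whitelisted one.
    # Tile size, match radius and data layout are assumptions, not Tesla's.
    TILE_DEG = 0.01          # roughly 1 km tiles (assumed)
    MATCH_RADIUS_M = 15.0    # how close a return must be to a known object (assumed)

    def tile_key(lat, lon):
        return (round(lat / TILE_DEG), round(lon / TILE_DEG))

    def distance_m(lat1, lon1, lat2, lon2):
        # Small-distance flat-earth approximation; good enough for a sketch.
        dlat = (lat2 - lat1) * 111_000
        dlon = (lon2 - lon1) * 111_000 * math.cos(math.radians(lat1))
        return math.hypot(dlat, dlon)

    def should_ignore(return_lat, return_lon, whitelist):
        """whitelist maps tile_key -> list of (lat, lon) known stationary returns."""
        for lat, lon in whitelist.get(tile_key(return_lat, return_lon), []):
            if distance_m(return_lat, return_lon, lat, lon) <= MATCH_RADIUS_M:
                return True   # matches a known overhead sign/bridge -> ignore
        return False          # unknown stationary return -> treat as a potential obstacle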

Redundancy in processing and neural networks may be the hardest part for them. I've read they are using multiple processor cores, so far on a single common board, which seems to leave a lot of single-point-of-failure opportunities that could take the whole board down.

I'm not sure if there's redundancy in communication between the sensors and the processing yet, either.
 
A couple of thoughts come to mind as I read your response. The first is with regard to path management. Even in static environments, where there is no construction or damage to the roadway (which could be accidental, environmental, or terror-related), we'd still need some sort of overriding dynamic process in play to keep things safe, one that would not rely solely on mapping or positioning data; it would make sense to use that data as a potential fallback, to the degree possible, in the event the car's vision system were to fail. In one example I can think of, based on location and mapping data we know a stop light should exist, yet we don't see it due to occlusion or line-of-sight blockage. Here the car might slow down and refuse to enter the intersection until it receives positive feedback from the driver. I have this issue on an almost regular basis where I live in NYC, with traffic signals obstructed by tall trucks ahead of my car. Building vision models around hand signals from traffic agents and from drivers of vehicles and bicycles is a whole other matter, but you can see how those could conflict with what the signals display.
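That occluded-signal case can be written down as a simple precedence rule (a hypothetical sketch of the decision only, not any shipping vendor behavior): the map says a signal should be there, vision can't confirm its state, so the car slows and holds until the driver positively confirms:

    # Hypothetical decision sketch for the occluded-traffic-light case described
    # above; the states and rule are illustrative only, not any vendor's behavior.
    def intersection_action(map_expects_signal: bool,
                            signal_state: str,
                            driver_confirmed: bool) -> str:
        """signal_state is 'green', 'red', 'yellow' or 'unknown' (occluded / not seen)."""
        if not map_expects_signal:
            return "proceed_with_normal_rules"
        if signal_state == "green":
            return "proceed"
        if signal_state in ("red", "yellow"):
            return "stop"
        # The map says a light should exist but vision can't see it: slow down
        # and hold at the intersection until the driver positively confirms.
        return "proceed" if driver_confirmed else "slow_and_hold_for_driver"

    print(intersection_action(True, "unknown", False))  # slow_and_hold_for_driver
    print(intersection_action(True, "unknown", True))   # proceed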

I'll never forget two incidents that occurred near to where I lived at the time - the first was the collapse of a segment of the San Francisco-Oakland Bay Bridge following the 1989 Loma Prieta earthquake, and the second was the lifting of a drawbridge here on the east coast. In both cases, drivers continued along their path without knowledge of the danger they were about to face and plunged to their death. As autonomous automobiles develop, I would hope that we'd also work on autonomous safety systems in roadways, overpasses, tunnels and bridges that could work in tandem with one another and could signal a change in traffic flow. Eventually, linking smart-city data with smart-cars will happen and if done well would improve efficiencies in addition to safety.

The idea of mapping radar data would undoubtedly be useful, but I understand there are limitations in terms of adequately judging height, and in some cases, there can be radio interference and phantom object detection (for example, around certain airports).

In terms of redundancy, there have been some leaked part designs which hint at multiple cores (and if I recall, there was discussion around an A side and a B side that could potentially run independently of one another), as well as multiple sensor inputs from radar.

I spent some time listening to a speech given by the CEO of Mobileye and watched the demo released at CES 2019. It struck me just how complex the problem of navigating city environments is. I watched in awe as their car waited and eventually entered an intersection while correctly identifying pedestrians and vehicles. It also took advantage of learning from the behavior of lead cars when navigating around complex situations (including leaving the lane markers when appropriate), such as stopped trucks.

Once we solve the science of it all, we'll still be faced with ethical questions, such as who gets preferential treatment during an accident and on what basis those decisions are made. I suspect this will vary by locale, as these ethical questions are answered very differently in Asia and in the West.
 
In general, I think the folks who talk about ethical decision-making for self-driving cars are greatly overstating the relevance of the discussion. In nearly all cases, if you can see the accident coming far enough ahead to be choosing between different accident options, you can avoid it, and most unavoidable accidents won't leave much of a choice beyond scrubbing off energy to make the impact less severe.
 
;)

Playing devil's advocate here for a moment, there were one or two occasions where I had to apply more energy to avoid an accident. The one I remember most clearly was shortly after I learned to drive my Audi S4 (manual). There was a pickup truck that began drifting out of the left lane - I became aware of it because of the sound of his tires on the pitted surface of the roadway. I prepared for an accident by turning down the music system and becoming more alert. The next thing I knew, his left rear tire blew, and it caused him to spin sideways down the freeway. His rate of deceleration was very high, and he also began heading towards my lane. I had only seconds to downshift and accelerate to get in between him and the retaining wall on my right. As soon as I passed him, he entered my lane, and miraculously the truck righted itself without hitting anything.

The only thing I've observed so far is that every challenge we've faced has been considerably more difficult to solve than we originally estimated. Whether this applies to the ethics or just to the technology remains to be seen.