
Ethics of Autonomous Driving

...
or six blind cancer patients stitched together into a human centipede formation, so be it.

Truly the blind leading the blind right there, yeah.

One thing’s for sure. The lawyers are gonna get rich.

Insurance rates will be interesting to watch.

On the one hand, there’s the potential for fewer accidents.

On the other, the potential for a rogue Johnny Cab (name the movie) to t-bone a bus full of hookers and beer. Who sues and for what?

I was going to say a bus full of half-nekkid nuns and a wayward ruminant, but that might be considered unethical. Or at least antithetical.

Just wait until Hyperloopian Tunnels proliferate and one of those goes sideways at 500mph somewhere in the world. Splat.

But progress we must, and progress we will.
 
This article shares some thoughts: what are yours as someone who actually drives a Tesla?

I think Mercedes-Benz probably got it right when, some time ago, they decided to have their AI maximize the survivability of the people in the car it is driving. When everybody in the market does it this way, it will balance itself out overall and have one more important benefit: it will not lead to people trying to bypass or disable the automation in order to maximize their own safety.

If you know that you are buying and operating a system that does not have your best interest at heart (but would, say, prioritize minimum loss of life in general), you will have an incentive to try to avoid such technologies, which will only hamper safe progress overall. This is the same reason why communism doesn't work.
 
I say let the car hit the vehicle with the uglier passengers (research shows facial symmetry is a factor in beauty). Have the car determine who looks more facially symmetric and hit the school bus with the kids with the crooked eyebrows. The ethical algorithm is simple, and we have preserved greater beauty in the world.

FINALLY!

A reason for the internal camera in the 3. ;)

I'll have to start working on my imminent-crash-face so that the car doesn't make a last minute decision to sacrifice me for the greater good.
 
I favor the utilitarian approach.

All these people discussing AI morality _say_ that it's complicated because we need to figure out what to prioritize.

But simply having lots of autonomous cars drive at appropriate speeds, use appropriate spacing, signal properly, stay in lane and react quickly would dramatically decrease collisions, injuries and the severity of injury. So the starting point for moral consideration would be a society with many fewer injuries and deaths in traffic.

The worry-warts say that the utilitarian approach will require the AI to prioritize, but I disagree in this case: the utilitarian approach here is to recognize that certain collisions would be much less common and that behaving simply and predictably would have a positive effect. That is:
if hitting the brakes will avoid a collision, hit the brakes
else if maneuvering will avoid a collision, maneuver
else hit the brakes.

Such predictable behavior would help other humans and vehicles make decisions.
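
If I had to put that rule in code, a rough sketch would be something like this (the two yes/no checks are just hypothetical stand-ins for whatever the planner actually computes, not any real autopilot API):

def choose_action(can_stop_in_time, clear_escape_path):
    # Mirrors the three-line rule: brake if that avoids the collision,
    # steer if that avoids it, otherwise brake anyway to shed as much speed as possible.
    if can_stop_in_time:
        return "brake"
    if clear_escape_path:
        return "maneuver"
    return "brake"

print(choose_action(can_stop_in_time=False, clear_escape_path=True))  # -> maneuver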
 
if hitting the brakes will avoid a collision, hit the brakes
else if maneuvering will avoid a collision, maneuver
else hit the brakes.

This is true. What I comment on here I am not saying as AnxietyRanger... just as a, well, Ranger. I don't worry about this, as I expect Mercedes-Benz has already set the tone for this (#24), but it is an interesting mental exercise.

The interesting ethical as well as utilitarian conversation happens when hitting the brakes or maneuvering in a certain manner will not avoid a collision, but would help diminish the overall losses - at the cost of some of the people or things involved in the scenario.

What to choose?

I guess the common scenario, now discussed for many years, is this one:

Your big SUV is self-driving on a road - with you as the sole passenger - when a tire blows out, or perhaps something else happens up ahead, and the car is forced to start evasive maneuvers, possibly in compromised conditions.

The self-driving system brakes to avoid the collision altogether, but soon figures out it cannot stop in time - a massive crash is moments away.

The only option left for the car is to steer and hit either the vehicle ahead in your lane or the one in the opposing lane... or drive off the road, which in this case is a bridge.

The self-driving car makes a quick situation assessment, calculating the risks of injury and the number of people in the cars ahead - large families in small cars in both cases - which it knows because cameras and car-to-car comms tell it.

The self-driving car then decides on suicide, driving off the bridge with you inside - the least number of casualties.
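
Just to make the arithmetic concrete - purely a back-of-the-envelope illustration, with invented occupant counts and fatality risks, not anything a real car computes - the "least number of casualties" choice boils down to something like:

def least_casualties(options):
    # options: list of (action, expected_casualties) pairs; all numbers are invented
    return min(options, key=lambda o: o[1])

options = [
    ("hit the car ahead in your lane", 4 * 0.6),    # 4 occupants, 60% fatality risk (made up)
    ("hit the car in the opposing lane", 5 * 0.6),  # 5 occupants, 60% fatality risk (made up)
    ("drive off the bridge", 1 * 0.9),              # just you, 90% fatality risk (made up)
]
print(least_casualties(options))  # -> ('drive off the bridge', 0.9)
# A Mercedes-style occupant-first rule (#24) would instead drop the bridge option before comparing.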
 
All the self-driving car has to do is drive significantly better than the bottom 10-15% of human drivers to have a significant effect on accident rates. Thus, it doesn't have to make decisions better than the rest of human drivers. Remember, the perfect is the enemy of the good. I believe Elon understands this, and that is why he lets us be beta testers. As for the scenario postulated, all answers are correct.
 
The self-driving car makes a quick situation assessment, calculating the risks of injury and the number of people in the cars ahead - large families in small cars in both cases - which it knows because cameras and car-to-car comms tell it.

The self-driving car then decides on suicide, driving off the bridge with you inside - the least number of casualties.


To increase the difficulty in decision making, consider that both autonomous cars have the same number of passengers, 5 each (equal wealth and facial symmetry, to keep a level playing field for the goons here). If the cars are allowed to collide, all 10 passengers die. If both engage in suicidal swerving maneuvers, all 10 still die. What is the best option here?
 
This is true. What I comment on here I am not saying as AnxietyRanger... just as a, well, Ranger. I don't worry about this, as I expect Mercedes-Benz has already set the tone for this (#24), but it is an interesting mental exercise.

The interesting ethical as well as utilitarian conversation happens when hitting the brakes or maneuvering in a certain manner will not avoid a collision, but would help diminish the overall losses - at the cost of some of the people or things involved in the scenario.

What to choose?

I guess the common scenario, now discussed for many years, is this one:

Your big SUV is self-driving on a road - with you as the sole passenger - when a tire blows out, or perhaps something else happens up ahead, and the car is forced to start evasive maneuvers, possibly in compromised conditions.

The self-driving system brakes to avoid the collision altogether, but soon figures out it cannot stop in time - a massive crash is moments away.

The only option left for the car is to steer and hit either the vehicle ahead in your lane or the one in the opposing lane... or drive off the road, which in this case is a bridge.

The self-driving car makes a quick situation assessment, calculating the risks of injury and the number of people in the cars ahead - large families in small cars in both cases - which it knows because cameras and car-to-car comms tell it.

The self-driving car then decides on suicide, driving off the bridge with you inside - the least number of casualties.

I think you're missing the point of my post. My point is that it doesn't have to do those moral calculations at all. Simply having AIs that are good drivers would dramatically reduce dangerous events, leaving these hypotheticals as edge cases where a simple, more predictable system of (brake-to-avoid, steer-to-avoid, brake-to-minimize) would make it easier to have autonomous cars, and easier for any humans or other autonomous cars involved to make decisions.
 
I think you're missing the point of my post. My point is that it doesn't have to do those moral calculations at all. Simply having AIs that are good drivers would dramatically reduce dangerous events, leaving these hypotheticals as edge cases where a simple, more predictable system of (brake-to-avoid, steer-to-avoid, brake-to-minimize) would make it easier to have autonomous cars, and easier for any humans or other autonomous cars involved to make decisions.

The point I don't quite get, and perhaps you can elaborate, is: what would the car (the way you describe it) choose in the scenario I presented in #28?
 
This is true. What I comment on here I am not saying as AnxietyRanger... just as a, well, Ranger. I don't worry about this, as I expect Mercedes-Benz has already set the tone for this (#24), but it is an interesting mental exercise.

The interesting ethical as well as utilitarian conversation happens when hitting the brakes or maneuvering in a certain manner will not avoid a collision, but would help diminish the overall losses - at the cost of some of the people or things involved in the scenario.

What to choose?

I guess the common scenario, now discussed for many years, is this one:

Your big SUV is self-driving on a road - with you as the sole passenger - when a tire blows out, or perhaps something else happens up ahead, and the car is forced to start evasive maneuvers, possibly in compromised conditions.

The self-driving system brakes to avoid the collision altogether, but soon figures out it cannot stop in time - a massive crash is moments away.

The only option left for the car is to steer and hit either the vehicle ahead in your lane or the one in the opposing lane... or drive off the road, which in this case is a bridge.

The self-driving car makes a quick situation assessment, calculating the risks of injury and the number of people in the cars ahead - large families in small cars in both cases - which it knows because cameras and car-to-car comms tell it.

The self-driving car then decides on suicide, driving off the bridge with you inside - the least number of casualties.

Suicide is always a bad idea... The car should seek treatment for depression from another, more competent car doctor.
 
The point I don't quite get, and perhaps you can elaborate, is: what would the car (the way you describe it) choose in the scenario I presented in #28?

If there's an obstacle and no room to maneuver, it should just brake as hard as possible to minimize the force of impact.

The collision doesn't have to be eliminated entirely; it just needs to be as slow as possible.

But, hopefully, the autonomous or human-driven car coming in the other direction would see that there is a problem in the other lane and maneuver in anticipation of the need for oncoming vehicles to swerve around the obstacle.

We don't need to expect more of the car than of a human. There's no crime of "reacting suboptimally to things that aren't your fault."
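
To put rough numbers on "as slow as possible" (illustrative figures only, assuming constant deceleration, nothing measured from a real car):

import math

def impact_speed(v0, decel, distance):
    # Speed remaining after braking at `decel` m/s^2 over `distance` m
    # from an initial speed v0 m/s, using v^2 = v0^2 - 2*a*d.
    return math.sqrt(max(0.0, v0**2 - 2 * decel * distance))

v0 = 25.0   # ~90 km/h
a = 8.0     # hard braking, m/s^2 (illustrative)
d = 30.0    # metres to the obstacle (illustrative)
v = impact_speed(v0, a, d)
print(round(v, 1), "m/s at impact")                           # ~12.0 m/s
print(round(1 - (v / v0) ** 2, 2), "of kinetic energy shed")  # ~0.77

Even when the collision can't be avoided, braking all the way in sheds most of the kinetic energy before impact, which is the whole point.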
 
If there's an obstacle and no room to maneuver, it should just brake as hard as possible to minimize the force of impact.

The collision doesn't have to be eliminated entirely; it just needs to be as slow as possible.

But, hopefully, the autonomous or human-driven car coming in the other direction would see that there is a problem in the other lane and maneuver in anticipation of the need for oncoming vehicles to swerve around the obstacle.

We don't need to expect more of the car than of a human. There's no crime of "reacting suboptimally to things that aren't your fault."

OK, fair enough. In the bridge scenario it would then simply brake as hard as possible, since steering could not avoid a collision. It would not choose between collisions, but simply brake if no way of avoiding all collisions exists.