Trolley Problem - AI Ethics

Dan D.

Desperately Seeking Sapience
Some thoughts on the Trolley Problem: what an AI should do when it realizes there is about to be a crash/injury. I thought it would be good if there were some standard rules, so this is my attempt.

Ethical Rules for AI Driver - Trolley Problem
1) An international committee decides on the 'rules', and these rules are encoded into an "Ethical Chip"
2) Every AI car has to use this chip only. That way people know that all cars operate under the same rules. (e.g., we don't have to fear the BMW AI more than the Volvo AI)
3) The 'rules' consider all identified cars, objects, and pedestrians
4) If there are "death to occupant" paths, the AI will not take them (otherwise who would get in the car knowing it might choose to kill them)
5) For all the remaining paths, the AI will pick the best one that avoids injury to everyone, regardless of mechanical damage
6) Should there be only paths leading to injury, the AI will pick the one that has the lowest overall injury cost (no weighting for age, race, gender, or other characteristics)
7) Should there be equal injury paths, the AI will pick a random path

That way everyone witnessing the accident knows that there is a good likelihood the car will not crash into them if there is a safe path. Also they know that if there is no safe path there is a random likelihood the car will crash into them. Just like now. Fair chance for everyone.
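To make rules 4-7 a bit more concrete, here's a rough Python sketch of how the path selection might look. The scoring fields (occupant_death, injury_cost, damage_cost) are just placeholders I made up for illustration; a real "Ethical Chip" would obviously be far more involved.

```python
import random

def choose_path(paths):
    """Pick a path per rules 4-7. Each path is a hypothetical dict like
    {'occupant_death': bool, 'injury_cost': float, 'damage_cost': float}."""
    # Rule 4: never take a path that kills the occupants.
    survivable = [p for p in paths if not p['occupant_death']]
    if not survivable:
        # The rules above don't say what to do if every path is fatal to
        # the occupants; falling back to all paths is my own assumption.
        survivable = paths

    # Rule 5: prefer paths that injure no one, regardless of mechanical
    # damage (picking the least damage among them is just a tie-break).
    injury_free = [p for p in survivable if p['injury_cost'] == 0]
    if injury_free:
        return min(injury_free, key=lambda p: p['damage_cost'])

    # Rule 6: otherwise minimize overall injury cost, with no weighting
    # for age, race, gender or other characteristics.
    lowest = min(p['injury_cost'] for p in survivable)
    tied = [p for p in survivable if p['injury_cost'] == lowest]

    # Rule 7: equal-injury paths get a random pick.
    return random.choice(tied)
```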
 
We already have the 3 laws of robotics:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
 
Those aren't laws, they are guidelines. 😉
 
4) If there are "death to occupant" paths, the AI will not take them (otherwise who would get in the car knowing it might choose to kill them)

If my vehicle encounters a situation that allows only a choice between hitting and killing a pedestrian or swerving into a cliff or a concrete barrier and killing myself in the vehicle, I would program my vehicle to kill me instead of the pedestrian. It's an easy choice for me, and I would not buy an AI that doesn't do that.
 
Admirable choice, and a pretty serious disclaimer for a ride-share. I guess so long as everyone in the car agrees to those terms.
 
I don't really believe there is such a thing as the "Trolley Problem" as it applies to autonomous vehicles.

Autonomous vehicles will be programmed to follow the law. Plain and simple. Autonomous vehicles will avoid obstacles and collisions as best they can, while adhering to local traffic ordinances. If you are expecting any autonomous vehicle to start making ethical decisions, that will not happen unless there is a specific law that codifies the decision-making process, at which point AV software will be updated to adhere to this new law. If a so-called ethical choice involves swerving outside a lane of travel to avoid a collision, this would only be done where permitted by law, where a secondary collision can be avoided, or where the law allows temporary violations of the "stay in lane" rule (such as when directed into an oncoming lane by construction or emergency personnel).

The closest thing I can see to an ethical choice would be a case where the choice is between striking a suddenly appearing pedestrian in front and braking hard enough to be rear-ended by a truck from behind. There could be a priority call made in that scenario, although I suspect the natural priority is given to the object in front, since that is what the car has control over, and hence where the most legal liability lies.

AV software is going to be held accountable to the law, not to a code of ethics, and therefore that will be what it is constructed to obey.
 
Do you know if humans are held to any ethical standard? Say there is a choice between a group of nuns, a group of children, and a lawyer, and your vehicle has to hit one of them because a tire blew and sent you off the road. In a split second you have to choose one of those groups. I'm sure it has happened in accidents where the driver chose a group. I wonder if any court case has ever used that choice in its ruling against the driver.

Perhaps not, if it was an accident and the driver didn't specifically target a group.

Anyway, there's always a choice, and that's my point. The human makes a (presumably) spontaneous choice (or just a reaction). The AI needs to 'make a choice' too: it operates on some kind of neural net, algorithm, or decision tree that has been decided ahead of time. Which group is it going to pick to run into when there are no other safe escapes? The law does not tell us; each group of people is equal under it. There has to be a decision, or a random choice, and it has to be encoded. That's the problem.

-edited to clarify AI 'choice'
 
I am not aware of any law that dictates one choice over another on the basis of ethics, certainly not in the case of the nuns/children/lawyer class of choices. If there were such a law, or if one were enacted in the future, an AV would be legally required to make the legislated choice. Otherwise, from a practical standpoint, for split-second decisions a human is going to be acting on pure instinct and gut reaction, not making complex ethical decisions in that split second, and more often than not that instinct is going to be toward self-preservation, not sacrificing oneself for the most ethical outcome.

Assigning fault or blame will come down to very black-and-white factors: Was one of the parties involved in the accident in a place they shouldn't have been? Was there an impairment (in humans: distracted driving or DUI; with AVs: a camera blockage)? Was there an unsafe driving condition (weather-related)? Was there some kind of mechanical issue preventing proper operation of the vehicle (flat tire, etc.)? Were any safety-minded laws being broken (e.g. speeding)? For AVs only: was there a software bug that should have been reasonably discovered through testing and certification?

In general, this is how the legal system works and how liability is assigned. AV makers are going to be concerned with minimizing legal liability (to zero if possible) by following every single law to the letter. They will not really have any motivation to go beyond what the law specifies, either in additional hardware or additional software. I suppose there could be commercial value in going further, but the complexity of an "ethical system" is so far beyond an "object avoidance system" that it doesn't seem realistic to think the ROI would be there for this additional level of effort.

To your last point, I don't think that what an AV system is doing is making "choices". Rather it is doing path planning and object avoidance. If it sees multiple objects (representing "groups" of people) it will do what it can to avoid a collision. Slamming on the brakes is probably the primary method, but secondarily it might attempt to plan a path around the obstacle. But if there is no feasible path around the obstacle, because a different obstacle is blocking the alternate path (the other "group" in your example), it's not going to try to make a determination of which obstacle is the "lesser"--it's simply going to not "see" the alternate path because it appears blocked. To the AI, the other group (or object) would be no different than a rigid wall.
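To illustrate that, here's a toy occupancy-grid sketch (the grid and helper names are made up, not any real AV stack). The planner only knows "free" vs. "blocked" cells, so a group of pedestrians and a concrete wall look identical to it and no ranking between them ever happens:

```python
FREE, BLOCKED = 0, 1

def feasible_paths(grid, candidate_paths):
    """Return only the paths whose every cell is free."""
    return [
        path for path in candidate_paths
        if all(grid[r][c] == FREE for r, c in path)
    ]

grid = [
    [FREE, BLOCKED, FREE],   # BLOCKED could be a wall or a group of people;
    [FREE, BLOCKED, FREE],   # the planner cannot tell and does not care.
    [FREE, FREE,    FREE],
]

straight = [(0, 1), (1, 1), (2, 1)]   # blocked by the "group"
swerve   = [(0, 0), (1, 0), (2, 0)]   # open lane to the left

print(feasible_paths(grid, [straight, swerve]))   # only the swerve survives
```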
 
If my vehicle encounters a situation that allows only a choice between hitting and killing a pedestrian or swerving into a cliff or a concrete barrier and killing myself in the vehicle, I would program my vehicle to kill me instead of the pedestrian. It's an easy choice for me, and I would not buy an AI that doesn't do that.
But what if that pedestrian is baby Hitler?!

But seriously, if it's a 1 vs. 1 problem, if the pedestrian is clearly in the wrong, such as running across a freeway, I'd like it to prioritize me.

Should the owner of the vehicle be allowed to choose which lives to prioritize?
 
It could be an option setting? Some links I found from reality and TV entertainment.


He and Ingrid are in one of the self-driving cars, as all of them are in the futuristic setting of the show, and the car nearly hits a pedestrian. Nathan, shocked, asks Ingrid if she has "prioritize occupant" on, to which she says of course she does. Nathan reveals that he prioritizes pedestrians, showing that he is considerate.
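Purely as a thought experiment, that kind of option setting could be as simple as a single weight applied to the injury comparison. Everything below is hypothetical (field names included); no shipping vehicle exposes anything like this:

```python
def weighted_cost(path, prioritize_occupant=True, weight=2.0):
    """Hypothetical owner preference: harm to the protected side counts
    extra, so paths that hurt that side look worse and get avoided."""
    occ = path['occupant_injury']      # illustrative field names
    ped = path['pedestrian_injury']
    if prioritize_occupant:
        return occ * weight + ped      # occupant harm counts double
    return occ + ped * weight          # pedestrian harm counts double

# A swerve that risks the occupant vs. staying on course toward a pedestrian:
swerve   = {'occupant_injury': 5, 'pedestrian_injury': 0}
straight = {'occupant_injury': 0, 'pedestrian_injury': 5}

print(min([swerve, straight], key=weighted_cost))                       # Ingrid's setting
print(min([swerve, straight], key=lambda p: weighted_cost(p, False)))   # Nathan's setting
```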
 
The vehicle occupants are enclosed in what (these days) is a well-engineered impact-mitigating cage, presumably restrained by belts (or maybe a net if sleeping) and so on. Pedestrians and cyclists are highly unprotected. I think it would be an extremely rare case in which the avoidance of an innocent* pedestrian would require subjecting the vehicle occupants to deadly or greatly injurious forces. Yes you can dream up such scenarios, but as a rule the car and its occupants can take a much greater impact without serious injury.

*An interesting branch discussion is what to do about deliberately dangerous and/or provocative actions, or non-deliberate but exceedingly irresponsible behavior:
- Challenging the autonomous vehicle to get a reaction for amusement or spite. Should this alter the balance of pedestrian vs. vehicle protection?
- Third-party endangerment, e.g. pushing another pedestrian in front of the car. Does the victim deserve extra consideration? Does the perpetrator deserve extra targeting? Could this be actually a staged incident as a yet-higher level of evil pranking, perversely encouraged by public discussion of the AI Ethics programming?
- Maybe most commonly, completely oblivious behavior on the part of the pedestrian, e.g. as they step into the street while staring at their phone, fully absorbed in a fascinating TMC forum discussion about The Trolley Problem...🙂 should this affect the decision-making, or just invoke a blast from the pedestrian warning speaker?
 
But what if that pedestrian is baby Hitler?!

Political/military killings should be reserved for military vehicles and should not be carried out by civilians.

But seriously, if it's a 1 vs. 1 problem, if the pedestrian is clearly in the wrong, such as running across a freeway

As a civilian, I don't care whether the pedestrian is a king or a homeless person in the wrong; they are all pedestrians, and my vehicle should kill me instead of them.

Should the owner of the vehicle be allowed to choose which lives to prioritize?

Then they are no longer autonomous vehicles but partially manual vehicles with user-configurable rules about whom to kill in a crash.

Currently, Waymo doesn't allow users to configure the speed and other settings for driving.
 
Thanks for posting that article. It's from 2017 and has some interesting points. However, it's not really talking about solving the Trolley Problem, or solving a self-driving crash dilemma, because those problems don't really exist at SAE Level 2.

The article talks about Tesla's Automatic Emergency Braking which is just a driver assistance feature, as are all AEB features on all cars. You as the driver are able to override the AEB partly because it is still prone to making errors but also because you are still in control of the car. If you want to accelerate or steer around something, the car is not going to prevent you. If you are taking an active role in avoidance, then for the moment, you are allowed to take that action. (Even if you're using FSD Beta/Autopilot you are still considered "in control" at Level 2).

Once the car is running in autonomous Level 3+ driving, then the car is going to have to make those decisions itself. That time has not yet come.