Trolley Problem with FSD 12?

There’s an interesting variation on the trolley problem where the two [and only two] options are to hit a bicyclist who’s wearing a helmet, or to hit a bicyclist who’s not wearing a helmet. By the above logic the car should hit the helmet-wearing bicyclist [since they’re more likely to survive], but would that unethically penalize the helmet-wearer for following the law? Might bicyclists be incentivized to stop wearing helmets because of this? This starts getting into prisoner’s dilemma territory.
The other issue with the trolley problem is that it takes place on tracks, allowing for only two possible 100% fatal outcomes, and ignores braking (mitigation).

In the real world with a car, I have multiple paths and braking. I don't think there would be sufficient time for a driver to register and ponder who was, or wasn't, wearing a helmet. The original trolley problem, while an interesting thought experiment in ethics, is not very applicable to the real world, as the experiment is quite contrived and assumes bad engineering from the start. As a real-world equivalent, the sci-fi short story "The Cold Equations" did a better job, but even that story gets criticism for bad engineering.
 
  • Like
Reactions: Ben W
There are perhaps a million human-piloted car accidents every year. It's perhaps unrealistic to expect an FSD vehicle to never get into an accident.
The goal is perfection, but perhaps just being better than human is enough to promote the technology.
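To put rough numbers on "better than human": here's a back-of-the-envelope sketch in Python, where every figure is an illustrative assumption (the million-crash figure above and a guessed miles-driven total), not a measured Tesla or NHTSA statistic.

```python
# Back-of-the-envelope "better than human" check, framed as crashes per
# million miles. All numbers are illustrative assumptions, not real data.

HUMAN_CRASHES_PER_YEAR = 1_000_000        # the "perhaps a million" figure above
HUMAN_MILES_PER_YEAR = 3_000_000_000_000  # assumed annual vehicle-miles traveled

human_rate = HUMAN_CRASHES_PER_YEAR / (HUMAN_MILES_PER_YEAR / 1_000_000)

def better_than_human(fsd_crashes: int, fsd_miles: float) -> bool:
    """True if the automated system's crash rate per million miles is below
    the assumed human baseline."""
    return fsd_crashes / (fsd_miles / 1_000_000) < human_rate

print(round(human_rate, 2))                   # ~0.33 crashes per million miles
print(better_than_human(400, 2_000_000_000))  # hypothetical fleet: 0.2 -> True
```

The point isn't the specific numbers; it's that "better than human" is a ratio comparison, so it can be met even with a nonzero accident count.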
 
There are perhaps a million human-piloted car accidents every year. It's perhaps unrealistic to expect an FSD vehicle to never get into an accident.
The goal is perfection, but perhaps just being better than human is enough to promote the technology.
It would be absolutely unrealistic to expect no accidents, since there are too many variables and, in particular, still loads of non-FSD cars on the road. Also, safety features only work when used, e.g., seat belts and ADAS.
 
If we structure our ethics around our laws instead of the other way around, we're doomed. If the lesser harm to society is deemed to be to kill the child, then structure the laws that way. If the lesser harm to society is to put a couple people in the hospital, then structure the laws that way. Structure the laws as you see fit, and reap the fruits of your choices. The fundamental premise is least harm.
I think we are more or less in agreement, although I'm not saying we structure our ethics around laws, but rather that a non-sentient robotaxi is going to follow the law to the letter, lest the supplier be held liable. So if there are ethical concerns, they should be codified into the law where practical. The problem, however, is not that we have to structure our ethics around our laws, but rather that it's simply not practical (or even technologically possible) to write laws that reflect our ethics (if we can even agree on them as a society). Probably the best we can do is to keep them relatively simple (e.g., avoid leaving the lane of travel).
 
My question was based on real-world examples of trolley problems, not the literal definition of it.

As others have pointed out, there are obvious accidents every day where a computer system could quickly determine which outcome causes the least harm.

So, back to my main question: I wonder whether there is a separate tech stack running to determine this, or whether it is purely NN decision output.

If it's pure NN, then Tesla would have to feed it examples of trolley-type accidents and how best to deal with them.

If there is an "ethics" stack running then Tesla should make it public.

Either way would be an interesting question for someone to ask an FSD engineer.
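For what it's worth, here's a purely speculative sketch of what an explicit "ethics"/cost layer on top of NN-generated trajectory candidates could look like, versus a pure end-to-end NN that just emits one trajectory. Every name, field, and weight here is made up for illustration; nothing below is Tesla's actual stack.

```python
# Hypothetical cost-ranking layer over NN-proposed trajectory candidates.
# Purely illustrative; not Tesla's FSD architecture.
from dataclasses import dataclass

@dataclass
class Candidate:
    trajectory_id: int
    collision_prob: float      # estimated probability of any collision
    expected_severity: float   # 0.0 (none) .. 1.0 (fatal)
    leaves_lane: bool          # does the path depart the lane of travel?

def cost(c: Candidate) -> float:
    # "Least harm" term plus a penalty for leaving the legal lane of travel.
    harm = c.collision_prob * c.expected_severity
    lane_penalty = 0.5 if c.leaves_lane else 0.0
    return harm + lane_penalty

def pick(candidates: list[Candidate]) -> Candidate:
    return min(candidates, key=cost)

# In a pure end-to-end NN there is no separate ranking step like this to
# inspect, publish, or audit; the trade-off is implicit in the weights.
```

If something like the explicit version existed, Tesla could at least publish the cost terms; with the pure-NN version the only lever is the training data.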
 
Let's skip the philosophical discussion and get to the root of the matter:

Are there any situations where you'd accept a computer killing a human? Think hard about it. We accept, as a society, a human killing another human in a car crash, and call it an "accident". But if you replace a human driver with a computer, does the same apply?
 
Are there any situations where you'd accept a computer killing a human?
Are there any situations where you'd accept a gun killing a human?

A computer has no agency. All we have is a machine with a more remote sense of control compared to something like a gun. The neural network will deterministically do exactly what it is trained to do, just like a gun does when the trigger is pulled.
 
Are there any situations where you'd accept a gun killing a human?

A computer has no agency. All we have is a machine with a more remote sense of control compared to something like a gun. The neural network will deterministically do exactly what it is trained to do, just like a gun does when the trigger is pulled.
Interesting question, but a gun typically doesn't fire on its own. I suppose if the gun were to fall off a shelf and go off, killing someone, I'd accept that. I agree that a computer has no "intent" beyond what it's programmed to do. A properly programmed computer driver, absent nefarious intent from the human programmer, will drive to the best of its ability. If it happens into a circumstance, such as a child darting out into the street with no warning, and kills that child, it would also be an "accident", and something humans would have to accept, just as if flesh and blood caused the accident.
 
Interesting question, but a gun typically doesn't fire on its own.
Nor does FSD start on its own. The consequences of starting FSD are vastly more involved than those of firing a gun, but it remains a machine.

A properly programmed computer driver, absent nefarious intent from the human programmer, will drive to the best of its ability. If it happens into a circumstance, such as a child darting out into the street with no warning, and kills that child, it would also be an "accident", and something humans would have to accept, just as if flesh and blood caused the accident.
Assuming that it was essentially impossible for a car to avoid the child, yes. If people believe that it could have been avoided, then it would be treated as a product malfunction, subject to recall, similar to the Takata airbag recall. If a person had been actively driving and the accident could have been avoided, the person would probably be shamed, but I think it would still be chalked up as an accident. That's assuming that it couldn't be demonstrated that the driver was being irresponsible in some way. It's a ticklish problem.

In the case of FSD being active in the case of a child's death, I'm sure that activists and the media would clamor for removal of FSD from the roads, despite the fact that the system probably saved a dozen lives and prevented a thousand accidents on that same day simply because it always pays attention. Certainly anyone who could possibly be responsible in that situation is going to try to deflect blame onto FSD - as they do already.
 
Assuming that it was essentially impossible for a car to avoid the child, yes. If people believe that it could have been avoided, then it would be treated as a product malfunction, subject to recall, similar to the Takata airbag recall.
Here's the problem: even if it wasn't possible to avoid, people will accuse the computer, claiming it was possible. It's a general bias humans have against machines. Again, I posit that humans can accept another human killing someone as an accident, but can't accept, as easily, a computer/machine doing the same.
 
Once again, these situations are extremely rare. I've looked. There are millions of crashes every year in the USA, and reports of ones where the driver had to decide between two bad outcomes and made such a decision are vanishingly rare. The challenge has gone out to find them. If you can find more than a handful out of the hundreds of millions of car crashes there have been in the USA, that would be surprising. That alone makes this a non-issue for self-driving developers.

Secondly, the sort of situations people imagine producing these dilemmas (and they do just imagine them; they almost never can point to a real one) are situations a self-driving car won't get into. They won't have brakes that fail. They won't go too fast around a blind corner. They will be more cautious and aware than human drivers are in these situations, not to avoid trolley problems, but just to avoid situations where it would be hard to avoid hitting a VRU or even another car.

And finally, if such a situation were to arise -- it won't -- they would just follow the law. They would stay in their right of way. They would hit whoever incorrectly invaded their right of way if they had no other choice. They would not drive up onto the sidewalk unless certain it's clear (and probably not even then). They would not swerve into oncoming vehicles.

The law is pretty clear about this. If you hit somebody who improperly invaded your ROW, you are not at fault. If you deliberately enter somebody else's ROW, you are at high risk of being at very serious fault. No company is going to program their car to leave their ROW, and go where they can't legally go, in order to reduce accident severity. If you want them to, change the law.
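To make "stay in your right of way" concrete, here's a minimal sketch of that rule expressed as a hard planner constraint rather than a cost trade-off. The grid-cell representation and names are hypothetical simplifications, not any real planner's API.

```python
# Hypothetical hard constraint: never offer a path that enters space the
# vehicle may not legally occupy, unless that space is verified empty.

Cell = tuple[int, int]  # coarse grid cell (x, y); purely illustrative

def legal_candidates(
    candidates: list[list[Cell]],  # each candidate path as a list of cells
    right_of_way: set[Cell],       # cells the vehicle may legally occupy
    occupied: set[Cell],           # cells known to contain other road users
) -> list[list[Cell]]:
    kept = []
    for path in candidates:
        outside = [c for c in path if c not in right_of_way]
        # Keep the path if it never leaves the ROW, or leaves it only
        # through cells verified to be empty (e.g. passing a parked car).
        if all(c not in occupied for c in outside):
            kept.append(path)
    return kept
```

A path that swerves into an occupied oncoming lane simply never reaches the ranking step, no matter how much it might reduce harm elsewhere, which is exactly the "don't leave your ROW to hit someone else" position above.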
 
What's your take on V12 crossing into oncoming lanes to go around parked cars and such? That's breaking the law purely out of convenience.
And a necessary part of driving. However, if done right, the car does not enter the RoW of another road user, except at low speeds. (For example, when you do a 3-point turn, you may enter the RoW of somebody heading towards you, and they will slow.)

But to deliberately enter the RoW of others in order to hit them? Na ga da, as GHWB would say.
 
  • Informative
Reactions: JB47394
What's your take on V12 crossing into oncoming lanes to go around parked cars and such? That's breaking the law purely out of convenience.
I have had v12.3.6 try to turn left in front of oncoming traffic on two occasions now in the same place. Also, I have had it change lanes to make a turn into a lane obstructed by a disabled car with its flashers on, and it would have proceeded to merrily slam into it, save for my intervention. It also has a spot (well marked) where it routinely changes from a straight lane into a left-turn lane, so it can then proceed straight.
 
I have had v12.3.6 try to turn left in front of oncoming traffic on two occasions now in the same place. Also, I have had it change lanes to make a turn into a lane obstructed by a disabled car with its flashers on, and it would have proceeded to merrily slam into it, save for my intervention. It also has a spot (well marked) where it routinely changes from a straight lane into a left-turn lane, so it can then proceed straight.
Must be getting training data from the drivers in my neighborhood
 
  • Like
Reactions: KelvinMace
I have had v12.3.6 try to turn left in front of oncoming traffic on two occasions now in the same place. Also, I have had it change lanes to make a turn into a lane obstructed by a disabled car with its flashers on, and it would have proceeded to merrily slam into it, save for my intervention. It also has a spot (well marked) where it routinely changes from a straight lane into a left-turn lane, so it can then proceed straight.
BTW, I have to say I did a double-take when I saw your handle and icon! Amazed there are still fans today of that ancient comic.
 
  • Like
Reactions: KelvinMace