Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

A challenge in FSD: To CRASH gracefully.

Reading another topic I was reminded about the "march of nines".
Think overturned cars, debris on the roadway, tornadoes, road defects, and any other rare but absolutely real occurrence a driver might encounter. Thus, a vehicle that is 99.999% "safe" per mile crashes once every 100k miles on average.
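The "march of nines" arithmetic above can be sketched in a few lines; this is just the reciprocal relationship between per-mile reliability and expected miles between incidents, not any real safety data:

```python
# Toy "march of nines" calculation: each extra nine of per-mile
# reliability multiplies the expected miles between incidents by 10.
for nines in range(3, 8):
    p_safe = 1 - 10 ** -nines          # e.g. 0.99999 for five nines
    miles_between = 1 / (1 - p_safe)   # expected miles per incident
    print(f"{p_safe:.7f} safe per mile -> ~{miles_between:,.0f} miles per incident")
```

Five nines (99.999%) works out to roughly one incident per 100k miles, matching the figure above.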

1 crash in 100k miles is probably better than the average human driver (I admit to a couple of minor fender benders over the years).

But the more important point is in that sentence: the crash is inevitable.
I read a post months ago about how to crash while causing the least damage and death (I could not find it, but did find one about Waymo).

It is a major moral challenge: when an accident is unavoidable, what should the AI do to minimize the damage or, hardest of all, cause the least loss of life (or prioritize one life over another)?

Not unlike the joke about being in a burning airplane with 4 passengers but only 3 parachutes, except here it is not a joke.

Note: this is about the next layer below the Three Laws of Robotics.
 
...It is a major moral challenge: when an accident is unavoidable, what should the AI do to minimize the damage or, hardest of all, cause the least loss of life (or prioritize one life over another)?...
That kind of intelligence is still a long way off.

Right now, collision avoidance is still a problem. Waymo is doing pretty well, but it's geofenced in its own bubble. Others still suffer collisions.

Tesla's system doesn't choose whether to collide with a cone or a person. When it crashes, it just crashes, without discrimination.

Right now, Waymo doesn't choose whether to collide with a cone or a person either; it simply stops to avoid colliding with either one, as in the case where it got stuck behind a traffic cone.

I think once we allow a car to sacrifice someone in order to save someone else, it becomes an Autonomous Weapon System rather than a collision-avoidance system.
 
I think once we allow a car to sacrifice someone in order to save someone else, it becomes an Autonomous Weapon System rather than a collision-avoidance system.
It's not about deciding whom to sacrifice; it's about choosing the path with the least fatality/damage.
Naturally this means assigning a value to the objects in each path the car could take and computing a value ("cost") estimate for each potential path.

Naturally all people will carry a very large value compared to anything else, but then the ethics come in: is everyone's value the same? Is a baby worth the same as the mom? An old person? A young person? A beggar versus the president? And so on.
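The cost-based path selection described above can be sketched as follows. This is a hypothetical illustration, not anything Tesla or Waymo actually runs; the object costs and candidate paths are made up, with people assigned a cost that dwarfs everything else:

```python
# Hypothetical sketch of cost-based path selection: assign each object
# class a collision cost, score every candidate path by the objects it
# would hit, and pick the cheapest. All values here are invented.
OBJECT_COST = {
    "cone": 1,         # minor property damage
    "guardrail": 50,   # vehicle damage
    "person": 10**6,   # any human dwarfs every other cost
}

def path_cost(objects_in_path):
    """Total cost of colliding with everything along one path."""
    return sum(OBJECT_COST[obj] for obj in objects_in_path)

def choose_path(candidate_paths):
    """Pick the candidate path with the lowest total collision cost."""
    return min(candidate_paths, key=lambda p: path_cost(p["objects"]))

paths = [
    {"name": "swerve left", "objects": ["cone", "guardrail"]},
    {"name": "brake straight", "objects": ["person"]},
    {"name": "swerve right", "objects": ["guardrail"]},
]
print(choose_path(paths)["name"])  # -> swerve right
```

The ethical questions in the thread show up the moment someone has to fill in that cost table: the code is trivial, the values are not.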
 
It's not about deciding whom to sacrifice; it's about choosing the path with the least fatality/damage.
Naturally this means assigning a value to the objects in each path the car could take and computing a value ("cost") estimate for each potential path.

Naturally all people will carry a very large value compared to anything else, but then the ethics come in: is everyone's value the same? Is a baby worth the same as the mom? An old person? A young person? A beggar versus the president? And so on.
Here is a simple choice: if you buy a Mercedes, it will save you at the expense of non-Mercedes drivers.
