Reading another topic, I was reminded of the "march of nines".
From the article:
Think overturned cars, debris on the roadway, tornados, road defects, and any other rare but absolutely real occurrence that a driver might encounter. Thus, a vehicle that is 99.999% “safe” crashes once every 100k miles on average.
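As a sanity check on that figure, here is a back-of-the-envelope sketch in Python, assuming "99.999% safe" means a 1e-5 probability of crashing in any given mile (my reading; the article may count its nines differently):

    # Assumption: "99.999% safe" = probability 1e-5 of a crash per mile.
    p_crash_per_mile = 1 - 0.99999                    # = 1e-5
    expected_miles_between_crashes = 1 / p_crash_per_mile
    print(expected_miles_between_crashes)             # ~100000, i.e. one crash per 100k miles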
One crash per 100k miles is probably better than the average human driver (I admit to a couple of minor fender benders over the years).
But the more important point is what that statistic implies: the crash itself is inevitable.
I read a post months ago about how to crash while causing the least damage and death (I could not find it again, though I did find one about Waymo).
It is a major moral challenge: when an accident is unavoidable, what should the AI do to minimize the damage, or, hardest of all, to cause the fewest deaths (or to prioritize one life over another)?
Not unlike the joke about being in a burning airplane with four passengers but only three parachutes, except here it is not a joke.
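To make the dilemma concrete, here is a toy sketch of what "minimize the damage" could look like as code: score each feasible emergency maneuver by expected harm and pick the least bad one. Every maneuver, probability, and harm weight below is hypothetical, invented purely for illustration.

    # Toy expected-harm minimizer for an unavoidable crash.
    # All maneuvers, outcome probabilities, and harm weights are
    # hypothetical; choosing the weights is the hard moral part.
    WEIGHT_FATALITY = 1000.0   # hypothetical: one death outweighs many injuries
    WEIGHT_INJURY = 10.0
    WEIGHT_PROPERTY = 1.0

    # name -> list of possible outcomes:
    # (probability, fatalities, injuries, units of property damage)
    maneuvers = {
        "brake_straight": [(0.9, 0, 1, 1), (0.1, 1, 0, 1)],
        "swerve_left":    [(0.7, 0, 0, 2), (0.3, 0, 2, 1)],
        "swerve_right":   [(1.0, 0, 0, 3)],
    }

    def expected_harm(outcomes):
        return sum(p * (f * WEIGHT_FATALITY + i * WEIGHT_INJURY + d * WEIGHT_PROPERTY)
                   for p, f, i, d in outcomes)

    for name, outcomes in maneuvers.items():
        print(f"{name}: expected harm {expected_harm(outcomes):.1f}")
    print("least bad:", min(maneuvers, key=lambda n: expected_harm(maneuvers[n])))

The argmin is trivial; whether a fatality is "worth" 1000 injuries, and whose safety counts for how much, is exactly the question being raised, and no amount of code answers it.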
Note: this is about the next layer below the Three Laws of Robotics.