Are you claiming that you can describe a moral dilemma and its resolution in a way that I can't model with a Turing machine? I would like to see that.
Thank you kindly.
Turing machines are not useful for automotive technology anymore except for simple switches.
Engine and transmission controls have been learning machines since at least 2000. I cannot think of a decent engine control after 1996 that was not a learning machine.
Consider a car today: it gets a brake impulse every time a bicycle object appears. A Turing machine does nothing unless it has been told what a bicycle is and how to react to one. Next year the hazard will be a GizBang instead, which calls for an acceleration response, and GizBangs were never programmed in.
The best possible autopilot (AP) system would be a learning system derived either from very experienced drivers or from a vast pool of drivers, in both cases discarding all data from drivers who crash. I like method one.
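A minimal sketch of "method one" as described above: keep training data only from long-experienced drivers and discard anyone with a crash on record. The record fields (name, years, crashes) and the ten-year threshold are invented for illustration, not taken from any real system.

```python
# Build a training pool from experienced, crash-free drivers only.
# Field names and the experience threshold are illustrative assumptions.

def build_training_pool(drivers, min_years=10):
    """Keep telemetry only from long-experienced drivers with zero crashes."""
    return [d for d in drivers if d["years"] >= min_years and d["crashes"] == 0]

drivers = [
    {"name": "A", "years": 25, "crashes": 0},
    {"name": "B", "years": 3,  "crashes": 0},  # too inexperienced
    {"name": "C", "years": 30, "crashes": 2},  # crashed, so discarded
]

pool = build_training_pool(drivers)
print([d["name"] for d in pool])  # ['A']
```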
Turing Machine:
Is object a human? If so, use table to weight its value.
If collision is possible, assign risk value to all vector options including human.
If driver risk exceeds pedestrian risk, protect driver using that vector.
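The three fixed-rule steps above might be sketched like this. The object table, risk numbers, and vector options are all made up for illustration; the point is that every object class must be programmed in advance, so an unknown object (the "GizBang") falls through with no reaction.

```python
# Sketch of the fixed-rule ("Turing machine") controller described above.
# Table values and risk numbers are invented for illustration.

OBJECT_VALUE = {"human": 1.0, "bicycle": 0.6}  # fixed lookup table

def fixed_rule_controller(obj, collision_possible, vectors):
    """vectors: dict of name -> (driver_risk, pedestrian_risk)."""
    value = OBJECT_VALUE.get(obj)
    if value is None or not collision_possible:
        return "no_action"  # unprogrammed object (a "GizBang") -> no reaction
    # Weight pedestrian risk by the table value, then pick the vector
    # with the lowest combined risk, protecting the driver when driver
    # risk dominates.
    best, best_risk = "no_action", float("inf")
    for name, (driver_risk, ped_risk) in vectors.items():
        total = driver_risk + value * ped_risk
        if total < best_risk:
            best, best_risk = name, total
    return best

vectors = {"brake": (0.2, 0.1), "swerve_left": (0.5, 0.0)}
print(fixed_rule_controller("bicycle", True, vectors))  # 'brake'
print(fixed_rule_controller("GizBang", True, vectors))  # 'no_action'
```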
Learning Machine:
Is object human? Look at stored values for outcomes.
If collision is possible, find best outcome from history.
Store outcome, and recalculate outcomes.
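By contrast, the learning-machine steps could be sketched as below: decisions come from stored outcome history rather than a hand-coded table, and every recorded outcome re-weights future choices. The history structure and scoring are assumptions for illustration, not any production algorithm.

```python
# Sketch of the learning controller: choose from outcome history,
# then store each new outcome so the history keeps updating.
from collections import defaultdict

class LearningController:
    def __init__(self):
        # (object, action) -> list of observed outcome scores (higher is better)
        self.history = defaultdict(list)

    def choose(self, obj, actions):
        """Pick the action with the best average recorded outcome for this object."""
        def avg(action):
            scores = self.history[(obj, action)]
            return sum(scores) / len(scores) if scores else 0.0
        return max(actions, key=avg)

    def record(self, obj, action, outcome):
        """Store the outcome; future choices re-weight automatically."""
        self.history[(obj, action)].append(outcome)

ctl = LearningController()
ctl.record("bicycle", "brake", 1.0)
ctl.record("bicycle", "accelerate", -1.0)
print(ctl.choose("bicycle", ["brake", "accelerate"]))  # 'brake'
# A GizBang was never programmed in, but once outcomes are observed
# the same machinery handles it:
ctl.record("GizBang", "accelerate", 1.0)
print(ctl.choose("GizBang", ["brake", "accelerate"]))  # 'accelerate'
```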