Trolleys, Risk and Consequences: A Model For Understanding Robocar Morality
One of the most contentious issues in robocars is the moral question involved in testing and deploying them. We hope they will prevent many crashes and save many lives, but we know that, being imperfect, they also create a risk of causing other crashes, both during testing and after deployment. People regularly wonder whether they should be tested on city streets at all, or ever deployed. Even with numbers that are perhaps the most overwhelmingly positive from a utilitarian standpoint, we remain uncertain.
I've written much on this over the years, and have now prepared a fairly detailed analysis of why we think the way we think, both as individuals and as a society. I offer a path to better understanding: examining the different and sometimes contradictory moral theories that exist simultaneously within ourselves, and seeking reconciliation by looking at vast amounts of microrisk rather than at tragedies.
With some irony, I even refer to the "trolley problem." Not the vexing and dangerous misapplication of that problem to robocars deciding "who to kill" that I have often railed against, but rather the original trolley problem, the philosophy-class exercise designed to help us understand our own moral thinking on issues involving machines, death and human action.
Bear with me -- this is not a short essay but I hope it's worth it.