Enough with the Trolley problem, already
More and more often in mainstream articles about robocars, I am seeing variations of the classic 1960s "Trolley Problem." For example, this article on the Atlantic website is one of many. In the classic trolley problem, you see a trolley hurtling down the track, about to run over 5 people, and you can switch it to another track where it will kill one person. There are a number of variations, meant to examine our views on the morality and ethics of letting people die vs. actively participating in their deaths, even deliberately killing them to save others.
Often this is mapped into the robocar world by considering a car which is forced to run over somebody, and has to choose who to run over. Choices suggested include deciding between:
- One person and two
- A child and an adult
- A person and a dog
- A person without right-of-way vs others who have it
- A deer vs. adding risk by swerving around it into the oncoming lane
- The occupant or owner of the car vs. a bystander on the street -- i.e. the car drives itself off a cliff with you in it to save others.
- The destruction of an empty car vs. injury to a person who should not be on the road, but is.
I don't want to pretend that this isn't a morbidly fascinating moral area, and it will indeed affect the law, liability and public perception. And at some distant future point, programmers will evaluate these scenarios in their work. What I reject is the suggestion that this is anywhere high on the list of important issues and questions. I think it's high on the list of questions that are interesting for philosophy class debate, but that's not the same as reality.
In reality, such choices are extremely rare. How often have you had to make such a decision, or heard of somebody making one? Ideal handling of these situations is difficult to decide, but so are many other, far more common problems.
Secondly, in the rare situations where a human encounters such a moral dilemma, that person does not sit there and have an inner philosophical dialogue about which is the most moral choice. Rather, they go with a quick gut reaction, based on their character and their past thinking on such situations -- or perhaps not well based on them at all, since the decision must be made quickly. A robot may be incapable of a deep internal philosophical debate, and as such robots will also make decisions based on their "gut," which is to say the way they were programmed, well in advance of the event. A survey on Robohub showed that even humans, given time to think about it, are deeply divided both on what a car should do and on how easy it is to answer the question.
The morbid focus on the trolley problem creates, with some irony, a meta-trolley problem. If people (especially lawyers advising companies, or lawmakers) start expressing the view that "we can't deploy this technology until we have a satisfactory answer to this quandary," they face a hard reality: if the technology is indeed life-saving, their advised inaction will kill people who could have been saved -- all to be sure of saving the right people in very rare, complex situations. And of course, the trolley problem itself speaks mostly to how the difference between "failure to save" and "overt action" shapes our views of the ethics of harm.
It turns out the problem has a simple answer which is highly likely to be the one taken. In almost every situation of this sort, the law already specifies who has the right-of-way, and who doesn't. The vehicles will be programmed to follow the law, which means that when presented with a choice between hitting something in its right-of-way and hitting something outside it, the car will obey the law and stay in its right-of-way. The law says this even if it's 3 people jaywalking vs. one in the oncoming lane. If people don't like the law, they should follow the process to change it. This sort of question is actually one of the rare ones where it makes sense for policymakers, not vendors, to decide the answer.
I suspect companies will make very conservative decisions here, as advised by their lawyers, and they will mostly base things on the rules of the road. If there's a risk of having to hit somebody who actually has the right-of-way, the teams will look for a solution to avoid that. They won't go around a blind corner so fast they could hit a slow car or cyclist. (Humans go around blind corners too fast all the time, and usually get away with it.) They won't swerve into oncoming lanes, even ones that appear to be empty, because society will heavily punish a car that deliberately leaves its right-of-way and ends up hurting somebody. If society wants a different result here, it will need to clarify the rules. The hard fact of the liability system is that a car facing 5 jaywalking pedestrians that swerves into the oncoming lane and hits a solo driver who was properly in her lane will face huge liability for having left its lane, while if it hits the surprise jaywalkers, the liability is likely to be much less, or even zero, due to their personal responsibility. The programmers normally won't be making that decision; the law already makes it. When they find cases where the law and precedent don't offer any guidance, they will probably take the conservative decision, and also push for the law to provide that guidance. The situations will be so rare, however, that a reasonable judgment will be to not wait on getting an answer.
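The conservative policy described above amounts to a simple decision rule, which can be sketched as follows. This is purely an illustration of the logic, not anything from a real driving system; the maneuver representation, the `within_row` flag and the `expected_harm` score are all hypothetical assumptions.

```python
def choose_maneuver(maneuvers):
    """Pick a maneuver, strongly preferring those within the legal right-of-way.

    Each maneuver is a dict with:
      'within_row'    -- bool, stays inside the car's legal right-of-way
      'expected_harm' -- float, estimated harm (lower is better); illustrative only
    """
    legal = [m for m in maneuvers if m["within_row"]]
    # Leaving the right-of-way is considered only if no legal option exists at all.
    candidates = legal if legal else maneuvers
    return min(candidates, key=lambda m: m["expected_harm"])

# Example: braking hard in-lane is chosen over swerving into the oncoming lane,
# even though swerving scores slightly lower expected harm, because swerving
# leaves the right-of-way.
options = [
    {"name": "brake_in_lane", "within_row": True, "expected_harm": 0.3},
    {"name": "swerve_oncoming", "within_row": False, "expected_harm": 0.2},
]
print(choose_maneuver(options)["name"])  # -> brake_in_lane
```

The point of the sketch is that the "ethical" weighing never reaches the harm comparison in ordinary cases: legality filters the options first, which mirrors how liability law already makes the decision for the programmers.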
Real human driving does include a lot of breaking the law. There is speeding, of course. There's aggressively getting your share in merges, 4-way stops and 3-point turns. And a whole lot more. Over time, the law should evolve to deal with these questions, and make it possible for the cars to compete on an equal footing with humans.
Swerving is particularly troublesome as an answer, because the cars are not designed to drive on the sidewalk, shoulder or in the oncoming lane. Oh, they will have some effort put into that, but these "you should not be doing this" situations will not get anywhere near the care and testing that ordinary driving in your proper right-of-way will get. As such, while the vehicles will have very good confidence in detecting obstacles in the places they should go, they will not be nearly as sure about their perceptions of obstacles where they shouldn't legally go. A car won't be as good at identifying pedestrians on the sidewalk because it should normally never drive on the sidewalk. It will instead be very good at identifying pedestrians in crosswalks or on the road. Faced with the option to avoid something by swerving onto the sidewalk, programmers will have to consider that the car can't be as confident it is safe to make this illegal move, even if the sidewalk is in fact perfectly clear to the human eye. (Humans are general-purpose perception systems and can identify things on the sidewalk as readily as they can spot them on the road.)
It's also asking a lot more to have the cars able to identify subtleties about pedestrians near the road. If you decide a child should be spared over an adult, you're asking the car to be able to tell children from adults, children from dwarves, tall children from short adults -- all to solve this almost-never-happens problem. This is no small ask, since without this requirement, the vehicles don't even have to tell a dog from a crawling baby -- they just know they should not run over anything roughly shaped like that.
We also have to understand that humans have so many accidents that as a society we've come to accept them as a fact of driving, and have built a giant insurance system to arrange financial compensation for the huge volume of torts created. If we tried to resolve every car accident in the courts instead of through insurance, we would vastly increase the cost of accidents. In some places, governments have moved to no-fault laws because they realize that battling over something that happens so often is counterproductive, especially when, from the standpoint of the insurers, it changes nothing to tweak which insurance company pays on a case-by-case basis. In New Zealand, they went so far as to just eliminate liability in accidents, since in all cases the government health or auto insurance always paid every bill, funded by taxes. (This does not stop people having to fight the Accident Compensation Crown Corporation to get their claims approved, however.)
While the insurance industry's total size will dwindle if robocars reduce accident rates, there are still lots of insurance programs that handle much smaller risks just fine, so I don't believe insurance is going away as a solution to this problem, even if it shrinks.
So are there no ethical issues?
The trolley problem fascinates us, but it's more interesting than it is real. There are real ethical questions (covered in other articles here) which need to be dealt with today. Many of them derive from the fact that human drivers violate the strict rules all the time in their driving, to the point that in many places it's impractical or even impossible to drive strictly by the book, or even strictly by highly conservative defensive driving. Cars must assert their rights at 4-way stops, speed, force their turn at merges and sometimes cross the double-yellow line to get around obstacles. Figuring out how to get law-bound programmers to make this work is an interesting challenge.