Enough with the Trolley problem, already

More and more often, mainstream articles about robocars raise some variation of the “Trolley Problem.” The latest is this article on the Atlantic website. In the classic Trolley Problem, you see a trolley hurtling down the track, about to run over 5 people, and you can switch it to another track where it will kill only one person. There are a number of variations, meant to examine our views on the morality and ethics of letting people die vs. actively participating in their deaths, even deliberately killing some to save others.

Often this is mapped onto the robocar world by considering a car that is forced to run over somebody and has to choose whom. Suggested choices include deciding between:

  • One person vs. two
  • A child vs. an adult
  • A person vs. a dog
  • A person without right-of-way vs. others who have it
  • A deer vs. the added risk of swerving around it into the oncoming lane
  • The occupant or owner of the car vs. a bystander on the street
  • The destruction of an empty car vs. injury to a person who should not be on the road, but is

I don’t want to pretend that this isn’t an interesting moral area, and it will indeed affect the law, liability and public perception. At some point, programmers will have to weigh these scenarios in their work. What I reject is the suggestion that this is high on the list of important issues and questions. It is high on the list of questions that are interesting for philosophical debate, but that is not the same as being important in reality.

In reality, such choices are extremely rare. How often have you had to make such a decision, or heard of somebody making one? The ideal handling of such situations is difficult to settle, but there are many other, far more common issues to settle as well.

Secondly, in the rare situations where a human encounters such a moral dilemma, that person does not sit there and hold an inner philosophical dialogue about which choice is most moral. Rather, they go with a quick gut reaction, based on their character and their past thinking about such situations. Or it may not be well based on either, because the decision must be made in an instant. A robot is likewise incapable of a deep internal philosophical debate in the moment, so robots will also make decisions based on their “gut,” which is to say the way they were programmed, well in advance of the event.

Focus on the trolley problem creates, with some irony, a meta-trolley problem. If people (especially lawyers advising companies or lawmakers) start expressing the view that “we can’t deploy this technology until we have a satisfactory answer to this quandary,” then they face the reality that, if the technology is indeed life-saving, people who could have been saved will die because of their advised inaction, all in order to be sure the right people are saved in rare, complex situations. Of course, the problem itself speaks mostly about the difference between failure to save and overt action!

I suspect companies will make very conservative decisions here, as advised by their lawyers, and they will mostly base those decisions on the rules of the road. If there is a scenario where the car would hit somebody who actually has the right-of-way, the teams will look for a solution to that. They won’t go around a blind corner so fast that they could hit a slow car or cyclist. (Humans go around blind corners too fast all the time, and usually get away with it.) They won’t swerve into oncoming lanes, even ones that appear to be empty, because society will heavily punish a car that deliberately leaves its right-of-way and ends up hurting somebody. If society wants a different result here, it will need to clarify the rules.

The hard fact of the liability system is that a car facing 5 jaywalking pedestrians that swerves into the oncoming lane and hits a solo driver who was properly in her lane will face huge liability for having left its lane, while if it hits the surprise jaywalkers, the liability is likely to be much less because of their personal responsibility. The programmers normally won’t be making that decision; the law already makes it. Where they find cases in which the law and precedent offer no guidance, they will probably take the conservative choice, and push for the law to provide that guidance. The situations will be so rare, however, that a reasonable judgment is to not wait on getting an answer.
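
To make that concrete, here is a minimal, purely hypothetical sketch in Python of what a decision made “well in advance of the event” looks like. Every rule, name and number in it is invented for illustration; no real robocar stack is claimed to work this way.

    # Hypothetical sketch only: a conservative, pre-programmed ranking of
    # candidate maneuvers, written long before any incident occurs.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        breaks_traffic_law: bool   # e.g. crossing into the oncoming lane
        leaves_lane: bool          # gives up the car's own right-of-way
        collision_risk: float      # estimated risk of hurting someone, 0..1

    def choose_maneuver(candidates):
        # The "gut" decided in advance: Python compares these tuples left
        # to right, so legality and holding the lane always outrank the
        # moment-to-moment risk estimate.
        return min(candidates, key=lambda m: (
            m.breaks_traffic_law, m.leaves_lane, m.collision_risk))

    options = [
        Maneuver("brake hard in lane", False, False, 0.30),
        Maneuver("swerve into oncoming lane", True, True, 0.20),
    ]
    print(choose_maneuver(options).name)  # -> brake hard in lane

The point of the sketch is the ordering, not the numbers: the risk estimate only breaks ties among legal, in-lane options, which is the conservative, liability-driven behavior described above.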

Real human driving does include a lot of breaking the law. There is speeding, of course. There is aggressively claiming your share in merges, 4-way stops and 3-point turns. And a whole lot more. Over time, the law should evolve to deal with these questions, and make it possible for the cars to compete on an equal footing with human drivers.

We also have to understand that humans have so many accidents that, as a society, we’ve come to accept them as a fact of driving, and have built a giant insurance system to arrange financial compensation for the huge volume of torts created. If we tried to resolve every car accident in the courts instead of through insurance, we would vastly increase the cost of accidents. In some places, governments have moved to no-fault laws because they realize that battling over something that happens so often is counterproductive, especially when, from the standpoint of the insurers, it changes little which company pays in any given case. New Zealand went so far as to simply eliminate liability in accidents, since in all cases the government health or auto insurance, funded by taxes, always paid every bill. (This does not stop people from having to fight the Accident Compensation Crown Corporation to get their claims approved, however.)

While the insurance industry’s total size will shrink if robocars reduce accident rates, there are plenty of insurance programs out there that handle much smaller risks just fine, so I don’t believe insurance will go away as a solution to this problem, even if it gets smaller.