Let the policymakers handle the "trolley" problems
When I give talks on robocars, the most common question, by far, is the one known as the "trolley problem": "What will the car do if it has to choose between killing one person or another?" or some related dilemma. I have written frequently about how this is a very low priority question in reality, far more interesting to philosophy classes than it is important in practice. It is a super-rare event, and there are much more important everyday ethical questions that self-driving car developers have to solve long before they will tackle this one.
In spite of this, the question persists in the public mind. We are fascinated by, and afraid of, the idea of machines making life-or-death decisions. The tiny number of humans faced with such dilemmas don't have a detailed ethical debate in their minds; they can only go with their "gut" or very simple, quick reasoning. We are troubled because machines don't distinguish between instant reactions and carefully pondered ones. The one time in billions of miles(*) that a machine faces such a question, it would presumably make a calculated decision based on its programming. That's foreign to our nature, and indeed not a task desired by the programmers or vendors of robocars.
There have been calls to come up with "ethical calculus" algorithms and put them in the cars. As a programmer, I could imagine coding such an algorithm, but I certainly would not want to, nor would I want to be held accountable for what it does, because by definition it's going to do something bad. The programmer's job is to make driving safer. Left on their own, I think most builders of robocars would try to punt the decision elsewhere if possible. The simplest way to punt the decision is to program the car to follow the law, which generally means staying in its right-of-way. Yes, that means running over 3 toddlers who ran into the road instead of veering onto the sidewalk to run over Hitler. Staying in your lane is what the law says to do, and you are not punished for doing it. The law strongly forbids going onto the sidewalk or into another lane to deliberately hit something, no matter who you might be saving.
We might not like the law, but we do have the ability to change it.
Thus I propose the following: Driving regulators should create a special panel which can rule on driving ethics questions. If a robocar developer sees a question which requires some sort of ethical calculation whose answer is unclear, they can submit that question to the panel. The panel can deliberate and provide an answer. If the developer conforms to the ruling, they are absolved of responsibility. They did the right thing.
The panel would of course include people with technical skill, to make sure rulings are reasonable and can be implemented. Petitioners could also appeal rulings that would impede development, though in any petition they would probably suggest answers and describe the difficulty of implementing them.
The panel would not simply be presented with questions like "How do you choose between hitting 2 adults or one child?" It might make more sense to propose formulae for evaluating many different situations. In the end, it would need to be reduced to something you can do with code.
Very important to the rulings would be an understanding of how certain requirements could slow down robocar development or raise costs. For example, a ruling that a car must make a decision based on the number of pedestrians it might hit demands that it be able to count pedestrians. Today's robocars may often be unsure whether a blob is 2 or 3 pedestrians, and nobody cares, because generally the result is the same -- you don't want to hit any number of pedestrians. Likewise, a requirement to know the age of people on the road demands a great deal more of the car's perception system than anybody would normally develop, particularly if you imagine asking it to tell a dwarf adult from a child. Writers in this space have proposed questions like "How do you choose between one motorcyclist wearing a helmet and another not wearing one?" (You are less likely to kill the helmet wearer, but the bare-headed rider is the one who accepted greater risk and broke the helmet law.) Hidden in this question is the idea that the car would need to be able to tell whether somebody is wearing a helmet or not -- a much bigger challenge for a computer than for a human. If a ruling demanded the car be able to figure this out, it would make developing the car harder just to solve an extremely rare problem.
This invokes the "meta-trolley problem." Here we see a life-saving technology like robocars made more difficult, and thus delayed, in order to solve a rare philosophical problem. That delay means more people die, because the robocar technology which could have saved them was not yet available. The panels would be expected to consider this. As such, problems sent to them would not be expressed in absolutes. You might ask, "If the system assigns an 80% probability that rider 1 is wearing a helmet, do I do X or Y?" after you have determined that that level of confidence is technically doable.
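To make that concrete, here is a minimal sketch of how such a probability-framed ruling might reduce to code. Everything here -- the function name, the actions, the 80% threshold, the fallback of staying in the lane -- is a hypothetical illustration, not a real ruling or API.

```python
# Hypothetical sketch: a panel ruling framed in perception confidence, not absolutes.
# All names, actions and numbers are illustrative assumptions.

DEFAULT_ACTION = "stay_in_lane"  # the ordinary vehicle-code behaviour


def apply_helmet_ruling(p_rider1_helmet: float,
                        ruled_action: str = "action_X",
                        threshold: float = 0.80) -> str:
    """If the perception system is at least `threshold` confident that rider 1
    is wearing a helmet, take whatever action the panel ruled; otherwise fall
    back to the default behaviour the law already prescribes."""
    if p_rider1_helmet >= threshold:
        return ruled_action
    return DEFAULT_ACTION


print(apply_helmet_ruling(0.83))  # -> action_X
print(apply_helmet_ruling(0.55))  # -> stay_in_lane
```

The point of the threshold is exactly the one above: the ruling only binds the car when its perception is good enough, so it never forces developers to build perception they don't have.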
This is important because a lot of the "trolley problem" questions involve the car departing its right-of-way to save the life of somebody in that path. 99% of the effort going into developing robocars is devoted to making them drive safely where they are supposed to be. There will always be less effort put into making sure the car can do a good job veering off the road and onto the sidewalk. It will not be as well trained and tested at identifying the obstacles and hazards of the sidewalk. Its maps will not be designed for driving there. Any move out of normal driving situations increases the risk and the difficulty of the driving task. People are "general purpose" thinking machines; we can adapt to what we have never done before. Robots are not.
I believe vendors would embrace this idea because they don't want to be making these decisions themselves, and they don't want to be held accountable for them if they turn out to be wrong (or even if they turn out to be right.) Society is quite uncomfortable with machines deliberately hurting anybody, even if it's to save others. Even the panel members would not be thrilled with the job, but they would not have personal responsibility.
Neural Networks
It must be noted that all these ideas (and all other conventional ideas on ethical calculus for robots) are turned upside-down if cars are driven by neural networks trained by machine learning. Some developers hope to run the whole driving process this way. Some may wish to do only the "judgment on where to go" part that way. Almost everybody will use them in perception and classification. You don't program neural networks, and you don't know why they do what they do -- you only know that when you test them, they perform well, and they are also often better at dealing with unforeseen situations than traditional approaches.
As such, you can't easily program a rule (including a ruling from the panel) into such a car. You can show it examples of humans following the rule as you train it, but that's about it. Because many of the situations above are dangerous and even fatal, you clearly can't show it real-world examples easily, though there are some tricks you can do with robotic inflatable dummies and radio-controlled cars. You may need to train it in simulation (which is useful, but runs the risk of the network latching onto artifacts of the simulation not seen in the real world).
Neural network systems are currently the AI technology most capable of human-like behaviour. As such, it has been suggested they could be a good choice for ethical decisions, though it is certain they would surprise everybody in some situations, and not always in a good way. They will sometimes do things that are quite inhuman.
It has been theorized that they have a perverse advantage in the legal system because they are not understood. If you can't point to a specific reason the car did something (such as running over a group of 2 people instead of a single person), you can't easily show in court that the developers were negligent. The vehicle "went with its gut," just like a human being.
Everyday ethical situations and the vehicle code
The panels would actually be far more useful not for solving the very rare questions, but the common ones. Real driving today, in most countries, involves constantly breaking or bending the rules. Most people speed. People constantly cut other people off. It is often impossible to get through traffic without technically cutting people off, which is to say moving into their path and expecting them to brake. Google caused its first accident by moving into the path of a bus it thought would brake and let it into the lane. In some of the more chaotic places of the world, a driver adhering strictly to the law would never get out of their driveway.
The panels could be asked questions like these:
- "If 80% of cars are going 10mph over the speed limit, can we do that?" I think that yes would be a good answer here.
- "If a stalled car is blocking the lane, can we go slightly over the double-yellow line to get around that car if the oncoming lane is sufficiently clear?" Again, we need the cars to know that the answer is also yes.
- "If nobody will let me in to a lane, when can I aggressively push my way in even though the car I move in front of would hit me if it maintains its speed?"
- "If I decide, one time in 100, to keep going and gently bump somebody who cuts me off capriciously in order to stop drivers from treating me like I'm not there, is that OK?"
- "If I need to make a quick 3 point turn to turn around, how much delay can I cause for oncoming traffic?"
- "If the intersection allows a left turn only on a green arrow, but my sensors give me 99.99999% confidence the turn is safe, can I make it anyway?" (This actually makes a lot of sense.)
- "Is it OK for me to park in front of a hydrant, knowing I will leave the spot at the first sound, sight or electronic message about fire crews?"
- "Can I make a rolling stop at a stop sign if my systems can do it with 99.999999% safety at that sign?"
There are many more such situations. Cars need answers to these today, because they will encounter these problems every day. The existing vehicle code was written with a strong presumption that human drivers are unreliable. We see many places where things like left turns are prohibited even though they would almost always be safe, because humans can't be trusted to have highly reliable judgement. In some cases, the code has to assume human drivers will be greedy and obstruct traffic if they are not forbidden from certain activities, where robocars can be trusted to promise better behaviour. In fact, in many ways the entire vehicle code is wrong for robocars and should be completely replaced, but since that won't happen for a long time, the panels could rule on reasonable exceptions which promote robocars and improve traffic.
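As a rough illustration of what "reducing a ruling to something you can do with code" might look like for these everyday cases, here is a minimal sketch. The rule names, fields and values are all hypothetical assumptions, not any real standard or ruling.

```python
# Hypothetical sketch: everyday panel rulings distributed as a machine-readable
# rule set the driving software can consult. Names and values are illustrative.

PANEL_RULINGS = {
    "speed_over_limit": {
        "allowed": True,
        "max_over_mph": 10,
        "condition": "prevailing traffic is also over the limit",
    },
    "cross_double_yellow_around_stalled_car": {
        "allowed": True,
        "condition": "oncoming lane clear for required sight distance",
    },
    "rolling_stop": {
        "allowed": False,  # e.g. the panel might decline this one for now
        "condition": None,
    },
}


def ruling_permits(situation: str) -> bool:
    """Return True only if the named situation has an affirmative ruling on file."""
    ruling = PANEL_RULINGS.get(situation)
    return bool(ruling and ruling["allowed"])


print(ruling_permits("speed_over_limit"))    # -> True
print(ruling_permits("rolling_stop"))        # -> False
print(ruling_permits("unlisted_situation"))  # -> False: no ruling, follow the code as written
```

The useful property is the default: anything without an affirmative ruling falls back to the vehicle code as written, so the panel's job is only to carve out the sensible exceptions.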
How often for the "big" questions?
Above, I put a (*) next to the statement that the "who do I kill?" question comes up once in a billion miles. I don't actually know how often it comes up, but I know it's very rare, probably much rarer than this. For example, human drivers kill only about 12 people in total per billion miles of driving, and most fatalities are single-vehicle accidents (the car ran off the road, often because the driver fell asleep or was drunk). If I had to guess, I would suspect real "who do I kill?" questions come up more like once every 100 billion miles, which is to say once in 200,000 lifetimes of human driving -- a typical person drives around 500,000 miles in their life. Yet even at 100 billion miles it would still mean it happens 30 times/year in the USA, and frankly you don't see this on the news or in fatality reports very often.
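For what it's worth, the back-of-envelope arithmetic behind those numbers looks like this. The figure of roughly 3 trillion vehicle-miles driven per year in the USA is my assumption for illustration.

```python
# Back-of-envelope check of the estimates above. The annual US mileage figure
# (~3 trillion vehicle-miles) is an assumption used only for illustration.

us_vehicle_miles_per_year = 3.0e12   # ~3 trillion miles driven per year in the USA
dilemma_interval_miles = 100e9       # guess: one real "who do I kill?" event per 100 billion miles
lifetime_driving_miles = 500_000     # typical miles one person drives in a lifetime

print(us_vehicle_miles_per_year / dilemma_interval_miles)  # -> 30.0 events per year in the USA
print(dilemma_interval_miles / lifetime_driving_miles)     # -> 200000.0 driving lifetimes per event
```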
There are arguments that put the number at a more frequent level when you consider an unmanned car's ability to do something a human-driven car can't do -- namely, drive off the road and crash without hurting anybody. In that case, I don't think the programmers need a lot of guidance -- the path with zero injuries is generally an easy one, though driving off the road is never risk-free. It's also true that robocars would find themselves able to make these decisions in places where we would never imagine a human doing so, or even being able to do so.
Jurisdictions
These panels would probably exist at many levels. Rules of the road are a state matter in the USA, but safety standards for car hardware and software are a federal matter. Certainly it's easier for developers to have only national rulings to worry about, but it's also not tremendously hard to load different rule modules when you move from one state to another. As is the case in many other areas of law, states and countries have ways to get together to normalize their laws for practical reasons like this. Differing software rules are not nearly as much of a problem as differing hardware requirements in the cars would be. (Though it's not out of the question a panel might want to indirectly demand a superior sensor to help a car make its determinations.)
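A minimal sketch of what "loading different modules" could mean in practice, assuming a simple per-jurisdiction rule table. The jurisdiction codes, fields and values here are invented for illustration.

```python
# Hypothetical sketch: swapping in a jurisdiction-specific rule module when the
# car crosses a border. Jurisdiction codes, fields and values are made up.

JURISDICTION_RULES = {
    "state_A": {"right_turn_on_red": True,  "panel_max_over_limit_mph": 10},
    "state_B": {"right_turn_on_red": False, "panel_max_over_limit_mph": 5},
}


class RuleBook:
    def __init__(self, starting_jurisdiction: str):
        self.active = JURISDICTION_RULES[starting_jurisdiction]

    def crossed_border(self, new_jurisdiction: str) -> None:
        """Swap in the rule module for the jurisdiction just entered,
        keeping the current one if no module is on file."""
        self.active = JURISDICTION_RULES.get(new_jurisdiction, self.active)


rules = RuleBook("state_A")
rules.crossed_border("state_B")
print(rules.active["right_turn_on_red"])  # -> False
```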
Comments
Pedro
Fri, 2016-06-10 14:39
Since governments will
Since governments will certainly be calling the shots in this arena, we can assume that the ever-loving bureaucrats will know exactly what to do. Every car will be networked, so in these rare circumstances all that is needed is a quick identification of those folks in the path by matching some observable traits (facial features, dimensions, gait, etc.) to government databases. The government system can quickly return a 'value score'. We all know what this means, so no need for elaboration.
brad
Fri, 2016-06-10 14:48
That's a satire
But there are people who do imagine cars will talk to other cars, rather than being deathly afraid, for security reasons, of communicating with unknown parties -- which is the likely reality. So some people might not get the satire!
When asked about trolley problems, I like to pose it as "choosing between running over Hitler and Goebbels in one lane or Gandhi in the other" -- for humans the choice is obvious, but for cars, having them judge the moral value of the different people, or even tell 2 people from 1, is a very different story.
Robert Woodhead
Fri, 2016-06-10 18:32
Trolley Settings
One idea I've toyed with to attack the trolley problem is what I call the "Assholometer". It's a dial in the car you can set that tells it how big of an asshole you are, and how it should behave in an accident situation. 1 means "I cannot live being the cause of the death of another; save them in preference to me", while 10 is "I don't care if it means a bus-full of babies is going off the bridge, save my ass".
The twist is that all the cars tell all the other cars how much of an asshole the occupant is, and they use that information to decide how to prioritize traffic. So if you set the assholometer to 10, all the other cars are going to say "Fuck me? No! Fuck you!" and not be as nice to you when your car wants to merge, or is at an intersection. So it'll take you longer to get where you're going, whereas the saint who sets his dial to 1 will basically be waved through every situation.
:)
brad
Fri, 2016-06-10 21:27
Cute but not practical
This breaks the fundamental rule of "your technology should answer to you and not betray you." So it could be OK to offer such a setting voluntarily, but in the USA, forcing you to broadcast it would be what is called "compelled speech" and would not be allowed under the Constitution.
And again, in reality situations like this occur only after billions of miles, so in reality the setting would be applied to almost nobody, while the retribution you describe would be to everybody. The events are so rare, in fact, that one main goal of having a panel at the DMV to rule about it is to take a super-unlikely but vexing issue off the plate of the people making the cars.
David Rostcheck
Fri, 2016-06-10 21:33
I think the neural network approach will win anyway
and it's a much easier fit with existing law because it doesn't have explicit rules; it has deep implicit understanding. Look at how AlphaGo worked, and I think we will see a similar approach. The team trains the network on expert drivers and it learns to do what they do. They give it a penalty function that prioritizes human life, property damage, and avoiding traffic infractions, in that order. From then on out, it's all games, be they real or virtual. After a certain point, networks "play" against each other (or different instantiations of themselves, as AlphaGo did). This works well with the law because it's easy to set and test standards. We're already seeing such proposals ("self-driving cars must be twice as good as a human driver"). They are a straightforward sell legislatively and they can be tested: look at the stats (or administer a test), and we'll see how good they are.
Programmers are hung up on explicit rules because they are programmers. But the future does not belong to programmers, it belongs to intelligence. We cannot write a rules-based (heuristic) program that can beat a human at Go, but we can train an AI that can.
brad
Sat, 2016-06-11 08:06
This could happen
But as noted, it's at odds with the overt fascination people have with trolley problems and with machines deciding who lives and who dies. So long as the situations are super-rare, even neural nets would need to be "programmed" by being trained on simulations of these events. You're not going to find real-world recordings of people deliberately hitting a motorcyclist wearing a helmet over one not wearing a helmet (or vice versa, whichever one is "right"), so if the "trolley problems are important" crew continue to demand attention to this, you get something different from ordinary machine learning.
In a way, it's similar to humans, who have both their gut and their reasoning to guide them, along with the formal study of the law. Humans need both to drive.
Evan Th.
Mon, 2016-06-13 17:21
I strongly disagree with
I strongly disagree with this proposal, because I think it would be practically impossible to assemble a responsible panel under current conditions. Practically speaking, for the government to hold robocar manufacturers harmless, they'll require the panel to be totally disconnected from the industry. So panelists will be insulated from the practical consequences of their decisions for robocar development; they'll see no disincentive to requiring a car to flawlessly identify a child or count the number of pedestrians. Perhaps some of them will be computer scientists who could recognize how difficult that would be, but probably not even a majority, given current politics. And even computer scientists won't be immune from the bad press attending one of their rulings.
So, while this is a real problem, I think your proposed solution is even worse. Robocar manufacturers would have to either indefinitely postpone rollout or roll out anyway, ignoring this panel's rulings.
brad
Mon, 2016-06-13 21:24
That would be a failure
And clearly not what you want to charge the panel with doing.
But can anybody imagine the companies and developers making such decisions themselves? It is a minefield for them. If they factor in the difficulty of development and the result is hitting the wrong person, it can be ruinous. Only outsiders, like this panel, can safely make such a decision. Yes, they will be criticised, but not sued out of existence or held personally liable.
In many ways, the answers to some of these ethical dilemmas don't matter so much as the fact that there is an answer. The question about the two motorcyclists has 3 answers (pick the one with the helmet, the one without, or choose at random). All the answers are bad and could get you punished. All a vendor cares about is having an answer that does not leave them liable. Now, with big numbers you could see some risk for the panel -- could they safely rule "hit a schoolbus full of kids in your lane rather than veer onto the sidewalk and kill the grandmother" and not get a public outcry if it ever happens? Perhaps not, and they might be replaced, but that's only in the very unlikely event something like this actually happens.
The panel would be charged, then, not with finding the most ethical solutions, but the most expedient ones. Because there is not one right answer to trolley problems. That's why they are problems. Both answers are wrong. That's a good role for the government, to take the heat when both answers are wrong, and do the best job it can with that unwinnable situation. But it would be charged to do this "in the context of assuring the quick progress of the technology to save lives."
I could even see a ruling that says, "It's too hard to tell 3 pedestrians from 2 today, so today, choose at random. But in 3 years we expect you to be able to tell 3 from 2, and hit the 2."
Anonymous
Thu, 2018-12-27 09:51
Death panels!
n/t