Trolleys, Risk and Consequences: A Model For Understanding Robocar Morality

One of the most contentious issues in robocars is the set of moral questions involved in testing and deploying them. We hope they will prevent many crashes and save many lives, but we know that, because they are imperfect, they also create a risk of causing other crashes, both during testing and after deployment. People regularly wonder whether they should be out there being tested on city streets, or ever deployed at all. Even with numbers that are perhaps the most overwhelmingly positive from a utilitarian standpoint, we remain uncertain.

I've written much on this over the years, and have now prepared a fairly detailed analysis of why we think the way we think, both as individuals and as a society. I offer a path to better understanding: recognizing the different and sometimes contradictory moral theories that exist simultaneously within ourselves, and searching for a path to reconciliation by looking at vast amounts of microrisk instead of tragedies.

Fortunately not an everyday real world sight

With some irony, I even refer to the "trolley problem." Not the vexing and dangerous misapplication of that problem to robocars deciding "who to kill" that I have often railed against, but rather the original trolley problem, the philosophy class exercise built to help us understand our own moral thinking on issues involving machines, death and human action.

Bear with me -- this is not a short essay but I hope it's worth it.

See my essay at Trolleys, Risk and Consequences: A Model For Understanding Robocar Morality

Comments

"You can grab a man walking down the street, cut him up and save all 5 of them — almost nobody will do that, even though under pure utilitarian rules it is the same problem."

Perhaps, but the reason it is not perceived as the same problem is that one would then have to fear this in day-to-day life. In other words, it would indeed save more lives -- the first and only time it happened. After that, an arms race would ensue, killing far more people, as people tried to protect themselves from the utilitarians. This is a different situation from the one where the people are already tied to the track, where any and all deaths can be blamed on the person who did the tying. It is actually much closer to -- essentially identical with -- the push-off-the-fat-guy version.

The point is that different people say yes or no to the "push a guy," "harvest the organs" and "derail train into house" scenarios. That's the purpose of the problem: to understand why people react differently to those. And yes, both pushing the guy and harvesting involve the murder of an innocent, and should be the same.

It's not too hard to figure out why few go for the harvesting. We do indeed blame the guy who put the people on the tracks, though in some versions of the TP, the people are just standing oblivious on the tracks. But we still are more comfortable with having a trolley be the instrument of their deaths than doing it ourselves.

But my main point is to examine how people feel about the ends justifying the means. We don't accept justifying murder, usually.

I'm definitely not a utilitarian, so the vast majority of that article is focussed on issues that I'd consider to be the wrong ones.

I think assumption of the risk is a key concept to consider. By using the public roadways, you knowingly and voluntarily expose yourself to a certain amount of risk. I don't think it's right for a robocar company to expose people to a significantly increased amount of risk without their permission, even if they think it'll save people's lives. Most companies (Uber being an exception) probably aren't crossing that line.

I also don't think, in most if not all cases, that the "testing" these companies do will in fact save any lives at all. Uber again is the easy target to pick on. Uber isn't going to be the first company to create a self-driving car. Maybe one day they'll buy self-driving car technology from someone else. But their design as of the time of the fatal crash was pretty obviously awful. There was no benefit whatsoever to them "testing" it. They likely realized that, which was why they stopped.

I know, you'll say that all companies started out that way. You'll say Google started out that way too. Here's the thing: I don't think the testing that Google did 7 years ago had any benefit either, even if they eventually create a self-driving car. The advances that have been made in AI hardware and software would have been made regardless of whether or not Google did that "testing" they did 7 years ago.

Perhaps you think the testing will not improve or speed up R&D -- most don't agree, but as long as you consider it an unsettled question, the point is that testing with safety drivers is not exposing the public to any more risk than other driving exposes them to -- driving to deliver pizzas or to get people to work. It's actually exposing them to less, and a lot less than they are exposed to by newly minted drivers.

You start with a bad design; nobody knew how to start with an already good one. Perhaps you can argue that today, or in the future, there will be a baseline open source package that lets you start from a higher level.

I think Google learned immense amounts in that testing. I was there.

If "testing with safety drivers is not exposing the public to any more risk than other driving exposes them to," then I don't think we should do anything to stop it.

Of course most people involved in trying to build self-driving cars think they're going to succeed. But almost all of them are wrong.

Yes, you learn things by learning what doesn't work. But you don't need to do much testing to know your car doesn't work, when it's as bad as Uber's was.

--

"But we still are more comfortable with having a trolley be the instrument of their deaths than doing it ourselves."

Maybe some. But I don't think that explains the different answers. If the trolley goes off the tracks and kills someone sleeping in his home, people are more reluctant to accept that. So it's not just the fact that the trolley is the instrument.

I think a lot of it is that the trolley, even if it switches tracks, is just doing something that trolleys are expected to do. To be clear, I'm not saying this is a legitimate justification. The trolley problem is generally set up as an extremely convoluted situation. I'd say in most cases it is in fact an impossible scenario, because in real life you don't have that much certainty nor time to decide nor limited choices. You confuse people with a convoluted question, and a lot of them get the answer wrong. Not wrong with respect to some intrinsically correct answer, but wrong in that the answer doesn't even reflect the values that the person actually wants to have.

The fact that the trolley is just doing something that trolleys are expected to do is important. It's not sufficient, as many of the convoluted corner cases presented by trolley problems show. But most of the time, it is important.

--

In terms of the usefulness of the trolley problem to self-driving, I don't think the pure trolley problem is useful, because it is convoluted and impossible. But a large part of driving does involve knowing when it's okay to break certain rules. The stakes usually aren't so high as there being a high likelihood that any choice you make will kill someone; you never know with certainty what the outcome is going to be; and you never have only two choices. But you do have to make choices, and those choices often have risks and benefits.

The use of the trolley problem is the use it was designed for: to help study and understand human moral thinking. It has no real world application in designing software.

And yes, many teams will fail. So all we need is to figure out who that is in advance, and only let the ones who will succeed onto the road.

Wasn't the trolley problem designed for the purpose of showing the absurdity of consequentialism?

(Edit: Yeah, pretty much. https://philpapers.org/archive/FOOTPO-2.pdf "The question is why we should say, without hesitation, that the driver should steer for the less occupied track, while most of us would be appalled at the idea that the innocent man could be framed." Interestingly, in the original formulation, my suggestion of "assumption of the risk" provides a key difference between the two situations. The workers on the track are knowingly and voluntarily assuming the risk of an accident. The innocent man who is framed is not. It is a more realistic scenario than the more modern formulations, and I wouldn't hesitate in saying that the driver should steer for the less occupied track in the original formulation of the "problem." In the original formulation of the "trolley problem," there is no evildoer. It's just an accident.)

(Put another way, I'd say that the "trolley problem," in its original formulation, was not a problem at all, except to consequentialists. It's not a problem. It's one half of an example of why consequentialism is wrong.)

The trolley problem itself has no real world application in designing software. The need to make moral decisions where no decision is free from risk (and, rarely though frequently enough that it needs to be considered, where no decision is free from near-certain harm) is going to be one of the key challenges in designing self-driving cars.

As I've said before many times, I think all teams should be allowed on the road, unless and until they lose that right (so far, only Uber). I don't think they even should need advance permission, so long as there is a licensed driver in the car who is the driver. That doesn't mean I think that "safety driver" testing makes any sense, or that it is moral to use it when you have a product that you know doesn't work.

And it is still used to help people understand how their own views and those of others are sometimes consequentialist and sometimes deontological. Most people in the basic problem do throw the switch to kill only one.

It is correct that this has no application in designing software, as I have said many times.

Uber does not appear to have lost the right for long. NTSB waxed about how they had improved themselves during the investigation, and seemed to almost go so far as to say they should be allowed back. And redemption is certainly possible, though punishment (beyond the settlement they paid the family) might have made some sense.

I don't know what you mean by "doesn't work." The whole point is that all the current products in testing don't work, as in, are not ready for production release. Except perhaps Waymo, but only barely.

Unfortunately, the trolley problem has been abused by some to make some people think their views are sometimes consequentialist, even though they aren't. The whole point of the original scenario is that just about everyone would choose to direct the trolley to kill only one. (There was no switch originally. It was the driver of the trolley making the decision, which makes a significant difference.) But just because you choose the same result as a consequentialist in some situations, that doesn't make your views "sometimes consequentialist." I hope you can see that. Of course the driver in the original formulation should swerve toward the one instead of toward the five. Doing otherwise would be morally repugnant. That has zero to do with consequentialism, though. And it has zero to do with the convoluted modern formulation of the "problem" involving people tied up on the tracks and a person magically placed at a switch who has a magical ability to see the future. (That scenario only serves to confuse people about their moral beliefs, because it hypothesizes a world that doesn't exist; perhaps a world where consequentialism wouldn't be evil, because we could perfectly see the future.)

Consequentialism is wrong, and in fact it's not hard to see that it's outright evil. The fact that some people are taught to believe that it's sometimes right is a travesty, and certainly wasn't the point of the creator of the trolley "problem."

One application this has to designing software is that we need to learn how to make moral software, and we need to learn that consequentialism is immoral. We need to teach the software how to make the decision to try to cause the least amount of harm, when in a situation where a significant risk of causing harm is unavoidable. We need to teach the car to hit the other vehicle, and not the child that darted out in front of it, if that's the choice. (In Uber's case, they needed to teach the car to hit the brakes, and not a jaywalker.)
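As a concrete illustration of the kind of harm-minimizing decision described here, a minimal, hypothetical sketch follows. Nothing in it reflects any real team's code; the maneuver names, probabilities and severity weights are invented purely to show the shape of a risk-weighted choice.

```python
# Hypothetical sketch of harm-weighted maneuver selection.
# All maneuvers, probabilities and severity weights are invented for
# illustration; no real planner is anywhere near this simple.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_collision: float  # estimated probability that this maneuver ends in a collision
    severity: float     # relative severity if it does (higher for vulnerable road users)

def expected_harm(m: Maneuver) -> float:
    """Expected harm = probability of a collision times its severity."""
    return m.p_collision * m.severity

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the candidate maneuver with the lowest expected harm."""
    return min(options, key=expected_harm)

if __name__ == "__main__":
    options = [
        Maneuver("brake hard in lane", p_collision=0.40, severity=2.0),
        Maneuver("swerve toward parked car", p_collision=0.60, severity=1.0),
        Maneuver("swerve toward pedestrian", p_collision=0.50, severity=10.0),
    ]
    print("lowest expected harm:", choose_maneuver(options).name)
```

With these made-up numbers the planner picks the swerve toward the parked car rather than the pedestrian; the point is only that the trade-off is made explicit, not that these particular weights are right.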

NTSB doesn't govern where Uber can test. Their report was substantially flawed, and it's doubtful very many states will pay any attention to it. That said, sure, companies can and do earn their rights back.

It seems like you do know what I mean by "doesn't work." All the current products in testing don't work, as in, are not ready for production release, except perhaps Waymo, but only barely. And except for the ADAS products that are already in production, of course. It's wrong for any of those companies with completely unfinished products to be doing significant safety driver testing on public streets. It shouldn't be illegal, as I have explained, except maybe for Uber, so long as the safety drivers are properly trained. But it's the wrong approach.

You don't teach a human how to drive by throwing a five-year-old on the roads and having someone grab the wheel whenever they make a mistake. Doing so would be dumb in addition to being immoral. And a five-year-old is a lot smarter than Uber's self-driving software.

That's a pretty bold pronouncement about one of the oldest and grandest debates in moral philosophy. As I point out, societies and laws are often consequentialist, and most management of risk is done that way.

"try to cause the least amount of harm" sounds pretty consequentialist.

And as I said, we do risk math in a much more utilitarian way. The data show current road testing, when done to certain basic standards of care, is slightly less risky than ordinary driving, which is done for a million lesser reasons than getting us safer cars. I don't think that's wrong.
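To illustrate the microrisk arithmetic being invoked here: the idea is to compare expected harm per mile directly, in units like micromorts, rather than reasoning from individual tragedies. The sketch below does that with placeholder rates; the numbers are assumptions for illustration, not figures from the essay or from any fleet's data.

```python
# Illustrative microrisk comparison. Both rates below are placeholder
# assumptions, not measurements from any real fleet or dataset.
FATALITY_RATE_ORDINARY = 1.2e-8    # assumed fatalities per mile, ordinary human driving
FATALITY_RATE_SUPERVISED = 1.0e-8  # assumed rate for testing with trained safety drivers

def micromorts(miles: float, rate_per_mile: float) -> float:
    """Convert miles driven at a given fatality rate into micromorts
    (a micromort is a one-in-a-million chance of death)."""
    return miles * rate_per_mile * 1e6

miles = 10_000
print("ordinary driving:  ", micromorts(miles, FATALITY_RATE_ORDINARY), "micromorts")
print("supervised testing:", micromorts(miles, FATALITY_RATE_SUPERVISED), "micromorts")
```

Framed this way, the question becomes whether the supervised-testing rate really sits at or below the ordinary-driving rate, which is an empirical matter rather than a trolley-style dilemma.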

Consequentialism is "the doctrine that the morality of an action is to be judged solely by its consequences." A moral doctrine that takes the consequences of actions into consideration along with other things is not consequentialism. It's not partially consequentialism. It's something other than consequentialism.

Consequentialism is wrong. Just because a debate is old and grand doesn't mean I can't express my belief about it. Feel free to express your disagreement, if you'd like.

I don't think that safety driver testing, when done in the early stages of development, "gets us safer cars." In fact, I think it wastes time and money that could be more wisely spent. So I think it's wrong. There are other reasons that people drive cars that are also wrong. Not necessarily illegal, but wrong.

I understand that it is your view that consequentialism is wrong, but that's an assertion of opinion, not a generally accepted fact. And you're doing the no-true-Scotsman thing here, because it is possible to talk about a spectrum from pure consequentialism to mostly consequentialist to purely deontological. It is a worthwhile spectrum to entertain. Saying that unless somebody is 100% pure they are following a different system isn't very useful; we would just have to lay that other system out as a spectrum anyway.

In this decade, all the teams started out on private test tracks. Well, almost all. At a certain point they all decided that to get real progress, they needed to go out on the real roads. Building simulators able to get real results was a long way in the future at that time -- and a big project that most teams have yet to get into a really good state. By speeding up the work, it sped up the path to safer cars. If that's just my opinion, it's a pretty widely held one.

The definition I quoted was from the OED. The Stanford Encyclopedia of Philosophy says, "Consequentialism, as its name suggests, is simply the view that normative properties depend only on consequences." The key word there is "only." Of course consequences matter, but consequentialists say that it is only consequences that matter. I'm not sure how you could redefine consequentialism as a spectrum. If this is more than a theory that you've come up with on your own, I'd be interested in a reference.

I don't think that the wrong paths that many teams went down several years ago sped things up. That may be a minority opinion, but it's mine.

Learning what doesn't work isn't a wrong path. It's part of R&D. However, they also learned lots of things that do work, things they needed to make work in their systems.

And while 99% of people who have actually done it don't agree that they could have learned everything they learned in a simulator, I don't think anybody thinks it could have been learned as fast.

As for a spectrum -- a philosophy can depend on consequences to a lesser or greater degree, of which "only" depending on them is just the most extreme degree. It's one reason that people will say things like "You should not murder one person to save 10" but are not so ready to say it on "You should not murder one person to save a billion." The consequences do matter, but just not as completely as for the person who would murder one person to save 2.

I never said anything about "simulator." Simulators are even more useless than "safety driver testing."

Some teams took very wrong paths. To some extent, all teams, at least all teams that have been around a while, have taken wrong paths. A wrong path is when you build up a large codebase that you eventually just throw away. AI projects are filled with wrong paths, and the revolutionary progress that has been made in a relatively short period of time has not been made from testing self-driving cars with safety drivers. In fact, the vast majority of it is not specific to self-driving car technology at all. The software is vastly better, and the hardware is vastly better, and it would have been vastly better regardless of whether or not a single safety driver ever drove a development vehicle.

99% of people who have actually done what? 99% of people who have failed to build a self-driving car that is actually saving lives? No one has succeeded at building and deploying a self-driving car that is actually saving lives yet. The people who work at Tesla have succeeded in building an ADAS that is actually saving lives. But they didn't use "safety driver testing," at least not in any significant amount. They didn't use much in the way of simulator testing either. They've primarily tested their system the right way -- shadow testing.
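For readers unfamiliar with the term, "shadow testing" generally means running the experimental system in parallel with the human driver, never letting it actuate anything, and logging where its decisions diverge from what the human actually did. Here is a minimal, hypothetical sketch of that pattern; the class names, fields and thresholds are invented for illustration and do not describe Tesla's or anyone else's actual pipeline.

```python
# Hypothetical sketch of shadow-mode evaluation: the candidate planner runs
# alongside the human driver, never controls the vehicle, and material
# disagreements are logged for offline analysis. All names and thresholds
# are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Plan:
    steering: float
    braking: float

@dataclass
class Frame:
    timestamp: float
    human_steering: float  # what the human driver actually did
    human_braking: float

class ShadowPlanner:
    """Stand-in for the experimental planner; it only proposes, never actuates."""
    def plan(self, frame: Frame) -> Plan:
        # A real planner would use sensor data; this stub just proposes no action.
        return Plan(steering=0.0, braking=0.0)

def shadow_step(frame: Frame, planner: ShadowPlanner) -> Optional[dict]:
    """Compare the planner's proposal with the human's action; log big gaps."""
    plan = planner.plan(frame)              # proposal only, never sent to actuators
    steer_gap = abs(plan.steering - frame.human_steering)
    brake_gap = abs(plan.braking - frame.human_braking)
    if steer_gap > 0.1 or brake_gap > 0.2:  # arbitrary disagreement thresholds
        return {"t": frame.timestamp, "steer_gap": steer_gap, "brake_gap": brake_gap}
    return None                             # agreement: nothing to log

if __name__ == "__main__":
    frame = Frame(timestamp=0.0, human_steering=0.0, human_braking=0.5)
    print(shadow_step(frame, ShadowPlanner()))  # logs a braking disagreement
```

The design point is that the public is exposed only to the human's driving; the software's mistakes show up in the logs rather than on the road.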

As far as your theories on philosophy, I'll stick to what I've read from actual philosophers.

Anyone who says "you should not murder one person to save a billion" has a messed up definition of "murder." If it's the right thing to do, it's not murder, it's justifiable homicide. In any case, as I've already said, of course consequences matter. Consequentialists say that it is only consequences that matter. Those who say that consequences matter, but are not the only thing that matters, are, according to the definitions from the OED and the SEP, non-consequentialists.

Your definition may vary from those sources, which match pretty much everything I've ever read about consequentialism from actual philosophers. That's fine, and now that I'm aware of that I'll read what you say with that in mind.

(I should probably add that it's only in fantasy that you ever have the choice to "murder" one person in order to save a billion. Consequentialists, especially utilitarians, like to set up these unrealistic situations in order to trick people. If the world actually worked in the way these hypothetical worlds worked, consequentialism and utilitarianism might make sense. But that's not how the world actually works. The way the real world works, if you think you can save a billion by murdering one, chances are you are a delusional sociopath. And that's not just a hypothetical. History books are filled with evil people who committed murder under the delusion that they were doing so for the sake of the greater good. Yes, it's bold of me to say that consequentialism is evil. But it's not just a hypothetical claim.)

You have a more charitable view of Tesla than most. Tesla does some shadow testing, and some simulator testing, but most of their testing was done using customers supervising the cars, and people were deeply shocked when that was first released, though we're used to it now.

I agree that murder to save a billion is indeed a fantasy hypothetical. The most common scenario (common in fiction, rarer in real life) is the ticking-time-bomb torture scenario. And there are people who say "torture is evil and wrong but is justified if it will save many lives."

I am not denying that consequentialism in its pure form says only consequences matter. I am saying that somebody's moral decisions can be closer or further from that pole, and be called more and less consequentialist because of it.

As I said, you can call it whatever you want.

People who say "torture is evil and wrong but is justified if it will save many lives" might have derived that rule through consequentialism. They might have derived it from deontological reasoning. Or maybe they're just being pragmatic. Possibly they're just confused or ambivalent. Or maybe it's something else.

Torture of whom, incidentally? Unless we're talking about torture of an innocent person, it seems to me that "torture is evil and wrong but is justified if it will save many lives" is just a euphemism for "torture is not inherently wrong." If we are talking about torture of an innocent person, now we're in the realm of unrealistic fiction, I think.

But if we want to know if this is consequentialism or something else, the question is why. Why is torture wrong if it will not save many lives? Is it because of the consequences of allowing torture? Is it because of an edict from God? Is it because of a derivation from the categorical imperative? Is it because such a rule is pragmatic? The answer to why determines if it's consequentialism or not.

Yes, the scenarios with torture of the innocent are fictional, though of course history is completely full of scenarios where people murdered the innocent to better their own group. Of course we often call that evil from our vantage point but they didn't think so at the time. Even today, the Chinese are torturing, imprisoning etc. the Uighurs "for their own good" and for the good of China. They say the torture was unintended but they broadly defend the reeducation camps.

Usually the rules in a non-consequentialist philosophy say an act is evil no matter how much good it generates. What people seem to do is say, "I accept I will be to some degree evil, because the benefit is just so good." Which is to be a consequentialist.

What is fictional is a scenario where you have to torture an innocent person to save billions.

What is not fictional is a scenario where someone thinks they can save billions by torturing an innocent person.

This, of course, is one of the reasons why consequentialism is evil.

"Usually the rules in a non-consequentialist philosophy say an act is evil no matter how much good it generates."

I have no idea where you're getting that from.

"What people seem to do is say, 'I accept I will be to some degree evil, because the benefit is just so good.' Which is to be a consequentialist."

Sounds more like pragmatist to me. To be a consequentialist doesn't mean that you accept evil. It means that an act that has good consequences (or, in rule consequentialism, an act that follows a rule that has good consequences) is not evil in the first place.

As I've said, you can use the term "consequentialist" however you want. But the way you're using it is not the way that others use it.

Well, you are not quite correct there. You are correct that to formally qualify as consequentialist, a theory only considers consequences. You are incorrect in ignoring that many people use a much looser definition than that, whether you agree with their loose definition or not.

However, I am not attempting to loosen the definition. Rather, I am saying that most people's moral theories are not pure at all, and it is possible to judge them in other than a binary way on how consequentialist they are. That is, it is useful to say that one theory is more consequentialist than another, which you can't do if you declare a theory either precisely consequentialist or not at all. If you prefer, we could say that you have somebody with a technically non-consequentialist moral theory who nonetheless borrows to a lesser or greater degree from consequentialism (and, more to the point, does so for consequentialist reasons, i.e. that one thing has better end results than another). I find this useful. You may not.

Thus a person might have a system that mostly looks like utilitarianism, but contains exceptions, and another person might have a system that mostly looks like a Rawlsian veil, but deviates a bit etc.

Who are these "many people"? They're not philosophers, are they?

I don't see anything useful in saying that one theory is more consequentialist than another. I don't even understand what that would mean. Are you saying "theory" when you mean "set of behaviors"?

Most people don't have moral theories.

Thanks for your attention to this issue; you have done a very good job presenting differing approaches. I do suspect, though, that in the case of teen drivers, the experience gained is only part of the equation in their becoming better drivers. Executive functioning plays a big role also, and I have heard that many men especially do not mature in this until age 24. Reduced executive function can also play a role in the decision to “take the keys away” from the extreme elderly, and they have a tremendous amount of experience. Not sure what the analog of executive function is for robocars or how to develop it, but your persuasion could be more powerful if you acknowledge this as a factor.
