Consequentialism: harming one to save many

Some moral questions seem obvious. Others don't. Moral philosophy, at the level we'll be discussing it in the next few lectures, is about the questions that aren't obvious, and sometimes about the questions that just might not be as obvious as they seem. Later, we'll discuss things such as whether there's really any such thing as right and wrong, but for now I want us all to adopt the mindset people adopt when they go onto newspaper op-ed pages or blogs and debate the morality of things like abortion or torture.

I'll start by presenting a moral dilemma that is not only not obvious, but may seem bizarre. I do this to see what your gut instincts are, before you've had any moral philosophy. Here it goes:

There are six people trapped on a trolley track. A trolley is heading towards them, and it will kill them if you don't do anything. You have several options:

(1) You can do nothing, guaranteeing that the six people die.
(2) You can flip a switch, diverting the trolley onto a second track and allowing the six people to live. However, there are three people trapped on that other track. Flipping the switch will save the six, but kill the three.
(3) You can flip a second switch, redirecting not the original trolley but a second trolley, which will hit the first and derail it, saving the six. However, there are two people trapped in that second trolley, and, the way the trolleys are built, taking this option will guarantee their deaths.
(4) You can use a remote control that you have somehow gotten hold of to take control of the roller skates of a very fat man, up near where the first trolley currently is. Using the remote control, you can send him into the path of the trolley. His bulk is great enough to stop the trolley, no doubt about that, but you are also sure that doing this would kill him.

I emphasize: these are your only four options, and in each case the stated outcome is guaranteed. What do you do?

Now that I've got your responses, let me explain why I would ask such a bizarre question. I guess the story starts in the late 18th century with a guy named Jeremy Bentham, who argued that the right thing to do is whatever produces the greatest balance of good over evil. His ideas were further developed by John Stuart Mill in the middle of the 19th century. This view has been called "utilitarianism" or "consequentialism." "Utilitarianism" tends to be associated with the view that "good" means a balance of pleasure over pain (Bentham's idea) or perhaps some more sophisticated "happiness" (Mill's idea). More recently, philosophers have taken to talking about "preference satisfaction," and a lot of other things besides. "Consequentialism" is like "utilitarianism," except it tends to be more neutral with regard to the question of what the goods are. "Consequentialism" is the term I'll mainly be using in these lectures.

Let's assume that whatever the basic good things are, all the people involved in the above moral dilemma stand to get them in equal amounts if they survive your involvement in it. Given this assumption, if you accept consequentialism, you should kill the fat man on roller skates: doing nothing costs six lives, the first switch costs three, the second switch costs two, and the remote control costs only one, so that option allows the most people to survive.

Now we can explain the history of the convoluted moral dilemma I gave above. Consequentialism has long been thought objectionable, because there seem to be cases where it would be okay to let someone die for the sake of some greater good, or to prevent some greater evil, but it would not be okay to kill someone for the same purpose. This is called the doing/allowing distinction. Consequentialism says the doing/allowing distinction makes no moral difference in itself.

Well, then, in the '60s and '70s, the ethicists Philippa Foot and Judith Jarvis Thomson developed an example which seemed to show that this couldn't be the whole story. The example was like the one I gave you, but simpler: your only options are to do nothing, or to deflect the runaway trolley towards a smaller number of people. Originally the numbers were five and one. Note that the idea of having to choose between different people's lives isn't, in and of itself, that far out there: for example, in the aftermath of Hurricane Katrina, a National Guard member was quoted as saying, "I would be looking at a family of two on one roof and maybe a family of six on another roof, and I would have to make a decision who to rescue."

Now the original, simple trolley problem, where you just have the option of redirecting the trolley from the five to the one, is striking because it seems like in that case, it's okay to actively cause the death of someone to save a larger group of people. In psychological studies, 90% of people agree that in this case it's okay to kill the one to save the five. But there are still cases where that doesn't seem okay, such as when you have the option of framing and executing someone to stop a deadly riot, or killing someone to harvest their organs to save five other people. Another case where it seems like it isn't okay to kill one to save five is when there are five people trapped on a track, and your only option is to push a fat man onto the track to stop the trolley. When ordinary people, people who haven't thought about it much before, are given that question, 90% of them say that in that case, it isn't okay to kill one to save five.

The question about the general principles behind these examples isn't purely academic. A lot of people have thought that in at least some cases, it's okay to do something that would normally be wrong because the consequences of doing it will be good. For example, in WWII the U.S. air force killed over a hundred thousand Japanese civilians, most famously through the atomic bombs, but also through large-scale firebombing. Some people argue that this was justified, because even more people would have died in a land invasion. Or, some people argue that it's OK to torture terror suspects for information, if doing so is likely to save a significant number of lives.

People argue about whether the facts of these cases really are as alleged. But suppose we accept that these really are cases where killing civilians, or torturing someone, could save an awful lot of lives. The consequentialist answer is that yes, you should do what it takes to save the greater number of lives. And the consequentialist can press the point by imagining a case where the number of lives you stand to save is very great: where a land invasion would cost the lives of a million U.S. soldiers alone, in the Japan case, or where the information you need from the terror suspect is how to stop a nuclear weapon set to go off in a major city. As an aside, if you don't think the cost of a land invasion would have been very important to avoid, I recommend watching any of the more realistic war movies, say the opening sequence of Saving Private Ryan. That should convince you that a land invasion costing a million lives really would have been a horrible thing.

However, you can accept the consequentialist's point about the extreme cases without accepting consequentialism. You can accept that the ends justify the means in extreme cases without admitting they do in every case. It is interesting to note, though, that many leading critics of consequentialism have not been willing to allow that extreme possibilities count for much. The philosopher G.E.M. Anscombe, who coined the term "consequentialism" in order to criticize it, actively opposed an attempt to give Harry Truman an honorary degree at Oxford, because she thought nothing could justify the dropping of the atomic bomb.

A final thought on these cases: we should worry that national bias could skew our gut reactions here. Killing civilians and torturing prisoners aren't things we're inclined to subject to cost-benefit analysis when our enemies do them, yet we're more willing to when the U.S. government does. This bias doesn't prove anything in and of itself, but it is reason for caution.

Now, the pushing-the-fat-man-to-stop-the-trolley cases make us want to formulate some kind of principle that will rule that out, but will still allow us to deflect a trolley to where it will kill fewer people. And this is where the very first moral dilemma, from the philosopher Peter Unger, comes in. Unger is very sympathetic to consequentialism, and his idea was that if you give people an option in between the push-the-fat-man option and the redirect-the-trolley option, they would see that you can't really draw a clear line between them.

Consider a further illustration: some people think the difference between push-the-fat-man and redirect-the-trolley is that in one case, the person who dies is a means to your end, and in the other, their death is merely a side effect of your action. The underlying idea is called the "doctrine of double effect": roughly, that you can't kill someone as a means to a good end, but you can take an action which will have their death as a merely foreseen side effect. The doctrine of double effect has an answer, of sorts, to Unger's complicated moral dilemma: even when you're redirecting the trolley with the two people in it, you're still using them. So redirecting that trolley is impermissible.

How many find this result plausible? How many think that yes, it might be okay to redirect a trolley in one way to save lives, but not to redirect a different trolley in a different way to save lives? How many don't find that plausible? There are some distinct worries you might have here: are you using the people in the trolley, or just the trolley? Does that distinction even make sense? I confess I have some doubts.

But there may be an even better way of attacking the doctrine of double effect: imagine a version of the trolley problem like the switching problem, except that the track loops around in some way. There are a couple of different ways to do this:

Full loop: The leg of the track with the one person, and the leg with the five people, are connected around back. The one person is fat enough to stop the trolley, and the five are numerous enough to stop it. But if you sent the trolley around one end of the loop and there wasn't anyone there, it would curve around back and kill the people on the other end.

Half-loop: In this version, if you sent the trolley towards where the five are, but they weren’t there, no one would die. However, the one fat man is on a side track, which rejoins the main one before the point where the five are trapped. So if you sent the trolley onto the fat man’s side track, but he wasn’t there, the trolley would continue on to kill the five.

In both cases, we're assuming that if you do nothing, five people die. Now, how many people think you could flip the switch and kill the one in the full-loop case? What about the half-loop? In one study using the half-loop, people were divided about evenly on what you can do. It's easy to see why people would be divided: the case plays havoc with our double-effect-type intuitions. In a way, it seems like if you send the trolley towards the one guy, you're using him to save the five. Yet how can an added length of track that never actually comes into play change our moral judgments? Why should it?

It's worth trying to summarize the logic of these situations. The anti-consequentialist case is simple. It says, "Here's a case where, if consequentialism is true, you ought to kill someone. But clearly you ought not kill the person in that case. Therefore, consequentialism is false." The logic is impeccable: if the first two claims are true, the conclusion must be true. The consequentialist case is a little trickier. Rejecting consequentialism doesn't commit you to any particular positive view, just to the claim that consequentialism goes wrong somewhere. But the consequentialist can look at specific anti-consequentialist claims and try to provide cases where they would seem clearly false. Thereby, the consequentialist can try to cast doubt on whether any good form of anti-consequentialism can be stated. You get a sort of rhetorical standoff, one that comes down largely to how strong your gut reactions to various cases are.
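If it helps to see the form laid bare, here is one way to render the anti-consequentialist argument in standard logical notation. The letters are just labels I'm introducing for illustration: C for "consequentialism is true," K for "you ought to kill the one in this case."

\[
C \rightarrow K, \qquad \neg K \;\;\therefore\;\; \neg C \qquad \text{(modus tollens)}
\]

The form is valid; as noted above, the real dispute is over whether the second premise holds in the case at hand.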

I admit I’ve relied on some pretty exotic examples. Next up, we’ll continue with the theme of consequentialism with cases that hit a little closer to home: abortion, euthanasia, and famine relief. Then I’ll try to say something about what role abstract principles and reasoning can have in our real-world judgments.

Book resources:
Peter Unger, /Living High and Letting Die: Our Illusion of Innocence/. New York: Oxford University Press, 1996.
Marc D. Hauser, /Moral Minds: The Nature of Right and Wrong/. New York: Harper Perennial, 2006. (On moral psychology; source of the National Guardsman quote.)


10 Comments.

  1. I tend to think actively killing people is worse than passively letting them die. Am I responsible for the deaths of all those who starved that I could have afforded to feed while still retaining enough for myself?

  2. Chris Hallquist

    I guess one thing I should have made clearer: You can think that the doing/allowing distinction is sometimes important, without thinking it's the whole story of what's wrong with consequentialism (and therefore, for example, think it's not key to the trolley problem). Next week will have more doing/allowing stuff.

  3. I think it’s ironic that the Nagasaki bomb was called Fat Man.

  4. I think there is some consideration (even if we try to ignore it) of criminal responsibility. I know if I kill the fat man I’d face a serious risk of jail – so I want my morality to instruct me not to do it so I don’t have to go to jail.

    same thing for Josh’s feeding the poor issue.

    Do we have the same intuitions where the chance of danger is very low (below that of criminal responsibility) or the magnitude of the harm is very low?

  5. * I note those studies that show huge shifts in public opinion on ethical issues when the law changes.

    ** I also note that laws by their nature need to be a bit deontological even if you're trying to achieve consequentialism.

  6. It seems to me that deontological moral systems violate intuition under extremes more than consequentialist ones: If something is "just right" or "just wrong" then is it not the case that just thinking it makes one "good" or "bad" and deserving of punishment?!?

    Seems to me that deontological systems, in the extreme, end up at thought crime!

    How is that better than the extremes of consequentialism?

  7. I got lost in the description of the full-loop and half-loop cases – anyone got a diagram?