The sacrifice of the innocent

We can choose the lesser of two evils,
but we cannot always avoid regret

by Jeremy W Bowman

[This paper was originally delivered as an informal address to the Icelandic Philosophical Society, September 11, 1994]

Is it ever morally right to sacrifice an innocent person for the greater good of some other people? Our immediate reaction might be to say No. But I will argue that in some circumstances, which are not especially unusual, it is morally right. In the most ordinary sense of the word, sacrifice is often morally obligatory.

Cases in which an innocent person is sacrificed are philosophically interesting, but academic philosophers don't like to discuss them unless they can be used as obvious examples of moral wrongdoing. The usual way in which the sacrifice of an innocent person enters philosophical discussion is as follows. We "test" our moral theories by examining what they tell us about how we ought to behave. If a moral theory tells us to sacrifice an innocent person in a situation where it is obviously morally wrong, then we must abandon that moral theory and look for a new one.

That is fine as far as it goes, but it doesn't go far enough: it only tells half the story. Below, I will give a few examples of situations in which the sacrifice of the innocent is not obviously morally wrong. Regrettable and tragic though these cases of sacrifice certainly are, they are actually morally right. Any such case must be regarded as the "test" of a moral theory, just as much as the more familiar "tests" in which sacrifice is morally wrong. For a moral theory to be acceptable, it mustn't just tell us not to sacrifice innocent people when it is obviously morally wrong to do so — it must also tell us that we should sacrifice them when it is morally right to do so. So a moral theory can fail such a "test" in at least two distinct ways. And when a moral theory is mute — where it offers no guidance at all on awkward matters of life and death — we must say that it fails in yet a third way. We look to it for help, but it doesn't give us any.

Most of the moral theories currently in circulation fail in this third way. It seems that their proponents would rather keep silent than admit that we are sometimes morally obliged to do very awkward, painful, or unpopular things. That partly explains the enduring popularity of such theories: they are pleasantly undemanding. Academic philosophers usually overlook this sort of failure because of a narcissistic assumption that morally good behaviour means never having to "dirty one's hands" with a cold or heartless-looking act.

However, I believe there is a moral theory that doesn't fail in any of the ways just described. And it has numerous other virtues as well. It is called "preference utilitarianism", and one of my main objectives is to explain and to defend it.


What is a Moral Theory?

I have been using the expression 'moral theory' over and over again. What do I mean by a moral theory? — When we make reasoned moral judgements, we implicitly appeal to our basic moral principles. A moral principle works a bit like the definition of a word — a definition of the sort that can be used to make decisions about whether the word in question properly applies. For example, suppose we come across a quince for the first time, and wonder whether it is a citrus fruit. The issue might be settled by appealing to an explicit definition of 'citrus fruit', which tells us that citrus fruits have segments. But a quince doesn't have segments, so the term 'citrus fruit' doesn't properly apply to it. In its modest way, this is a sort of discovery, and it was facilitated by an explicit definition of a word.

In effect, a moral principle is an explicit "definition" of morally right action, and we can use it to judge whether any particular act falls under that heading. So if we are contemplating doing something, and wonder whether it is morally right or wrong, an appeal to a moral principle might decide the matter. The question whether a quince is a citrus fruit was decided in a similar way by the definition of 'citrus fruit'. But just as a definition must be rejected if it fails to agree with the way a word is properly used, a moral principle must be rejected if it fails to agree with our most strongly held moral intuitions.


Divine Commands Theory

An example may help here. Many people — perhaps the majority of the world's population — base their moral judgements on (what they understand as) God's commands. In other words, they accept the following principle:

(DC) An action is morally right if and only if God commands it.

To use this principle to make moral judgements, whoever accepts it would have to have some additional beliefs. For example, a Jew or a Christian would normally believe that God exists, that God is for the most part benevolent, that the Ten Commandments are an expression of God's commands, etc. And he would probably have some further views about why we ought to do what God commands, about the nature of morality, and so on. These additional beliefs and assumptions are the "packaging" that has to be accepted along with the central principle (i.e. DC, above). Its finer details may differ slightly from one individual to the next — for example, one person may think that God is all-good but not all-powerful, while another thinks he is all-powerful but not all-good — but the central principle (DC) is the shared core.

Every central principle — moral or otherwise — goes along with some extra "packaging" like that. In other words, every such principle is embedded in a theory. This is perhaps clearest in the case of scientific law. No scientific law can ever be adopted or applied on its own. It can only be used in conjunction with some further laws and assumptions about the world. For example, take Newton's Law of Gravity (often called the "Universal Law of Gravitation"). No one could apply this law on its own to make predictions about falling apples, orbiting planets, etc. Newton himself applied it in conjunction with his three other laws of motion, some further "Newtonian" assumptions about the nature of space and time, and some mathematical techniques for applying them to the real world. Together with his central Law of Gravity, these further laws and assumptions formed an integrated "package" — which we may as well call Newton's "theory" of gravity.

It is because moral principles come in "packages" like scientific laws that I have been using the word 'theory' to refer to any such "package" as a whole. But we mustn't take the analogy between moral and scientific theories too far. Scientific laws (like Newton's law of gravity) are hypotheses that purport to describe the world. That is, they are attempts to "guess" at the way things actually are. It is reasonable to suppose that a scientific hypothesis is either true or false, depending on whether it describes things accurately or otherwise. But many philosophers doubt that moral principles are true or false in that straightforward way. On the face of it, a moral principle seems to prescribe human behaviour like an order (e.g. "Shut the door!") rather than describing the way things actually are like a declarative sentence (e.g. "The door is shut."). The appropriateness or otherwise of an order (i.e. a prescription) seems to have a different flavour from that of the truth or falsity of an ordinary sentence (i.e. a description).

The "package" I am trying to sell — preference utilitarianism — contains a central moral principle, and some packaging, as with every theory. But the packaging contains some quite radical revisions to our familiar ways of thinking about right and wrong. It also contains some radical revisions to our traditional ways of thinking about the mind. Much of the rest of this essay is concerned with explaining and defending these revisions against the parochialism and reactionary attitudes of our moral traditions.

Let us return to the "test" of a moral theory. The moral theory just described (whose core principle is DC, above) is called the "divine commands" theory. I chose it as an example because I think it can be used to show how we reject a moral theory if it tells us to do something that our moral intuitions tell us is obviously wrong. How might this happen?

Consider the Old Testament story of Abraham and Isaac. Many people feel uneasy about the "divine commands" theory when they think about that story. The theory says that whatever God commands is morally right, so when God commanded Abraham to sacrifice his son Isaac, the theory says that that particular act must be morally right. But, many people feel, an action of that kind couldn't conceivably be morally right: so the theory must be wrong. The intuition that it is wrong for a father to sacrifice his son seems to be more powerful, or more trustworthy, than the opposing intuition that morality is a matter of following God's commands.

Perhaps that explains why philosophers have never taken the "divine commands" theory very seriously. Quite apart from the story of Abraham and Isaac, which illustrates how arbitrary God's commands might be, most philosophers think that there must be some independent way of telling which human actions are right or wrong. (If there weren't, how could we tell with any confidence that they were God's commands rather than Satan's?)

So here we leave the divine commands theory. The important point is this. A proposed "sacrifice of the innocent" was felt to be so morally repugnant that it led to the rejection of the theory that proposed it.

As it happens, the very same pattern recurs, but this time with a theory that philosophers do take seriously: classical utilitarianism.


Classical Utilitarianism

The next moral theory I want to consider is very simple, in the sense that it can be learned in a minute or two. Yet it is also very powerful, in the sense that it enables us to make moral judgements about almost any human action at all. These are both highly attractive features of any theory.

And it has other attractive features as well. One such feature is that it explains how a person might be motivated to do the morally right thing. Utilitarianism's most famous modern proponents, J.S. Mill and Jeremy Bentham, were justly very proud of this feature of their theory.

Here is how it works.

Look at any individual human action, taking account of the circumstances in which it is performed. Try to estimate the likely consequences of the action. Some actions lead to an overall increase in the amount of pleasure (or a decrease in the amount of suffering) for all those affected: these actions are morally right. Other actions lead to an overall decrease in the amount of pleasure (or an increase in the amount of suffering) for those affected: these actions are morally wrong.

Sometimes every possible course of action, including that of not acting at all, leads to a decrease in pleasure or an increase in suffering. In such a case, the morally right action is the "least of a number of evils".

Since nothing matters except an action's consequences, the morality of any action is just the same thing as its utility, or "usefulness" in bringing about the specified end. For classical utilitarianism, a kind of hedonism, the end is pleasure, or the elimination of suffering. It is aimed at pleasure in general, where everyone affected counts for the same. (The agent doesn't count for more or less than anyone else, so this is not "selfish" hedonism.)

Classical utilitarianism's basic moral principle, its "definition" of morally right action, would go like this:

(CU) An action is morally right if and only if it tends to maximize pleasure.
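To see how (CU) is meant to be applied, here is a minimal sketch, in Python, of the kind of calculation it calls for. The actions, the affected parties, and the numerical "pleasure" scores are invented purely for illustration; the theory itself does not say how such quantities are to be measured.

```python
# A toy "hedonic calculus": each candidate action maps every affected
# party to an estimated change in their pleasure (positive numbers) or
# suffering (negative numbers). (CU) then selects whichever action
# maximizes the total. If every total comes out negative, the same rule
# simply picks the least bad option: the "least of a number of evils".

actions = {
    "give to charity":   {"donor": -1, "recipient": +5},
    "keep the money":    {"donor": +2, "recipient": 0},
    "do nothing at all": {"donor": 0,  "recipient": -1},
}

def net_pleasure(consequences):
    """Sum the estimated pleasure changes over everyone affected."""
    return sum(consequences.values())

best = max(actions, key=lambda name: net_pleasure(actions[name]))
print(best)  # "give to charity", on these invented numbers
```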

Why would a person adopt such a principle? Or, in other words, what is the motivation for acting morally? We all enjoy our own pleasure, and hate our own pain. It is a short step from here to saying that we always want to get as much pleasure as we possibly can, and to avoid pain whenever we can. Furthermore, many of us have "fellow feelings" for other creatures. Those among us who have these feelings will want to try to maximize pleasure, no matter whose pleasure it is. In other words, they will want to do the morally right thing.

Not everyone feels that way, of course, but perhaps the right sort of influences in childhood might help bring more people to feel that way in the future.

Let us now apply the theory to real life. Classical utilitarianism would say that the following sorts of action are almost always morally right:

· giving to charity

· being kind to animals

· maintaining a reasonably equitable distribution of goods

· giving individuals in society "private space" in which to pursue activities that they enjoy

· any sexual practice, as long as it is done in private between consenting adults

· voluntary euthanasia

So far, we have compiled a list that most modern liberals would be happy to endorse.

However, despite all these agreeable features, there is something terribly wrong with classical utilitarianism: like divine commands theory, it seems to tell us to do things that are obviously morally wrong. It says that, under certain circumstances, we ought to "sacrifice the one for the many".

For example, imagine a friendless, homeless down and out, who suffers the mental anguish of his lot and, let's say, the physical pain of an illness. He is a burden to society and he suffers a great deal. However, he has a healthy liver, kidneys, heart, and other internal organs. Consider the action of killing him painlessly, without warning, in order to use his body parts for transplanting, to relieve the suffering of a number of others. Since this action would lead to a decrease in the net amount of suffering of those affected, and to an overall increase in pleasure, classical utilitarianism would have to say that such an action is morally right.

I don't know of anyone who would be prepared to accept this conclusion.

This problem has led most present day philosophers to reject utilitarianism, and to adopt a very different sort of moral theory, usually one that takes morality to be essentially a matter of following abstract rules and regulations.

Now I do not think that morality has much to do with following rules and regulations. Utilitarianism does not rely on such things, so I would like to see it overcome this problem somehow.

But we mustn't underestimate our difficulty here. Classical utilitarianism doesn't just tell us to sacrifice people's lives. It also seems to demand the sacrifice of freedom and happiness, as long as the numbers of those adversely affected are relatively small. For example, imagine a slave society in which a small number of slaves serve a large number of slave owners, thereby bringing them great amounts of pleasure. If, on balance, the outcome of this arrangement is more pleasure and less suffering than any other arrangement, then classical utilitarianism seems to say that it is morally right.

I take it that this result is completely unacceptable to everyone. It would certainly not have been welcomed by J.S. Mill or Jeremy Bentham, both vigorous opponents of slavery.

However, I think a very simple solution to the problem is already available. We should not reject utilitarianism, but modify it. The modification is very basic, but it can be made cleanly. The modified theory is even simpler and more powerful than its predecessor. It explains the motivation for moral action better than its predecessor. And it meshes with the best philosophical understanding of the mind that we have so far.

To the best of my knowledge, the modification was first explicitly spelled out in 1979 by Peter Singer in his book Practical Ethics. But the main idea can be found in the writings of J.S. Mill, and there are some intriguing hints in the writings of Edmund Burke. Singer, however, is the one who gives the theory a name: preference utilitarianism.


Preference Utilitarianism

Singer's moral theory is just like classical utilitarianism in most respects. Actions are still to be judged solely on their likely consequences. The morality of an action is still equated with its utility, that is, with its "usefulness" in helping to bring about some specified end.

The difference between the two theories is this. Classical utilitarianism was hedonistic: it aimed at pleasure. Preference utilitarianism, by contrast, aims at something rather different, namely, the satisfaction of desires.

So the modification is a simple alteration to the theory's basic moral principle. All we have to do is change our earlier "definition" of a morally right action as follows:

(PU) An action is morally right if and only if it tends to maximize the satisfaction of desires.

By "desire", I mean any state of the mind that causes us to move towards some more or less determinate goal. It might be anything from the mildest inclination to the most unshakable determination. It might be accompanied by powerful feelings of physical lust or emotional longing, or by a purely intellectual sense of duty, or anything in between. It might even be something completely unconscious, in the sense that the agent might not know that his own behaviour is in fact directed towards some goal or other. I take 'desire' and 'want' to be roughly synonymous, but we use many words to express desires.

The most important thing to notice about desires, so understood, is that they come in various strengths. This helps to explain why we have so many words for desire. A very weak desire might be a mere "inclination", whereas a very strong desire might be a "need".

If we aim to maximize the satisfaction of desires, following preference utilitarianism, we must take account of how strong they are, as well as taking account of how many people have them. To find out the morally right thing to do, we have to calculate which of various courses of action is likely to lead to the greatest overall satisfaction of desires. Sometimes this means satisfying one big desire and thwarting a number of lesser desires. But if the desires in question are of roughly equal strength, the numbers of those who have them will matter more.
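To make that weighting concrete, here is a minimal sketch of the calculation. The desires, their strengths, and the numbers of people holding them are invented for illustration only; nothing in the example settles how such strengths would actually be measured.

```python
# A toy preference-utilitarian calculation: each candidate action lists
# the desires it affects as (satisfied?, strength, number of people).
# (PU) selects the action with the greatest net satisfaction of desires,
# weighted both by strength and by how many people hold each desire.

actions = {
    "hold the concert in the park": [
        (True,  2, 3000),   # music lovers' desire to hear the band
        (False, 5, 60),     # neighbours' desire for a quiet evening
    ],
    "cancel the concert": [
        (False, 2, 3000),   # the same desires, satisfied or thwarted
        (True,  5, 60),     #   the other way around
    ],
}

def net_satisfaction(effects):
    """Add satisfied desires and subtract thwarted ones, weighted."""
    return sum((strength if satisfied else -strength) * people
               for satisfied, strength, people in effects)

best = max(actions, key=lambda name: net_satisfaction(actions[name]))
print(best)  # here the many weaker desires outweigh the few stronger ones
```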

"Preference" utilitarianism is so called because the strength of our desires is revealed in our preferences. In general, an agent has a stronger desire for A than for B if he prefers courses of action that bring A closer to those that bring B closer.

Let's take a few examples to illustrate this idea.

1. Adolf Hitler once remarked that he would prefer to have two or three of his own teeth extracted rather than go through another meeting with General Franco. Even if he was not being completely honest, this remark says quite a lot about Hitler's mind, by revealing the nature and strength of some of his desires.

2. In the 1960s, an American cigarette manufacturer adopted the following advertising slogan: "I'd walk a mile for a Camel". Even committed smokers would probably only walk a mile for a packet of twenty Camels. So the slogan suggested that Camel cigarettes are twenty times more desirable than they really are. The link between preference and strength of desire enabled the slogan to make this suggestion.

3. Here's a common enough scenario: a man drives home from work, bringing his weekly pay packet back to his wife and children. On the way, he passes various sources of pleasure and objects of desire — prostitutes, a bar, a restaurant, bookshops, etc. If he passes them by, it is because his desires for transient physical or intellectual pleasure are weaker than his desire for the security of his family.

Examples like these suggest a method for measuring the strength of a person's desires. Examine his behaviour, and see which desires override which.

We could adopt a very similar method for measuring the strength of materials, by seeing which can cut through which. For example, steel is stronger than wood, because it is always able to cut through it, but wood is stronger than butter, for the same reason, and butter is stronger than air.

Analogously, in a normal person, the desire to stay alive is stronger than the desire to have a new car, and the desire to have a new car is stronger than the desire to buy today's newspaper, and the desire to buy the newspaper is stronger than the desire to stay indoors all day.

We can apply this method of measuring the strength of desires to animals as well as to humans. How can we tell when a horse's desire to relieve its thirst is stronger than its desire to stay away from a fire? When it decides to run through the blazing stable in order to reach the water.
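The method just sketched, for materials and for desires alike, amounts to reading an ordering off from pairwise observations of what overrides what. Here is a minimal sketch of that idea, using the invented observations from the example above and assuming that overriding is transitive:

```python
# Given observations of which desire overrode which in actual behaviour,
# judge that A is stronger than B whenever a chain of such observations
# leads from A down to B (assuming overriding is transitive).

overrides = [
    ("stay alive", "buy a new car"),
    ("buy a new car", "buy today's newspaper"),
    ("buy today's newspaper", "stay indoors all day"),
]

def stronger_than(a, b, observations):
    """True if a chain of overriding observations links a down to b."""
    seen, frontier = set(), {a}
    while frontier:
        current = frontier.pop()
        if current in seen:
            continue
        seen.add(current)
        for winner, loser in observations:
            if winner == current:
                if loser == b:
                    return True
                frontier.add(loser)
    return False

print(stronger_than("stay alive", "stay indoors all day", overrides))  # True
```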

I would claim that preference utilitarianism is even simpler and more powerful than classical utilitarianism, and that it explains moral motivation better:

The modified theory is simpler because the concept of desire, as I have defined it, is simpler than the concept of pleasure.

The modified theory is more powerful because it can be applied to a wider range of situations: desire is more readily observable than pleasure. (I just described how to measure the strength of an animal's desire: but how could we possibly measure quantities of an animal's pleasure?)

Finally, the modified theory gives us a cleaner account of moral motivation. The previous version relied on a dubious claim that we always want to do things that bring us pleasure. But sometimes we defer pleasure, or do masochistic things. The new version relies on the trivial truth that we always want to do what we desire to do. If we have the sort of "fellow feeling" that makes us see the goals of others as desirable, then we have a reason to do the morally right thing.

This brings me to one of the central points of my talk. Some desires are so much stronger than others that we must regard them as being of a different order of magnitude. Just as a steel knife will cut through any amount of butter, no matter how great the quantity, the normal person's desire to remain alive is stronger than other normal people's desires to buy today's newspaper, even when added together. There is no multitude of people, however large, whose collective desire to buy today's paper could be greater than the normal individual's desire to remain alive.
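One way to make this "different order of magnitude" claim precise is to give every desire both a tier and a strength within that tier, and to compare aggregates tier-first, so that no quantity of lower-tier desires can ever add up to outweigh a higher-tier one. The sketch below is only an illustration of the claim, not an argument for it, and its tiers and numbers are my own invented assumptions.

```python
# Each desire is a pair (tier, strength). Tier 1 holds desires for life,
# liberty, and autonomy; tier 0 holds desires for comfort or convenience.
# Aggregates are compared lexicographically, highest tier first, so no
# number of tier-0 desires can ever outweigh a single tier-1 desire.

def aggregate(desires, tiers=2):
    totals = [0.0] * tiers
    for tier, strength in desires:
        totals[tier] += strength
    return tuple(reversed(totals))  # highest tier first for comparison

one_desire_to_stay_alive = [(1, 1.0)]
a_million_newspaper_desires = [(0, 1.0)] * 1_000_000

print(aggregate(one_desire_to_stay_alive) >
      aggregate(a_million_newspaper_desires))   # True
```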

If desires can be of different orders of magnitude as I have just suggested, but this fact is not widely recognized, then certain questions, which seem perfectly meaningful, may yet fail to have a determinate answer. For example: How much money would a normal person accept in exchange for the life of one of his children? There is no determinate answer to this question. In most people, the desire for the safety of one's children is immeasurably greater than the desire to be rich.

Desires for one's own life and liberty, and for the life and liberty of one's children, are typically immeasurably stronger than desires for creature comforts.

We can apply this idea to our earlier problem of the sacrifice of the friendless down and out as follows. If he is not suicidal, we may suppose that the strength of his desire to live is much greater than the combined strength of the desires of all those who would benefit from his death. Even if there were a hundred of them, or a thousand of them, his desire would be greater than all of theirs added together, because it is of a different order of magnitude. His desire is to continue living, while their desires are merely for greater physical comfort.

But much more important than his desire to live is his desire to retain his autonomy, to be the one who decides whether he lives or dies, and, if he is to die, when. We would have to take account of this desire even if he were suicidal. Indeed, many a suicide attempt might be understood as an assertion of autonomy, that is, as an act of deciding the hour and place and character of one's death for oneself. If this is true, it suggests that the desire to be autonomous, to control one's own life, is often even stronger than the desire to live.

This desire for autonomy is perhaps our most distinctively human characteristic. It is immeasurably stronger than most of the desires it might be set against in the sort of utilitarian calculations we have been considering.

How would our modified theory treat the case of the down and out? Preference utilitarianism says it is morally right to maximize the satisfaction of desires. Given their relative strengths, it is morally right to make sure that the down and out's desire is satisfied, while the transplant recipients' desires are thwarted. So, unlike classical utilitarianism, our new theory does not recommend the sacrifice of the innocent in this case.

Similar considerations apply to the example of the slave society. The desires of slaves to be free would always outweigh the desires of slave owners to keep slaves, however tiny the number of slaves or however large the number of slave owners. That is because, in general, the desire to be free and autonomous is immeasurably stronger than the desire to have other people do work for you. So preference utilitarianism says that slave societies are morally wrong.

If human beings were very different, and we did not have this overriding desire to retain control of our own lives, then actions which take a person's control away would not be so terribly wrong. But taking us as we actually are, preference utilitarianism does say that such actions are terribly wrong. This is why crimes like rape and kidnapping are in the same league, morally, as murder.


Conclusion

In most circumstances, preference utilitarianism agrees with common sense and tells us that it is morally wrong to sacrifice innocent people. The examples that prove so destructive to classical utilitarianism do not pose a threat to preference utilitarianism.

However, it has to be said that preference utilitarianism does tell us that sometimes we are morally obliged to sacrifice innocent people. I have chosen four examples to illustrate this point.

1. In William Styron's novel Sophie's Choice, the eponymous heroine is a prisoner in a Nazi concentration camp. The crux of the story occurs when she is forced to choose between her two children. That is, she must decide which of the two is to be taken away to be killed in the gas chambers, and which is to be allowed to escape with her. In making her choice, she thereby saves one of her children; but her choice is no less the sacrifice of an innocent person for that.

Preference utilitarianism would say that her action was not immoral, since either option would thwart roughly equally strong desires, whereas refusing to make a choice at all would have thwarted even stronger desires. To make a choice was the lesser of two evils, and so it was morally obligatory.

Of course, the actions of those who forced her to choose were extremely morally wrong, as they would inevitably lead to the thwarting of the strongest desires of at least two people.

This story comes from a novel, but I would be surprised if situations like this imaginary one had not occurred during the Second World War.

2. In 1989, a fire broke out in the reactor room of the Soviet nuclear submarine K-278 Komsomolets while it was at sea. Horrific scenes followed. Fire and radiation spread quickly to many other parts of the submarine. In order to save themselves, crew members in Compartment 10 closed the bulkhead door to Compartment 9, thereby condemning its occupants to death. If they had not closed the door when they did, perhaps a few more men might have made it into Compartment 10, but the delay would almost certainly have meant the death of everyone in Compartment 10 as well. The action of closing the door was the sacrifice of innocent people, or so it seemed to those who did it.

But again, preference utilitarianism would say that the men in Compartment 10 did the morally right thing. Everyone involved had strong desires to continue living. The closing of the bulkhead door thwarted many of these desires. But leaving it open would have thwarted all of them.

3. Every year, hundreds of people are killed in road accidents in Ireland. These innocent people are "sacrificed", in a sense, because although we know that roughly this number of innocent people will die, and that their deaths could be prevented by simply banning all road traffic, we choose the course of action that leads to their deaths. We do so for no more noble reason than that we want to be able to make everyday journeys quickly and easily.

Preference utilitarianism would defend this decision as well. The crucial factor is not that the risks are very slight, but that they are undertaken willingly. Suppose driving were much more dangerous than it actually is, so that getting into a car was practically an act of suicide. Preference utilitarianism would still say that it is morally right to allow people to drive, as long as they are doing it willingly.

Since present arrangements allow drivers to do what they want to do, and alternative arrangements would prevent them from doing what they want to do, preference utilitarianism says that the arrangements should stay as they are.

4. My final example is too complicated to go into in much detail. I will outline the main ideas. It has to do with our arrangements for protecting ourselves from criminals, and how we protect innocent people from those very arrangements.

Any system of criminal justice, if it is to successfully put criminals into prison, runs some risk, even if very slight, of putting innocent people into prison as well. As the years pass, and thousands upon thousands of cases are processed, even the very best system we can conceive of will inevitably "sacrifice" some innocent people.

The only way to be absolutely sure that no innocent people are sacrificed would be to have no system of criminal justice at all. That would lead to an explosion of crime, including murder, which would be another, worse kind of sacrifice of innocent people.

So the only real option available is one of various possible systems along a spectrum ranging from the very mild to the very severe. A system is "mild" if it puts away a small proportion of criminals. This sort of system would sacrifice a correspondingly small number of innocents, but it would permit a correspondingly large number of innocents to fall victim to crime.

A system is "severe" if it puts away a large proportion of criminals. This sort of system would sacrifice a correspondingly larger number of innocents than the mild system, but it would permit a correspondingly smaller number of innocents to fall victim to crime.

How severe should a system of criminal justice be? Preference utilitarianism can help us to answer this question. It depends greatly on circumstances. Among these would be: how strongly we want to avoid becoming victims of serious crime; how strongly we want to avoid being imprisoned for crimes we did not commit; how much we mind being imprisoned for crimes we did commit; and so on.

These desires can reasonably enter the calculations because their relative strengths can be compared: they are of the same order of magnitude because they all have to do with life and liberty.

It should be possible, in principle anyway, to estimate the degree of severity of the system that corresponds to the greatest satisfaction of desires. Preference utilitarianism says we ought to maintain a system with just that degree of severity.
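Here is a very rough sketch of what such an estimate might look like. The two curves and all the numbers are invented assumptions, made only to show the shape of the calculation: more severity is assumed to mean fewer victims of crime but more innocents imprisoned, and both kinds of thwarted desire are treated as being of the same order of magnitude, as argued above.

```python
# Sweep a severity parameter from 0 (no criminal justice system at all)
# to 1 (maximally severe) and pick the value that minimizes the total
# weight of thwarted life-and-liberty desires. The curves are invented.

def crime_victims(severity):
    return 10_000 * (1.0 - severity) ** 2    # falls as severity rises

def wrongly_imprisoned(severity):
    return 200 * severity ** 2               # rises as severity rises

def thwarted_desires(severity):
    # Both figures concern life and liberty, so they are simply summed.
    return crime_victims(severity) + wrongly_imprisoned(severity)

severities = [s / 100 for s in range(101)]
best = min(severities, key=thwarted_desires)
print(f"estimated best severity: {best:.2f}")
```

On these particular invented numbers the optimum comes out very severe; with different assumptions about crime and wrongful imprisonment it would come out milder, which is just the dependence on circumstances noted above.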

In extreme circumstances, where there is a lot of crime, the system may need to be very severe indeed, and for certain kinds of crime we may even be obliged to suspend "normal" procedures such as trial by jury. So be it. As long as our arrangements maximize the satisfaction of desires, where these have been given due weighting according to their strength, we are doing what we ought to do.

[The following was written in the early 1990s — JB 2007]

A debate is taking place in Ireland at the moment over whether those thought to be involved in terrorist crimes should be interned without trial. A normal trial by jury is not possible for many terrorist offences, because both witnesses and members of the jury are liable to be intimidated, or worse. For terrorist-type criminal offences, judicial decisions to imprison people are already taken by the so-called "Special Criminal Court", in which three judges work in consultation with high-ranking police officers. This is a less-than-ideal alternative to "normal" criminal law courts with sworn witnesses and juries, and the rest of the apparatus of criminal justice that ideally ought to be in place.

Many of those who oppose internment point out that an inevitable consequence would be a greater number of innocents sacrificed by the system. But, as we have seen, such arrangements may be the least of a number of evils, and so might be morally right. The question whether we should or should not fall back on these less-than-ideal arrangements cannot be answered with the simple sweeping statement that it is always wrong to imprison people without a full jury trial. We have to take account of unusual circumstances.

The preference utilitarian would thus agree with the Irish philosopher Edmund Burke, who held that "government is a contrivance of human wisdom to provide for human wants". He would also agree with Burke's claim that "circumstances are what render every civil and political scheme beneficial or noxious to mankind". Given any particular set of circumstances, we may want to deny the necessity of extreme measures. But to deny the necessity of sacrificing innocent people under any circumstances would be hypocrisy. Every system of criminal justice entails the sacrifice of the innocent.