Morality from pain and pleasure

So far, the options we have considered as sources of morality have been sort of abstract, wishy-washy concepts. Culture is hard to define, religion is hard to interpret, and feelings are unreliable. But pain and pleasure are simple. If something hurts, it’s bad; if it feels good, it is good. It should be much easier to use pain and pleasure to decide what is morally right and wrong. Well, we’ll see.

So let’s start with the obvious assumption that pain is bad and pleasure is good. On that assumption, actions that cause pain are impermissible and actions that cause pleasure are permissible (maybe obligatory). This moral theory is called utilitarianism. It’s the view that morality comes from the pain or pleasure that actions cause. The British philosopher Jeremy Bentham thought that pain and pleasure were the most obvious and most basic motivations for humankind. He thought that it only made sense that morality should be based on them.

Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do. On the one hand the standard of right and wrong, on the other the chain of causes and effects, are fastened to their throne. They govern us in all we do, in all we say, in all we think: every effort we can make to throw off our subjection, will serve but to demonstrate and confirm it. In words a man may pretend to abjure their empire: but in reality he will remain subject to it all the while.[1]

That’s simple enough, but you can probably already think of complications. What if something causes both pain and pleasure at the same time? What about emotional pain and pleasure? What if someone likes pain? What if someone can’t feel pleasure? What if something is pleasurable at first, but then painful later? Whose pain and pleasure do we have to consider?

Using pain and pleasure to determine morality seems simple at first, but gets complicated fast. Some of these issues are easy to resolve, but some are not so easy. I’ll start with the easiest. How do we account for emotional pain or psychological pain? Utilitarians have a pretty straightforward solution to this. They just consider all those things to be the same. For utilitarians, pain is pain. Physical, psychological, emotional, and intellectual pain are all considered to be the same thing, and likewise for pleasure.

That was easy, so what do we do about the masochist who likes pain, or at least doesn’t mind it? There is a simple solution to this too: when we say “pain”, what we are really talking about is suffering. “Suffering” is just pain that someone doesn’t want. If you stub your toe and you don’t like the shooting pain running up your whole leg, that’s suffering. If you are a runner and you like the burning in your muscles when you run, that’s not suffering. So we just keep in mind that “pain” really means “suffering”. Some pain might be good, but suffering is never good. This also solves the problem of someone who can’t feel pain or pleasure. We just don’t worry about them. If they don’t feel pain, they aren’t suffering and it isn’t bad. If they don’t feel pleasure, then it isn’t good. Utilitarians only care about the pain and pleasure that people experience. If they don’t feel it, or they don’t care about it, it doesn’t count.

What about the question of who counts? Utilitarians have a pretty simple answer to this as well: Everyone. For utilitarians, anyone who can experience pain and pleasure counts, and everyone counts equally. This gets back to the issue of personhood (what things count as persons). For utilitarians, if animals experience pain and pleasure, then they are persons and they count equally with humans. If some humans can’t experience pain and pleasure (like maybe people in comas), then they are not persons and they don’t count. So under utilitarianism, there could be non-human persons, and non-person humans. This seems weird at first, but actually under the other moral theories, the same thing can happen. For example, to a cultural relativist, a hermit who is not included in any culture would be a non-person. For a religious authoritarian, an atheist would be a non-person. They wouldn’t count because they don’t have the thing that matters (depending on the theory) in common with everyone else. Bentham called the collection of all persons (things that count morally) ‘the community’.

IV. The interest of the community is one of the most general expressions that can occur in the phraseology of morals: no wonder that the meaning of it is often lost. When it has a meaning, it is this. The community is a fictitious body, composed of the individual persons who are considered as constituting as it were its members. The interest of the community then is, what?—the sum of the interests of the several members who compose it. …

VI. An action then may be said to be conformable to the principle of utility, or, for shortness sake, to utility, (meaning with respect to the community at large) when the tendency it has to augment the happiness of the community is greater than any it has to diminish it.[2]

Bentham sometimes used ‘happiness’ rather than ‘pleasure’, but he meant the same thing: the best action is the one that creates more pleasure for the community than it reduces pleasure for the community.

But the more complicated issue is the way some actions cause both pain and pleasure, or cause pleasure at first and pain later. In these cases, utilitarians have to say that you should figure out, as best you can, how much total pain and total pleasure the action causes, and then calculate the net difference. So if, for example, exercising causes some short-term physical pain (soreness), but later causes long-term psychological pleasure (satisfaction, confidence) and maybe some physical pleasure (good health), then a utilitarian would say that overall it is good. Bentham has a method for calculating these things, but it gets complicated.

To a number of persons, with reference to each of whom the value of a pleasure or a pain is considered, it will be greater or less, according to seven circumstances: to wit, the six preceding ones; viz.

1. Its intensity. [How much pleasure?]

2. Its duration. [How long does it last?]

3. Its certainty or uncertainty. [How sure are you that it will happen?]

4. Its propinquity or remoteness. [How soon will it happen?]

5. Its fecundity. [Does it bring about more pleasures?]

6. Its purity. [Is the pleasure mixed with pain?]

And one other; to wit:

7. Its extent; that is, the number of persons to whom it extends; or (in other words) who are affected by it.[3]

So utilitarians can accommodate different aspects of pains and pleasures, but it means that moral judgments get really complicated really fast.
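To get a feel for how the bookkeeping piles up, here is a minimal sketch, in Python, of what a tally along Bentham’s seven circumstances might look like for the exercise example above. Bentham never wrote anything like this, and every number, scale, and name in the sketch is invented purely for illustration.

```python
# A toy version of Bentham's calculus: score each pleasure or pain an action
# produces, weight it by how certain and how soon it is, and add everything up.
# The scales and numbers below are made up purely for illustration.

def value(intensity, duration, certainty, propinquity):
    """Value of one pleasure (positive) or pain (negative) for one person."""
    # intensity: how strong it is (positive for pleasure, negative for pain)
    # duration: how long it lasts
    # certainty: how likely it is to actually happen (0 to 1)
    # propinquity: how soon it arrives (1 = right now, smaller = farther off)
    return intensity * duration * certainty * propinquity

# The exercise example: soreness now, satisfaction and good health later,
# all felt by one person (extent = 1). Fecundity and purity show up as the
# further pleasures the action brings about and the pains mixed in with them,
# which is why they all sit in the same list and offset one another.
effects = [
    value(intensity=-3, duration=2,  certainty=1.0, propinquity=1.0),  # soreness
    value(intensity=+2, duration=10, certainty=0.8, propinquity=0.5),  # satisfaction
    value(intensity=+4, duration=20, certainty=0.6, propinquity=0.3),  # good health
]

net = sum(effects)  # extent: repeat this whole tally for every person affected
print(f"net tendency: {net:+.1f}")  # positive means conformable to utility
```

Even in this toy form, every input is a guess, and the verdict can flip if the guesses change, which is exactly why utilitarian judgments get complicated so fast.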

The thing that makes people uncomfortable with utilitarianism, even if they get past the complexity, is that, since everyone counts equally, an action that causes pleasure for a lot of people but a lot of pain for some people would be permissible, maybe even obligatory. In a lifeboat scenario, for example, where too much weight will cause the lifeboat to sink, it would cause a lot of pain to everyone if the lifeboat sinks, so it might be obligatory to throw the heaviest person overboard in order to save the rest. Sacrificing one to save many could be obligatory for utilitarians, and to a utilitarian it doesn’t matter who that one is. If it’s you, too bad.

The other main problem for utilitarians is that they only care about consequences. You might have noticed that in all the examples above, utilitarians aren’t interested in why someone does an action, or what their intentions are, only in the effects it has. For some people, this is unsatisfying. It might make a difference to you whether someone does an action out of malice, out of kindness, or completely accidentally. But utilitarians don’t care. All that matters is whether it causes more total pain or more total pleasure.


[1] Bentham, Jeremy, The Principle of Utility, 1822. Chapter 1, part 1.

[2] Ibid., parts 4 and 6.

[3] Ibid., Chapter 4, part 4.