Utilitarianism is silly and you all need to cut it out

What is Utilitarianism?

Ah utilitarianism. The favorite moral theory of autists and engineers everywhere. Try as philosophers might, the theory just won't go away. It's been called Machiavellian, sociopathic, toddler socialist math, and worse yet. But what is utilitarianism exactly?

One model I like for thinking about morality breaks it down into two sub-theories. There's a theory of goodness and a theory of rightness. A theory of goodness describes what's good or valuable (I'll use those terms interchangeably). A theory of rightness can be thought of as a model that takes inputs like facts, probable outcomes, possible actions, values, constraints, etc., and outputs an assignment of each possible action as right, permissible, or wrong. Utilitarianism's theory of rightness is pretty straightforward.

Theory of Rightness: An act is right if, and only if, it maximizes the expected good (formalized as a 'utility function'), and wrong otherwise.
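Roughly, in symbols (just one way to formalize it, with A the set of available actions and U the utility function over outcomes):

$$\text{right}(a) \iff a \in \arg\max_{a' \in A} \; \mathbb{E}\big[\,U(\text{outcome}(a'))\,\big]$$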

This seems pretty reasonable at first glance. More of what's truly valuable or good is better than less of it, all else equal. Without a theory of goodness, however, this theory of action technically just gives you the umbrella view known as consequentialism. We need to mix in a theory of goodness before we can bake a utilitarianism pie. Utilitarianism's theory of goodness is as follows.

Theory of goodness: The only thing that's valuable is happiness/pleasure/wellbeing, where everyone's counts equally.

(If you like to distinguish between those three terms, pick your favorite, choose-your-own-adventure style. I'm going to use 'wellbeing' since it carries the least baggage.)

Great! We have utilitarianism in our hands. Initially, this theory doesn't seem so criminal. In fact, its egalitarian nature is likely appealing to the modern liberal. Furthermore, most would agree with the statements a) "wellbeing is good" and b) "more of it is better than less of it." In other words, wellbeing should be maximized. a) Theory of goodness? Check. b) Theory of rightness? Check.

As it stands now, utilitarianism actually seems pretty attractive. Sure, you might have an aversion to its hedonic nature, but that's fine. At the very least it's debatable rather than reprehensible (given a long-view version of hedonism, that is). But if it's still an issue for you, add some weighted values besides wellbeing to your theory of goodness. You can add relationships, autonomy, rights, etc. as values and, say, weight autonomy 1000x as much as one 'unit of happiness', however you want to measure that. You can be a value-pluralist consequentialist instead. Build your own moral theory!
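Here's a toy sketch of what that could look like. The value names and weights are made up for illustration, nothing canonical about them:

```python
# A toy value-pluralist utility function: score an outcome as a weighted
# sum of several things you might take to be intrinsically valuable.
# The value names and weights are purely illustrative.

WEIGHTS = {
    "wellbeing": 1.0,        # one 'unit of happiness' = 1 point
    "autonomy": 1000.0,      # weighted 1000x a unit of happiness
    "relationships": 50.0,
    "rights_respected": 500.0,
}

def utility(outcome: dict) -> float:
    """Weighted sum of the values realized in an outcome."""
    return sum(WEIGHTS[value] * outcome.get(value, 0.0) for value in WEIGHTS)

# A tiny gain in autonomy outweighs a decent chunk of raw happiness:
print(utility({"wellbeing": 10.0}))                    # 10.0
print(utility({"wellbeing": 1.0, "autonomy": 0.05}))   # 51.0
```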

If any of what was just said seems somewhat coherent, you might be wondering where philosophers take issue with the theory. What exactly are they saying? Well, rest assured, dear reader, they're not saying anything of value.

Objections to utilitarianism

Up until now, most of the literature arguing against utilitarianism has gone something like this:

"Imagine this infinitely unlikely scenario in which the utilitarian calculus says that this clearly morally reprehensible action is morally right (e.g. framing an innocent man to placate a mob, or killing a healthy one for their body parts, for instance). Now does this sound moral to you? 🥺👉👈"

What a ridiculous argument. Imagine if physicists responding to Einstein at the dawn of relativity had objected by saying, "So you're saying the faster you move, the more mass you have? Now does that sound physical to you? uwu" Truth is often unexpected and counterintuitive. Unless you're just trying to make all your beliefs mutually consistent (in which case you'd have to strip away almost all of them except for hedonism and optimization), your intuitions don't matter. As the saying goes, facts don't care about your... intuitions.

Why utilitarianism is appealing

Why is it so hard to get rid of this theory? And why is it so appealing to STEM-lords in particular?

Whether it's from sex, drugs, and rock and roll, or from reading a book, creating something, or cultivating meaningful relationships, wellbeing is clearly good, all else equal. It's even arguable that anything worth something is only valuable insofar as it results in some pleasurable state of consciousness for someone (hint hint, more on this later). While certainly debatable, I think the techies vibe with this theory of goodness because all of the non-hedonic acts that seem worth doing, all the meaningful acts, are far more gray than black and white. Comparatively, the question of whether someone is experiencing a state of consciousness with a positive valence is significantly more black and white. And if you spend 4 years doing math and programming all day, you're more likely to resonate with black-and-white theories of goodness.

What about utilitarianism's theory of rightness? Why is it appealing to quants? That's easier to answer: they treat morality as an optimization problem! For those who don't work in technical fields, optimization is one of the fundamental tools in any left-brained individual's toolbox. Lagrangian mechanics (that is, almost all of classical physics and beyond), machine learning, engineering a product given constraints: these are all optimization problems at bottom. It's therefore natural for someone who spends most of their System 2 brainpower on optimization problems to apply that framing to other problems, such as morality.
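Schematically, the framing looks something like this. A toy sketch with made-up actions, probabilities, and a bare-bones wellbeing-only utility function:

```python
# Morality as an optimization problem, schematically: pick whichever
# action has the highest expected utility. Everything here is a stand-in.

def expected_utility(action, outcomes, utility):
    """Probability-weighted utility over an action's possible outcomes."""
    return sum(p * utility(o) for o, p in outcomes[action])

def right_action(actions, outcomes, utility):
    """On this view, the 'right' act is the expected-utility maximizer."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Toy example: two actions, each with a small distribution over outcomes.
outcomes = {
    "donate": [({"wellbeing": 5.0}, 0.9), ({"wellbeing": 0.0}, 0.1)],
    "keep":   [({"wellbeing": 2.0}, 1.0)],
}
wellbeing_only = lambda o: o["wellbeing"]

print(right_action(["donate", "keep"], outcomes, wellbeing_only))  # donate
```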

There's nothing wrong with optimization. In fact, I think optimization is the right approach in ethics. The problem is more subtle: which utility function are we optimizing?

The real problem with utilitarianism

Remember earlier when I said, "more of what's truly valuable or good is better than less of it, all else equal"? And then we proceeded to posit that what's truly valuable or good is wellbeing? That's all well and good, but one small question: whose wellbeing? It's easy to nod along when someone says "wellbeing is good." You do, as a matter of fact, actually care about your wellbeing. But you care about your wellbeing and that of those close to you (we have evolutionary reasons not to be egoists). In other words, you care about maximizing your local utility function, not some abstract global utility function.
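To make the distinction concrete, here's a toy sketch (the people, weights, and numbers are all made up): both functions sum over the same people's wellbeing, but the global one weights everyone equally while the local one weights people by how close they are to you.

```python
# Global vs. local utility, schematically. An outcome maps each person to
# a wellbeing level; 'closeness' encodes how much you actually care about
# each person. All numbers are made up.

def global_utility(outcome: dict) -> float:
    """The utilitarian's function: everyone's wellbeing counts equally."""
    return sum(outcome.values())

def local_utility(outcome: dict, closeness: dict) -> float:
    """Your function: wellbeing weighted by closeness to you."""
    return sum(closeness.get(person, 0.0) * w for person, w in outcome.items())

closeness = {"me": 1.0, "my_kid": 1.0, "stranger": 0.05}

a = {"me": -1.0, "my_kid": -1.0, "stranger": 5.0}
b = {"me": 1.0, "my_kid": 1.0, "stranger": 0.0}

print(global_utility(a), global_utility(b))                      # 3.0  2.0
print(local_utility(a, closeness), local_utility(b, closeness))  # -1.75  2.0
```

The global function prefers outcome a; the local one prefers b. That gap is the whole fight.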

At this point utilitarians typically say, "if we take the perspective of the universe, it just makes sense to optimize everybody's utility function." First of all, even if it made sense to 'take the perspective of the universe' in morality (something that isn't even possible in comparatively less gray physics), it's far more likely that from that perspective nothing is valuable. The universe doesn't care about value or goodness or wellbeing; it just is. Value only makes sense to talk about from the vantage point of some conscious creature. And from our vantage point, nobody actually cares about everybody's wellbeing equally. Don't kid yourself, you don't. Should you? 'Should' questions can only be answered by some normative theory like utilitarianism, so appealing to one here would be circular.

Second of all, neither you, nor I, nor any other individual asking the First Question is the universe! We're part of it, and we're connected to it, sure. But we're not the universe as a whole. So why should I take the perspective of something I'm not when making moral considerations, only to then go on acting from my own perspective? Any justification for making that perspectival switch would essentially be yet another normative prescription. But, again, that's circular.

So what went wrong? As described above, utilitarianism seemed pretty reasonable, sans the potential gripe with its hedonic nature. How do we rectify that? The issue is that the utilitarian argument presented above subtly baited you into (maybe) agreeing with it.

Jeremy Bentham, the founder of utilitarianism, embalmed at the UCL campus at his request. Who thinks this is a good idea? Can we really trust this guy's judgement?

Specifically, when you agree to the statement "wellbeing is good and should be optimized for," you're really agreeing to the more precise statement, "from my vantage point, the wellbeing of myself and of those close to me is good, and should be optimized for." But utilitarians sneakily make you think you're agreeing to the statement "everybody's wellbeing is good and should be optimized for." See what I mean? Sheer silliness.

The sleight of hand utilitarians pull is important to internalize, but we actually come away with much more. This essay wasn't just a negative argument; there was a positive argument in there as well. Specifically, the more reasonable moral theory is not to optimize global utility, but to optimize local utility. This doesn't have to be hedonic, either. So when thinking about the question, "what should I do?", the answer is a straightforward, albeit still abstract, "that which maximizes your local utility function." Now that's something all my fellow nerds out there can safely get behind.
