Christian Gonzalez

How not to think about morality

In the last chapter we established that the First Question is ‘what should I do?’ and that in order to answer it we first need to understand the central word ‘should’. Doing that requires laying some groundwork first. How are we to determine which understanding of ‘should’ we should operate with? What’s our methodology?

In order to build out a newly refactored web of beliefs, it’s important to iron out which methodology I’ll be using when attempting to understand certain key concepts. And in order to motivate the methodology I’ll be using for answering normative or moral questions, it’s worth taking a brief detour to understand why the currently dominant style of thinking about normativity, as typically practiced by moral thinkers, fails.

The 2,500-year-old mistake in normative methodology

A fundamental issue with normative philosophies has to do with how we should understand certain key concepts. It’s common practice for thinkers to try to figure out the “right” definitions and principles for normative language, ones that cover all or nearly all acceptable uses of words like “good” and “right.” This is a problem because evaluative terms (like good, right, should, desire, preference) are used in so many disparate contexts that it’s impossible to define those key concepts in a way that encompasses all modern uses of those words.

Here’s how I like to visualize the problem (for language about actions and states of affairs, as in morality and decision theory): we can ask ourselves at least two questions about any sentence that makes a normative judgement or prescription:

  1. Is the statement about an individual, a collective, or somewhere in between?

  2. Does the statement appeal to precise rules or fuzzy intuitions?

So each statement can be placed along at least two axes in ‘normative judgement space’. (There are certainly more than two features of a normative statement worth considering, but more precision isn’t needed for the point I’m trying to make.)

The axes are:

<Intrapersonal—————Interpersonal>

<Rules—————Intuition>

These two axes are independent of one another, so we can, in some sense, represent all possible act-related normative statements as points lying on this two-dimensional grid, this ‘normative judgement space’. Of course, we would want to distinguish nonsensical normative statements from actually meaningful ones. Using the criterion that the meaning of a concept is rooted in its common use, we could (theoretically) ask people whether each of these statements makes sense, more or less. The people we ask could be any linguistic community: native English speakers, people from the same country, a certain social class, etc. The choice isn’t crucial to the underlying point; what’s relevant is that we’re picking out the meaning of these special concepts by figuring out how people would use them.

Let’s color the points (which represent statements) as blue if most (or a sufficiently large percentage of) people would say, “yeah, that statement makes sense,” and red otherwise. I imagine these points would form loose clusters.

My theory is that what normative thinkers in general, and moral philosophers in particular, have been doing is trying to group together blue dots while excluding as many red ones as possible: a clustering algorithm of sorts. Applying this model specifically to morality as an example, the major moral theories (e.g. rule/act-consequentialism, virtue ethics, care ethics, contractualism, ethical egoism, deontology, etc.) are simply formalizations of these clusters of moral language: the equation of the boundary curve is the analogue of the formalized moral theory.

Below is a visualization of what I’m envisioning:

[Figure: normative judgement space, restricted to moral statements]
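
To make the clustering picture a bit more concrete, here’s a minimal toy sketch in Python (assuming NumPy and scikit-learn are available). The points and labels are made up purely to illustrate the metaphor, not real survey data; the fitted boundary stands in for a ‘formalized moral theory’ that tries to enclose the blue points while excluding the red ones.

```python
# Toy sketch of the 'normative judgement space' metaphor (illustrative only).
# Each statement is a point on two axes: intrapersonal<->interpersonal (x)
# and rules<->intuition (y). Points are labelled 1 ("makes sense" to most
# speakers, blue) or 0 ("doesn't make sense", red). A fitted decision
# boundary then plays the role of a formalized moral theory.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: a loose blue cluster of "sensible" statements plus
# scattered red points. None of this is real survey data.
blue = rng.normal(loc=[0.6, 0.4], scale=0.15, size=(80, 2))
red = rng.uniform(low=0.0, high=1.0, size=(40, 2))

X = np.vstack([blue, red])
y = np.array([1] * len(blue) + [0] * len(red))

# Fit a simple boundary; its equation is the analogue of "the formalization
# of the moral theory" in the text.
theory = LogisticRegression().fit(X, y)

coef, intercept = theory.coef_[0], theory.intercept_[0]
print(f"boundary: {coef[0]:.2f}*x + {coef[1]:.2f}*y + {intercept:.2f} = 0")
print("training accuracy:", theory.score(X, y))
```

The specific classifier doesn’t matter; the point is the shape of the exercise: place statements on the two axes, label them by whether speakers find them sensible, and fit a boundary that formalizes the blue cluster.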

This is an issue not only because we use moral and normative concepts in different ways even in the same context (to say nothing of cultural and educational differences), but also because their meanings change over time. The result is philosophers in different eras playing catch-up with an ever-shifting locus of evolving concepts. They’ve essentially been chasing their tails for thousands of years.

Rather than trying to lasso together as many kosher (in this plot’s case, moral) statements as possible under the banner of a formal moral theory, a better goal, I propose, is to think of moral philosophy in particular, and normativity (e.g. morality, decision theory) in general, as an axiomatic system akin to a new mathematical subfield.

That’s not to say that we humans actually think axiomatically when we morally reason: of course we don’t. We operate on heuristics, intuition and guesses nearly the whole time. What I’m proposing is to aim at building an axiomatic system so we at least know the theoretical underpinning of how we should act, and why. If along the way we find that this system provides a useful way of thinking and operating in the world, then that’s as good a reason as any to use it.

The key part of an axiomatic system is defining concepts and rules of inference, in our case normative concepts (like good, right, should, desire, preference). Note that we’re defining these concepts. That means we don’t have to find the definition which captures the actually used meaning of a concept in everyday speech. We can simply state what the meaning of a key concept is. Why my definitions over anyone else’s? My claim is that once you look through the lens I’m outlining, you will find clarity and usefulness in adopting this mental framework. This grounds out in a sort of pragmatism.
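
To illustrate what defining concepts and deriving consequences from axioms looks like in practice, here is a minimal, content-free sketch in Lean. The predicates and the single axiom are placeholders chosen only to show the shape of an axiomatic system; they are not the definitions or axioms this series will actually propose.

```lean
-- A content-free illustration of the *shape* of an axiomatic system:
-- primitive notions, an axiom relating them, and a theorem derived from it.
-- These predicates and this axiom are placeholders, not proposals.

axiom Action : Type
axiom good : Action → Prop
axiom permissible : Action → Prop

-- Placeholder axiom: whatever is good is permissible.
axiom good_implies_permissible : ∀ a : Action, good a → permissible a

-- A derived consequence, showing how theorems follow from axioms.
theorem permissible_of_good (a : Action) (h : good a) : permissible a :=
  good_implies_permissible a h
```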

Of course, we’ll try to define these words so that they stay close to the “true” meaning of each concept as used in common speech. But losing some familiarity in the process of giving more precise definitions is a worthy sacrifice if we want to clear up confused normative thinking once and for all (or at least make some progress towards that ideal). The goal is to have useful definitions for thinking about morality. This is the heart of conceptual engineering.

Now, I won’t begin by throwing out a list of moral axioms and definitions and proceeding to derive action-guiding normative principles from them. This isn’t Euclid’s Geometry, nor should it be. When we’re done, that will, in theory, be possible. But that only works at a certain level of abstraction and generality. Trying to deduce more particular actions from an axiomatic normative theory would be like trying to figure out a good retirement portfolio starting from quantum field theory. That’s just a poor strategy.

So in the next post, we’ll lay out the messy groundwork for thinking clearly about morality, so that we can start to answer the First Question. This involves defining the key normative concepts like good, right, should, desire, and preference. From that springboard, the answer to the First Question will naturally fall out.