Against Irreducible Normativity
By Lukas_Gloor @ 2020-06-09T14:38 (+48)
Last updated: 20/1/2022
This is the third post in my sequence on moral anti-realism; it works well as a standalone piece. (See 1 and 2 for my previous posts.)
Summary
- After briefly explaining the concept of irreducible normativity, I develop a three-tiered argument against the versions of moral realism that are based on it.
- First, I summarize evolutionary debunking arguments, which aim to establish that regardless of whether there are irreducibly normative truths, our intuitions about normative bedrock concepts (especially morality) evolved to track something else and therefore cannot be trusted. This is problematic both because it means that the search for moral progress is likely doomed and because it calls into question the reasons for taking irreducible normativity seriously in the first place.
- Second, I try to change the perception of normative anti-realism as a self-defeating framework. Through careful consideration of the sources of meaning in our lives, I argue that those sources are compatible with normative anti-realism (at least for most of us). I provide a sketch of what it could look like for anti-realists to reason about ethics, pointing out some ways in which self-determined moral goals can feel more meaningful than externally imposed ones.
- Third, I note that the way irreducible normativity is commonly motivated stands in tension with how words obtain their meaning. I then delve into various ways one could try to make irreducible normativity work as a concept. Some options are too disconnected from what we want to do, and others are too close to it (in the sense that, as far as practical purposes are concerned, they overlap with how anti-realists would also approach normativity). I note that the most attractive way to think about irreducible normativity closely resembles normative naturalism. Finally, I conclude that if the arguments in this post are sound, there's a tension between moral realism worthy of the name and the notion of open-ended normative uncertainty.
What is irreducible normativity?
In this article, I will discuss examples from morality as an instance of irreducible normativity. As I will explain in the very last section, I think analogous arguments also apply against other forms of normative realism.[1]
Irreducible normativity is the idea that there are facts about what we have reason to do (or believe) that go beyond what’s in line with our subjective evaluation criteria (e.g., our desires, fundamental intuitions, ideals, etc.). These facts would capture that some things qualify as “right” or “good” according to normative standards that apply independently of what we already endorse. Derek Parfit expressed it as follows (Parfit, 2011a):
Like some other fundamental concepts, such as those involved in our thoughts about time, consciousness, and possibility, the concept of a reason is indefinable in the sense that it cannot be helpfully explained merely by using words. We must explain such concepts in a different way, by getting people to think thoughts that use these concepts. One example is the thought that we always have a reason to want to avoid being in agony.
Parfit holds that “reason” is a bedrock concept (Chalmers, 2011)—a concept that is “irreducible” in the sense that we cannot helpfully define it with other words. We can contrast this with the view that “reason” is a “reducible”[2] concept. On the reducible interpretation, “having a reason to do X” might mean, for example, something like:
X is a principle we’d endorse if we had all the relevant information and ample time to reflect on it.
Anti-realists about normativity, who interpret all talk about reasons in the reducible way, can point to factors like the following to explain why “we always have a reason to want to avoid being in agony” is a compelling principle:
- indifference towards agony is (almost) non-existent among humans,
- such indifference is motivationally incongruent,[3] and
- empathy toward those experiencing tremendous distress compels us to advocate avoiding agony.
Of course, those factors are all subjective. They may not make the principle compelling to absolutely everyone. Certainly, we can imagine alien minds who think differently about agony avoidance. Normative anti-realists have to bite this bullet.
By contrast, Parfit’s stance is that such reductionist accounts are not enough. According to Parfit, “having a reason to do something” captures something more than just the above psychological factors. If “reason” could be defined in terms of psychological factors, the same reasons might not apply to everyone—and some people would have bizarre “reasons.” After all, some individuals could then claim that they have unusual psychologies, senses of morality, or life goals. Irreducible normativity has an aura of particular significance precisely because there are no specifiable requirements needed to buy into it.
On the other hand, that’s also why the concept appears suspect. The fact that we cannot explain what we mean by it should give us pause—perhaps our intuitions fail to point at anything meaningful?
The case against irreducible normativity
The disagreement about irreducible normativity is neither empirical nor about logic. As I argued in my previous post, I think it’s a disagreement about how to do philosophy, specifically about whether or not to incorporate normative bedrock concepts into one’s philosophical repertoire. In the upcoming sections, I will argue that irreducible normativity is philosophically costly (section 1), that we can reason satisfactorily without it (section 2), and that the way it is commonly motivated and justified begs the question of whether the concept is even meaningful (section 3). Overall, I will conclude that—except for a specific interpretation that I will address in future posts—irreducible normativity is not worth wanting.
1. Evolutionary debunking arguments
Babies are not objectively cute, hourglass or V-shaped human bodies are not objectively sexy, and honey is not objectively sweet. Those things are cute, sexy, or sweet to us (or “many of us”). Just as our intuitions about what’s cute, sexy, or sweet evolved because viewing the world in those ways proved evolutionarily beneficial in the ancestral environment, our intuitions about morality are a biological adaptation too. The intuition that morality is objective (“speaker-independent”) is a part of this adaptation. The philosopher Michael Ruse (2010) put it as follows:
To be blunt, my Darwinism says that substantive morality is a kind of illusion, put in place by our genes, in order to make us good social cooperators. I would add that the reason why the illusion is such a successful adaptation is that not only do we believe in substantive morality, but we also believe that substantive morality does have an objective foundation. An important part of the phenomenological experience of substantive ethics is not just that we feel that we ought to do the right and proper thing, but that we feel that we ought to do the right and the proper thing because it truly is the right and proper thing.
I don’t fully endorse Ruse’s terminology. As I argued in my previous post, moral anti-realism doesn’t mean that morality isn’t “substantive.” However, I think Ruse is correct about the general point: Our intuitions in favor of moral realism evolved for reasons that have no connection to whether the position is true. That should make us skeptical of those intuitions.
The same style of debunking argument can also be leveled against the content of our moral intuitions, not just whether they reflect an irreducible, speaker-independent reality. In her paper “A Darwinian dilemma for realist theories of value,” Sharon Street (2006) laid out a compelling argument, giving realists about irreducible normativity two hard-to-accept options. The first option is to assume that there’s no connection between our moral intuitions (“evaluative attitudes”) and irreducibly normative moral facts. Accordingly, if our moral intuitions were correct, it would be by sheer luck. The second option is to assume that there is a connection. Street argues that this second option would look implausible on scientific grounds because natural selection favored intuitions that help with survival and reproductive success, not ones that track irreducible normative truths.[4]
For a more rigorous and comprehensive version of the argument, I recommend Street’s paper or Joe Carlsmith’s LessWrong post The ignorance of normative realism bot.
Normative realists might argue that our ability to track irreducible normative truths could itself have been evolutionarily adaptive—a response that would circumvent Street’s dilemma. However, nothing about the way irreducible normative truths are usually motivated suggests that they work this way. If it were evolutionarily adaptive to grasp normative truths, these truths would have to interact directly with our cognitive machinery in a way we—presumably—could describe in physical terms. And if we can describe the effect those normative truths have on our cognitive machinery, we’d have found a way to describe them in non-normative terminology. On most accounts, this would make them no longer irreducible truths, but naturalist ones.
It’s important to note here that evolutionary debunking arguments only apply against versions of moral realism based on irreducible normativity (“moral non-naturalism”). There are also so-called naturalist versions of moral realism, according to which we can rephrase moral terms like “goodness” or “right” with non-moral terminology. For instance, a naturalist moral realist might say that goodness consists of preference satisfaction or positively valenced states of consciousness. We can explain perfectly well (especially in the personal case) why it might have been evolutionarily adaptive to value those things. Accordingly, naturalist accounts of moral realism are not threatened by the evolutionary debunking arguments.[5] (As I will discuss in the section “Summary, conclusion and open questions,” the distinction between naturalism and non-naturalism can be fuzzy, which makes it difficult to completely shut the door on irreducible normativity.)
Overall, I don’t consider evolutionary debunking arguments to be decisive. Some proponents of irreducible normativity also have interesting replies to them.[6] However, I consider the debunking arguments forceful enough to show that there’s something very strange about the concept of irreducible normativity, and that anyone who wants to adopt the concept into their philosophical repertoire has a great deal of explaining to do.
2. Normative anti-realism is existentially satisfying (at least it can be)
Many of us want to believe what’s right, pursue our goals rationally, and act morally towards others. Proponents of irreducible normativity may think that if we give up on the idea that right, rational and moral are bedrock concepts, we are accepting that all standards are arbitrary.
In my previous post, I tried to challenge this perception. I argued that giving up on irreducible normativity is perfectly compatible with retaining standards about how to reason, act, or treat others. Anti-realists don’t deny that there is structure to the space of normative considerations—they disagree with the realists only on whether there’s a single true interpretation.
Realists about normativity may interject at this point, saying that it’s bizarre to think that our most basic beliefs are interpretations only, that there’s no fact of the matter about whether and how they are justified.
I want to challenge this sentiment. I think it only seems bizarre because the standards we attach to the expression “no fact of the matter” are unreasonably strong. It’s correct that anti-realism means that none of our beliefs are justified in the realist sense of justification. The same goes for our belief in normative anti-realism itself. According to the realist sense of justification, anti-realism is indeed self-defeating.[7]
However, the entire discussion is about whether the realist way of justification makes any sense in the first place—it would beg the question to postulate that it does. Giving up on realism doesn’t directly change anything about the principles we use to ground our reasoning.[8] For instance, as anti-realists, we are likely to continue to endorse the following principles (among others):
- Adopt philosophical views that are in reflective equilibrium with the rest of our beliefs.
- Aspire to hold opinions that follow logically from premises we endorse.
- Aspire to believe something if we have observed it directly or have it from a trustworthy source.
- Adopt normative principles that strike us as self-evident, or ones that are most in line with our most fundamental normative intuitions (e.g., caring about others and preferring there to be less involuntary suffering in the world).
Going by those standards of reflectively endorsed belief-formation, anti-realism is not self-defeating at all. Those principles themselves may lack further justification, but that doesn’t have to concern us: they are evaluation criteria. They don’t need further justification because they are the axioms we ground our reasoning on. If these principles strike us as self-evident, that’s all the justification we can get.
Normative realists might say they only want to believe things that are really justified, justified in the sense of normative realism. However, humans already believed things long before we formed an understanding of the difference between realism and anti-realism. Our folk concept of what it means for a belief to be justified is neither (explicitly) realist nor (explicitly) anti-realist. (See also the discussion in this post.) I concede that many people might say that whether or not a belief is justified seems like a matter of irreducible normativity. Still, that doesn’t settle the question. Our intuitions in favor of realism could be misguided. There’s nothing inconsistent about conceptualizing our realist intuitions as wrong.[9]
I don’t mean to deny that people can commit themselves to the stance that their everyday motivations for believing things are tied inseparably to the truth or falsity of the controversial philosophical theory “realism about irreducible normativity.” I consider this to be a special case rather than the norm, and I will address it in upcoming posts.[10]
For most people, at least, I claim that they already endorse all the ingredients needed to reason satisfactorily about normativity from an anti-realist perspective. I will now sketch what this could look like:
Many readers of this post will be familiar with Peter Singer’s famous moral arguments, such as the drowning child argument (Singer, 1972) and the argument from species overlap (Singer, 1975). Singer may have presented these arguments in moral language (e.g., he talked about “moral obligations” to help those in need), but these arguments work independently of any particular metaethical position (Singer, 1973). The drowning child argument appeals to the sentiment that we may want to be the sort of person who saves a child from drowning and connects that sentiment to also wanting to help children overseas through donations. The argument from species overlap consists of appeals to two sentiments: that we don’t want to be the sort of person who mistreats any sentient members of our species, and that we don’t want to adhere to decision principles of discrimination, i.e., treating someone differently solely because of their group membership. The argument then illustrates that for any dimension we may consider to be of significance (e.g., sentience, intelligence, ability to speak, etc.), there is considerable overlap among species.
Singer’s moral arguments are not outliers. When we reframe normative-ethical arguments from an anti-realist perspective, their motivational force doesn’t change. But there’s something about the “place” that normative considerations take up in a person’s thought process that changes.
In his essay “A Critique of Utilitarianism,” Bernard Williams (Williams, 1973) argued that there is something wrong with the utilitarian thought process. If someone believes utilitarianism is the right moral theory, there’s an important sense in which there is no room for the person to choose their life projects. According to utilitarianism, what people ought to spend their time on depends not on what they care about, but on how they can use their abilities to do the most good. What people most want to do only factors into the equation in the form of motivational constraints: constraints about which self-concepts or ambitious career paths would be sustainable in the long run. Williams argues that this utilitarian thought process alienates people from their actions since it makes it no longer the case that actions flow from the projects and attitudes with which these people most strongly identify.
Williams framed this as an argument against utilitarianism. However, I find that what he was objecting to was primarily the existence of external moral obligations (perhaps in conjunction with consequentialism).[11] Williams’s critique misses the mark for people who think of utilitarianism (or consequentialism more generally) as a personal philosophy. Under anti-realism, the arguments for consequentialist morality don’t disappear—they take on a different, less prominent place in people’s philosophical framework. Instead of treating utilitarianism as the One Compelling Axiology, we consider it a personal, morally inspired life goal.
Other moral frameworks—such as contractualism or virtue ethics—remain relevant under anti-realism in the same way. Virtue ethics describes the consideration space about what sort of person we want to be, and contractualism the consideration space about how we want to think about the relation between us pursuing our life projects and other people pursuing theirs.
As I hope to convey further in future posts of this sequence, and as Luke Muehlhauser has sketched in his post on Pluralistic Moral Reductionism, the resulting picture of ethics is intellectually satisfying. It conforms to a Wittgensteinian view of philosophy that doesn’t leave us with notions we don’t understand. As it’s summarized in the Stanford Encyclopedia of Philosophy:
[...] Wittgenstein holds [...] that philosophers do not—or should not—supply a theory, neither do they provide explanations. “Philosophy just puts everything before us, and neither explains nor deduces anything. Since everything lies open to view there is nothing to explain” (PI 126).
All normative-ethical perspectives take on their proper places at the level where the considerations most resonate with us. Anti-realism doesn’t make ethics less serious; nothing about it is watered down. We don’t need external obligations to feel the weight of opportunity costs on our shoulders. Regardless of one’s metaethical leanings, after encountering Singer’s arguments, we can’t help but view the world with different eyes. Baby cows begin to look like differently-shaped human toddlers. The price tags for luxury watches become symbols of the foregone opportunity to save and improve the lives of people in extreme poverty.
Moral realism or not, our choices remain the same. What’s conceptualized differently under anti-realism is that we no longer frame them in terms of what’s (externally) moral, but in terms of what sort of person we want to be and what we want to live for. Shouldering the responsibilities of consequentialism—if we decide to go down that road—won’t feel like an attack on our integrity, since we’d be choosing it freely.[12]
Other people’s life choices may differ from ours. In some instances, we might be able to point out that they’re committing an error that they could recognize by their own criteria. In that case, normative discussions can remain fruitful. Unfortunately, this won’t work in all instances. There will be cases where, no matter how outrageous we find someone’s choices, we cannot say that they are committing an error of reasoning.
While this concession is undoubtedly frustrating, proclaiming others to be objectively wrong rarely accomplishes anything anyway. It’s not as though moral disagreements—or disagreements in people’s life choices—would go away if we adopted moral realism.
If it turned out that belief in moral anti-realism made people less moral or less inclined to follow the principles of effective altruism, this would pose a dilemma. Of course, whether a position is true is separate from whether it’s beneficial to promote. Moreover, I don’t actually expect anti-realism to lead to decreased moral motivation—not if it’s presented thoughtfully, at least. If the world’s leading philosophers got together tomorrow and announced that moral realism is wrong, what would matter most is what they write afterward.[13] If they write that this means all morality is nonsense, I could imagine that it would have some adverse consequences. By contrast, if they explained how this generally leaves moral arguments intact and gives us the autonomy to choose what sort of people we want to be and what we want to live for, probably not that much would change. (And the things that would change may well be for the better.)[14]
3. Irreducible normativity fails as a concept
Most things don’t change if we abandon normative realism, but some do. In particular, irreducible normativity has no place under anti-realism and no replacement. In this section, I argue that this is okay: irreducible normativity is not worth wanting because it cannot live up to the requirements we associate with it.
I understand the sentiment behind irreducible normativity, but I don’t understand it as a concept. Irreducible normativity looks like a mere intuition to me. For it to be a concept, I need to know something about what it attaches to, how it obtains reference. In the upcoming sections, I will sketch some ways this could work, and explain why they don’t strike me as compelling.
Consider, first, the option that we don’t know anything at all about how reference is obtained. To illustrate this, I will introduce the concept of super-reasons:
Super-reasons
Super-reasons are a made-up concept. They work as follows:
Consider the intuition that something ought to be done. Instead of treating this intuition as subjective “color coding” that our minds attach to (the thought of) specific actions, we stipulate that our intuitions are tracking a real, mind-independent property (anti-realists would point out this is an example of reification). This way, we get the concept of super-reasons: Super-reasons are speaker-independent properties that apply to some actions, making them things we ought to do. By stipulation, we know nothing more about super-reasons, i.e., we have no idea which exact actions are backed by super-reasons. For all we know, we may have a super-reason to clap our hands together three times every Friday.
Should we take super-reasons seriously?
Super-reasons are a weird concept. There’s a sense in which they are “hypothetical only.” Without knowing the conditions under which super-reasons apply, we could envision them as being attached to everything or nothing.
There’s an implied sense in which we are supposed to care about super-reasons. The intuition we attach to super-reasons, that something ought to be done, tends to elicit a desire to act accordingly. In this sense, there is, by definition, something about super-reasons that makes them feel relevant to what we want to do.
However, the way we defined super-reasons made clear that they could be attached to all kinds of actions, including arbitrary ones like clapping one’s hands together, or abhorrent ones such as eating babies. Because super-reasons may not have anything to do with the content our original normative intuitions were based on, we’d be wrong to think of them as related to the familiar normative categories. For instance, our folk concept “morality” has something to do with being kind, but whether or not super-reasons also include being nice to others is left wide open. Whatever super-reasons are about, based on how we defined them, we wouldn’t know.
The concept of super-reasons repurposes a label we’ve initially come to associate with object-level normative principles that resonated with us. These principles include wanting to have beliefs that can generate accurate predictions, or wanting to promote human flourishing. However, instead of staying attached to this familiar content, we removed only the label and transferred it to other things, to principles that may not resonate with us at all. Since we—presumably—care about the content instead of the label, we shouldn’t take super-reasons into account in our decision making. (Of course, based on how we defined super-reasons, we also wouldn’t know how to do that.)[15]
Is irreducible normativity about super-reasons?
Irreducible normativity is not about super-reasons. At least, I sense that most contemporary proponents of the concept would object to the comparison. They may object that saying, “Imagine that we have a reason to clap our hands together three times every Friday” is like saying, “Imagine that, to be a good friend, you need to like the color purple.” Just because it’s silly to imagine that liking the color purple is an attribute of a good friend doesn’t mean that “an attribute of a good friend” is a meaningless concept. Just because it’s silly to imagine that clapping one’s hands together three times every Friday is backed by an irreducible normative reason doesn’t mean that no actions are.
That said, the comparison to super-reasons doesn’t seem entirely unfounded. In his writings on metaethics, Parfit quotes criticism by the philosopher Stephen Darwall (Parfit, 2011b, p. 294):
[...] the resulting picture of rational motivation is an alien and unsatisfying one. It fails to make the desire to act for reasons intelligible as one that is central to us and not simply a superadded fascination with a non-natural metaphysical category.
Darwall’s wording suggests that he might interpret irreducible normativity in terms of super-reasons. For instance, what I called “repurposing a label” seems similar to what Darwall calls a “superadded fascination with a non-natural metaphysical category.”
Parfit replied to Darwall as follows:
If Darwall had my concept of a reason, he would not make such claims. When I believe that some fact gives me a decisive reason to do something, it is not unintelligible how I might want to act for this reason.
Question-begging examples
So how is irreducible normativity connected to our wanting to act in accordance with it? Unless we have some concrete idea about which principles are backed up by irreducible normativity, the concept is equivalent to super-reasons.
Looking at the examples Parfit and other normative realists use to motivate irreducible normativity, we see something interesting. Unlike the example I used to motivate super-reasons (“you may have a super-reason to clap your hands together three times every Friday”), philosophers tend to motivate irreducible normativity with principles that are as uncontroversial as possible. In Moral Realism: A Defense, Russ Shafer-Landau (2003, pos. 178) writes that “At least some moral principles are knowable via self-evidence.” Parfit used the self-evident example “we always have a reason to want to avoid being in agony.” Similarly, in Reasons and Persons (1984), Parfit argued that we have a reason to avoid “Future Tuesday Indifference”—a hypothetical silly disposition where a person generally cares about what happens to their future self except for things that happen on Tuesdays.
Explaining irreducible normativity with self-evident examples makes it apparent how the concept is connected to what we want to do. However, arguably it’s now too connected to that. Self-evident principles are principles that, by definition, (almost) everyone recognizes. (This may not mean that everyone will be motivated to act on them; for instance, amoral psychopaths may not have any intrinsic motivation to act on self-evident moral principles.) If irreducible normativity is meant to be relevantly different from (my conception of) normative anti-realism, then we have to show that irreducible normativity sometimes reaches beyond self-evident principles. Evidently, self-evident examples are fundamentally unsuited for illustrating how that could work.
Readers may object as follows. If we concede that we have an irreducible reason not to stick our hand into a blender without some compensatory gain (a “self-evident principle”), doesn't that open up the possibility that we also have such a reason to, say, not get an abortion (a “potentially controversial principle”)?
My answer is, “not really.” At least, it depends on what we mean by “irreducible reason.” If—as the example seems to indicate—we think that irreducible reasons can also attach themselves to principles that aren’t self-evident, the question becomes "Which of the not-self-evident principles does it attach itself to?" As long as we don’t know how to think about this, the concept remains under-defined. I may have certain ideas or connotations in my mind that I could use to figure out what principles (other than the self-evident ones) are also backed by irreducible normativity. However, because other people will have in mind subtly different ideas and connotations, those ideas and connotations cannot be the foundation of a speaker-independent concept.
The challenge for normative realists is to explain how irreducible reasons can go beyond self-evident principles and remain well-defined and speaker-independent at the same time.
Is (our knowledge of) irreducible normativity confined to self-evident principles?
One way to address this challenge is by restricting the practical scope of irreducible normativity. One could concede that insofar as irreducible normativity goes beyond self-evident principles, we are forever cognitively closed off to it.
On this view, irreducible reasons are technically different from self-evident principles, but we hope for them to have the same extension. Irreducible reasons would supervene on the self-evident principles, insofar as we are correctly predisposed to find the right principles self-evident. Accordingly, the way in which irreducible reasons differ from self-evident principles, on this view, is only that we allow for the possibility that we could be incorrectly predisposed to ascertain normative truths.[16]
Even though this view is undoubtedly moral realist in spirit, I’d say it makes crucial concessions to anti-realism. (Importantly, I don’t mean the type of “anti-realism” that people associate with the word “nihilism” and the sentiment “anything goes.” I mean the more fruitful version of anti-realism that I characterized in Section 2.)
Compared to the anti-realism I have outlined, a kind of moral realism according to which the only accessible moral truths are self-evident principles is pretty much redundant—at least in practice.[17] I’m more interested in normative realism versions that would actually change the way I think about normativity, i.e., how I reason about ethics, epistemology, or metaphilosophy. If the only implication of normative realism is “all else equal, murder is truly bad” or “all else equal, putting your hand into a blender is truly wrong,” my newfound understanding of that won’t change how I go about pursuing my priorities.
As an anti-realist, I don’t yet call self-evident principles “(probable) normative truths.” However, if I were to start calling them this, it would only make a semantic difference.
Do any prominent moral realists hold the view I just described? I’m not confident, but when I read Russ Shafer-Landau’s account of normativity, I found myself wondering whether my perceived disagreements with his position would go away if I translated my thinking into his philosophical framework.[18] Of course, with the general difficulty of understanding people who think in terms of radically different philosophical frameworks,[19] it’s hard to tell. It might well be that Shafer-Landau would allow no comparison between his view and my anti-realism.
As a side note (even though it doesn’t relate to irreducible normativity), I would “object” in a very similar way to the naturalist version of moral realism Sam Harris argued for in his book The Moral Landscape (Harris, 2010).[20]
Is there a speaker-independent normative reality?
I have argued that, if we assume that we don’t know anything about what irreducible reasons are attached to, the concept becomes vacuous—like super-reasons. I have also argued that if we limit our ability to comprehend irreducible normativity to principles that appear self-evident to us, the resulting concept doesn’t do enough to clearly go beyond moral anti-realism.
In this section, I want to introduce one last option to make irreducible normativity work as a concept.[21] I commonly associate[22] this option with moral naturalism, the view that normative facts are reducible to physical facts. However, as the SEP entry on moral non-naturalism notes, the distinction between naturalism and non-naturalism is complicated:
There may be as much philosophical controversy about how to distinguish naturalism from non-naturalism as there is about which view is correct. [...] Perhaps the most vexing problem for any general characterization of non-naturalism is the bewildering array of ways in which the distinction between natural and non-natural properties has been drawn.
Since I don’t understand this distinction well enough, I intend my arguments from here onwards to apply to both reducible (“naturalist”) and irreducible (“non-naturalist”) versions of speaker-independent normativity.
The last option to make normativity work as a concept is this:
To understand normativity, and to evaluate whether the concept has meaning, we need to look at the space of all possible considerations at the object-level. Normativity is meaningful if there is a single set of principles that sticks out in a sense relevant to us.
On this interpretation, when realist philosophers point toward examples such as “we always have reason to want to avoid being in agony,” they aren’t (just) pointing at the intuition that “some things ought to be done.” Neither are they interested only in what appears to us as self-evident. Instead, they are using a maximally uncontroversial example to get us looking in the right direction. By pointing out salient features of the normative reality, the intention (or hope?) is that with the help of these pointers, we can come to understand what makes some principles normative principles. Because (so goes the assumption) reality “allows for only one interpretation,” the hope is that we’ll learn to extrapolate from the most uncontroversial examples to the more difficult ones, until the entire normative reality is mapped out unambiguously.
If we think about how words obtain their meaning, it should be apparent that in order to defend this type of normative realism, one has to commit to a specific normative-ethical theory. If the claim is that normative reality sticks out at us like Mount Fuji on a clear summer day, we need to be able to describe enough of its primary features to be sure that what we’re seeing really is a mountain. If all we are seeing is some rocks (“self-evident principles”) floating in the clouds, it would be premature to assume that they must somehow be connected and form a full mountain.
Figure 1. Mount Fuji: a metaphor for normative reality. (Public domain)
The next two sections are dedicated to further backing up the claim that to defend this last and—in my view—most attractive version of normative realism, one has to commit to a specific normative theory.
Essences are always subjective
When we see a bunch of similar-looking, similarly-contextualized phenomena, our mind forms a category based on the “essence” (or “archetype”) of the concept in question. For instance, after seeing a bunch of dogs, children form the concept “dog.” There’s a sense in which that concept already came to include chihuahuas even before the children ever encountered a dog that small.
Based on our mind’s readiness to form essences, it’s tempting to assume that after having familiarized ourselves with a bunch of examples of (what we think of as) “speaker-independent normativity,” we know enough to apply this concept to never-before-encountered instances. However, this process (“distilling essences based on central examples”) cannot justify normative realism because it only ever generates subjective concepts.
Wittgenstein pointed out that for terms such as “game” or “language,” we may not be able to give a simple[23] verbal definition, but we can teach them through examples. He coined the term family resemblance (Wittgenstein, 1953) to explain why this works: Even though no characteristics may be present in all the examples, overlapping similarities show up in different combinations—just like members of a family have overlapping similarities in their appearances.
Wittgenstein’s entire point behind family resemblance was that the meaning of a concept is nothing beyond the examples that go into it. Sure, one can extrapolate from old examples to new, never-before-encountered ones. However, whether a never-before-encountered example matches the category in question will always depend on the initial learning process. Concepts built up through family resemblance are not speaker-independent. Whether we come to classify an unusual, never-before-encountered activity as a “game” depends on how it relates to the examples of games we are familiar with, and (presumably) on various weights in our brain’s concept-formation modules. There’s no further fact that determines what’s a game.
If two people learned their “game” concepts through examples that differed along some relevant dimensions, they wouldn’t end up with the same concept. (E.g., consider one child being taught implicitly that games always have a single winner whereas the other child is taught that all the players can win or lose together in many games.)
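To make this concrete, here is a toy sketch of how concepts learned from examples end up learner-dependent. It is my own illustration, not Wittgenstein's: the features, the numbers, and the nearest-example rule are all made-up stand-ins for whatever our brains actually do when forming concepts:

```python
# Toy model of "family resemblance" concept formation: two learners
# acquire a "game" concept from different examples, then disagree
# about a never-before-encountered cooperative activity.
# All features and thresholds are invented for illustration.

def distance(a: dict, b: dict) -> int:
    """Count the features on which two activities differ."""
    keys = set(a) | set(b)
    return sum(a.get(k, 0) != b.get(k, 0) for k in keys)

def is_game(candidate: dict, learned_examples: list) -> bool:
    """Classify by similarity to the closest learned example."""
    return min(distance(candidate, ex) for ex in learned_examples) <= 1

# Learner A only ever saw competitive games (a single winner).
learner_a = [
    {"rules": 1, "single_winner": 1, "fun": 1},  # e.g., chess
    {"rules": 1, "single_winner": 1, "fun": 1},  # e.g., tennis
]

# Learner B also saw a cooperative game (players win or lose together).
learner_b = learner_a + [
    {"rules": 1, "single_winner": 0, "fun": 1, "cooperative": 1},  # e.g., charades
]

# A new cooperative activity neither learner has encountered before:
candidate = {"rules": 1, "single_winner": 0, "fun": 1, "cooperative": 1}

print(is_game(candidate, learner_a))  # False: too dissimilar to A's examples
print(is_game(candidate, learner_b))  # True: matches one of B's examples
```

There is no further fact about which learner classified "correctly"; the verdict depends entirely on the examples that went into each concept.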
Crisp reference only works in combination with a compelling theory
There are instances where just a handful of examples or carefully selected “pointers” can convey all the meaning needed for someone to understand a far-reaching and well-specified concept. I will give two cases where this seems to work (at least superficially) to point out how—absent a compelling object-level theory—we cannot say the same about “normativity.”
Example 1 (“H2O”)
Imagine a person who forgot everything they ever knew except for a rudimentary understanding of chemistry. If we gave this person the concept “H2O” (perhaps explained with its chemical formula), this short concept contains enough information to refer successfully to ice cubes, snowflakes, around 60% of the human body, water vapor in the air, and a chemical byproduct of combustion. At the same time, “H2O” correctly fails to refer to similar-seeming things such as dry ice, the “seas” of Titan, or the magnesium compounds in a glass of spring water. Once the person understands “H2O,” they have the requirements needed to sort a wide range of real-world phenomena into “water” and “not water.”
Example 2 (“mathematics”)
Consider an extremely intelligent person who has never encountered mathematics, but is familiar with the basic notion of a formal system. With just a few pointers—syntax and a few axioms—we could give this person the requirements needed to eventually understand all of mathematics (at least for specific axiomatizations of mathematics), opening up an entire realm of well-specified and often useful abstract relations.
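For concreteness, here is a sample of what such pointers might look like. The specific choice below, the first-order Peano axioms for arithmetic, is my own illustration; the thought experiment doesn't depend on this particular axiomatization:

```latex
% A few axioms that, combined with the syntax and inference rules of
% first-order logic, pin down a vast body of arithmetic.
\begin{align*}
& \forall x \; \neg(S(x) = 0)                               && \text{zero is no successor} \\
& \forall x \, \forall y \; (S(x) = S(y) \rightarrow x = y) && \text{successor is injective} \\
& \forall x \; (x + 0 = x)                                  && \text{addition: base case} \\
& \forall x \, \forall y \; (x + S(y) = S(x + y))           && \text{addition: recursion} \\
& \forall x \; (x \cdot 0 = 0)                              && \text{multiplication: base case} \\
& \forall x \, \forall y \; (x \cdot S(y) = x \cdot y + x)  && \text{multiplication: recursion} \\
& \bigl(\varphi(0) \land \forall x\,(\varphi(x) \rightarrow \varphi(S(x)))\bigr)
  \rightarrow \forall x \, \varphi(x)                       && \text{induction schema}
\end{align*}
```

A handful of lines like these suffice, in principle, for deriving an enormous body of arithmetic; that is the sense in which a few well-chosen pointers can pin down an entire domain.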
These thought experiments illustrate that under the right circumstances, it’s possible for just a few carefully selected examples to successfully pinpoint fruitful and well-specified concepts in their entirety.
There are two crucial differences to the situation we find ourselves in concerning normativity:
- We don’t have the philosophical equivalent of a background understanding of chemistry or formal systems. Without understanding the basic rules of a domain, our words cannot refer to (aspects of) reality crisply. Normative disagreements are particularly challenging to address precisely because they don’t just concern object-level answers, but also the methods needed to derive them. Metaphilosophy, the question of how to do philosophy, is itself subject to the disagreement between realists and anti-realists.
- With the concept of normativity, we can’t know that reference will work successfully. In the above examples, which we looked at from the outside, we were in a position to tell that the chemical formula for water or the syntax and axioms for mathematics would open up novel grounds. However, the hypothetical people in the above thought experiments could not have known this with their limited perspectives. They could not have told the difference if they had instead received randomly drawn chemical formulas or random syntax and axioms.
The above analysis illustrates that to be justified in thinking that a concept successfully refers to a well-specified domain of things outside our head, we need to understand the concept well enough to know its way of referring. If we don’t know how it refers, it doesn’t do so at all.[24] Words can’t have meaning independently of how we use them. We can’t know if there’s a mountain if we don’t already have the means to locate and describe it.
Summary, conclusion, and open questions
To maintain that normativity—reducible or not—is knowable at least in theory, and to separate it from merely subjective reasons, we have to be able to make direct claims about the structure of normative reality, explaining how the concept unambiguously targets salient features in the space of possible considerations. It is only in this way that the ambitious concept of normativity could attain successful reference. As I have shown in previous sections, absent such an account, we are dealing with a concept that is under-defined, meaningless, or forever unknowable.[25]
Normative realism is the view that questions about normativity have a speaker-independent solution. It is commonly argued that normative realism enables us to be morally uncertain (whereas this is less clear for anti-realism).[26] I disagree. In this piece, I have put forward the argument that to be a normative realist worthy of the term, one has to provide that solution already. Or, at least, one has to provide the requirements needed to unambiguously derive it—just like the syntax and axiomatization of mathematics specify what constitutes the solutions to mathematical questions.
Curiously enough, I believe that if the arguments in this post are sound, moral realism worthy of the name is incompatible with open-ended notions of moral uncertainty.[27]
Some moral realist philosophers have indeed advanced confidently endorsed proposals for solutions to normative ethics. I’d say this includes Derek Parfit.[28] Besides Parfit, I’m also thinking about accounts that aim to ground moral realism in the “intrinsic value” of some conscious experiences, and the view that we can quantify it and turn morality into a calculus (e.g., de Lazari-Radek & Singer, 2014).
I titled this post “Against Irreducible Normativity.” However, I believe that I have not yet refuted all versions of irreducible normativity. Despite the similarity Parfit’s ethical views share with moral naturalism, Parfit was a proponent of irreducible normativity. Judging by his “climbing the same mountain” analogy, it seems plausible to me that his account of moral realism escapes the main force of my criticism thus far. In addition, I’m generally unsure whether I understand the distinction between naturalism and non-naturalism well enough to claim that there are no other conceptions of irreducible normativity that can withstand my arguments.
For these reasons, I have to end this article with a somewhat unsatisfactory conclusion. If my arguments here are sound, then either irreducible normativity is not worth wanting (i.e., it’s either vacuous, trivial, or meaningless), or it stands and falls together with the case for moral naturalism.[29] I will argue against moral naturalism in upcoming posts.
Appendix: Morality versus other types of normativity
Moral realists (e.g., Shafer-Landau, 2003; Cuneo, 2007) have advanced so-called “partners-in-crime arguments.” These arguments say that if we are realists about at least one normative domain, there’s no good reason why we shouldn’t also adopt moral realism.
In the section “Normative anti-realism is existentially satisfying (at least it can be),” I focused primarily on examples from ethics. However, I think the same arguments apply analogously to all the versions of irreducible normativity.
For versions of irreducible normativity where we don’t or can’t know to which principles normativity is attached, the partners-in-crime arguments cut both ways. All subtypes of normativity fail for the same reasons.
By contrast, for versions of normativity that depend on claims about a normative domain’s structure, the partners-in-crime arguments don’t even apply. After all, just because philosophers might—hypothetically, under idealized circumstances—agree on the answers to all (e.g.) decision-theoretic questions doesn’t mean that they would automatically also find agreement on moral questions.[30] On this interpretation of realism, all domains have to be evaluated separately.
Acknowledgments
Many people helped me with inputs to this post, but I want to especially thank Sofia Davis-Fogel for her help with making my writing more intelligible.
My work on this post was funded by the Center on Long-Term Risk.
References
Chalmers, D. (2011). Verbal Disputes. Philosophical Review, 120(4):515–566.
Cuneo, T. (2007). The Normative Web: An Argument for Moral Realism. Oxford: Oxford University Press.
De Lazari-Radek, K. & P. Singer. (2014). The Point of View of the Universe. Oxford: Oxford University Press.
Dennett, D. C. (2008). Some Observations on the Psychology of Thinking about Free Will. In J. Baer, J. C. Kaufman, & R. F. Baumeister (eds.), Are we free? Psychology and free will. Oxford: Oxford University Press. 248–259.
Harris, S. (2010). The Moral Landscape: How Science Can Determine Human Values. New York: Free Press.
Parfit, D. (1984). Reasons and Persons. Oxford: Oxford University Press.
Parfit, D. (2011a). On What Matters, Volume I. Oxford: Oxford University Press.
Parfit, D. (2011b). On What Matters, Volume II. Oxford: Oxford University Press.
Plantinga, A. (1993). Warrant and Proper Function. Oxford: Oxford University Press.
Ruse, M. (2010). The Biological Sciences Can Act as a Ground for Ethics. In Ayala, F. and R. Harp (eds.), Contemporary Debates in Philosophy of Biology. New York: Wiley-Blackwell. 297–315.
Schroeder, M. (2011). Derek Parfit: On What Matters, Volumes 1 and 2. Notre Dame Philosophical Reviews. <ndpr.nd.edu/news/on-what-matters-volumes-1-and-2/>.
Shafer-Landau, R. (2003). Moral Realism: A Defense [Kindle version]. Oxford: Oxford University Press. Retrieved from Amazon.com.
Singer, P. (1972). Famine, Affluence and Morality. Philosophy and Public Affairs, 1(3):229–243.
Singer, P. (1973). The Triviality of the Debate over "Is-Ought" and the Definition of "Moral". American Philosophical Quarterly, 10(1):51–56.
Singer, P. (2009 [1975]). Animal Liberation. New York: Harper Perennial Modern Classics.
Street, S. (2006). A Darwinian Dilemma for Realist Theories of Value. Philosophical Studies, 127(1):109–166.
Williams, B. (1973). A Critique of Utilitarianism. In J.J.C. Smart and B. Williams (eds.), Utilitarianism: For and Against. Cambridge: Cambridge University Press.
Williams, B. (1979). Internal and External Reasons. In R. Harrison (ed.), Rational Action. Cambridge: Cambridge University Press. 101–113.
Wittgenstein, L. (2010 [1953]). Philosophische Untersuchungen. Frankfurt am Main: Suhrkamp.
Yetter-Chappell, R. (2017). Knowing What Matters. In P. Singer (ed.), Does Anything Really Matter? Parfit on Objectivity. Oxford: Oxford University Press. 149–167.
For instance, epistemology, decision theory, science (realism about the correctness of scientific explanations or theories), metaphilosophy (realism about the proper way to do philosophy), etc. ↩︎
The more common terminology is “internal reasons” or “instrumental reasons.” See Williams (1979). ↩︎
In moments of agony, we, by definition, wish to be free from suffering. Not wanting to avoid being in agony is not quite motivationally impossible (especially because one can steer toward sources of agony during times when one isn’t yet subject to them). Still, it represents a state where one part of one’s motivational system conflicts with another part. ↩︎
Moral realists might argue that this argument is weaker against versions of moral realism built around intrinsic (dis)value tied directly to positively or negatively valenced states of conscious experience. I will argue against these versions of moral realism in a future post. ↩︎
Alvin Plantinga’s evolutionary argument against naturalism (Plantinga, 1993, ch.12) fails for the same reasons. Plantinga, a theist philosopher, advanced an evolutionary debunking argument against the coherence of the reductionist worldview held by atheists. As noted in the linked Wikipedia article, some components of Plantinga’s original argument—particularly related to his understanding of evolution—appear to have been of dubious quality. However, it seems that a steelmanned version of Plantinga’s argument would work analogously to Street’s argument. Instead of calling into question our ability to grasp moral truths, Plantinga’s argument attacks our ability to understand empirical truths. The best reply to this type of debunking argument is to drop the idea that “truth” is a bedrock concept. If we subscribe to the conception of truth popular on LessWrong (i.e., “truth” as a naturalist and reducible concept), the evolutionary origin of our belief-forming mechanisms poses no threat. According to this conception of truth, having a true belief means nothing more than that there is an in-theory-observable correspondence between our beliefs (“map”) and the world (“territory”). It’s no mystery then that evolution would have equipped us with the ability to form such territory-corresponding beliefs: An organism is usually better off meeting its goals or drives if it can develop accurate beliefs about the world. (Of course, in many instances, it was beneficial to have self-serving but false beliefs, or to bet on crude heuristics even though they may lead us astray under changed circumstances—that’s why we have biases.) ↩︎
Richard Yetter-Chappell provided a particularly interesting reply. In his essay “Knowing What Matters” (Yetter-Chappell, 2017) he argued that to circumvent the evolutionary debunking arguments, we need to assume that we can only acquire moral knowledge if we are equipped with the right psychological dispositions. He further argued that we have to accept that there is no way to reliably tell whether our dispositions are correct. One might be inclined to reject this position outright because it sounds as though having the right dispositions is based on sheer luck.
However, Yetter-Chappell replies that while luck is indeed involved, this doesn’t mean that our chances of being correctly predisposed are tiny. Unlike with lottery numbers, which all have the same prior probability of winning, he argues that we should not assign the same prior probability to every possible psychology. From our present vantage point, introspecting on how our minds are equipped to accept induction and to enable (some of) us to hold nuanced views and change our minds in reaction to new evidence, we can safely assume that things could look a lot worse. Compared to most conceivable psychologies, ours seems more likely than average to be correct.
In many ways, Yetter-Chappell’s arguments remind me of arguments against modest epistemology. People who excel at autonomous thinking may occasionally feel that they are justified in disagreeing confidently with a particular expert, even in the absence of an outside-view justification for this confidence. On inside-view grounds, their beliefs will appear to be more nuanced and more coherent than (their model of) the expert’s inside view. Of course, many reasoners who reject modest epistemology will crash and burn. Still, those who are actually skilled at inside-view reasoning will often get things right, for the right reasons. As someone who agrees with the arguments against modest epistemology, I can’t help but find Yetter-Chappell’s argument intriguing. At the same time, I don’t think his reply takes all the force out of the evolutionary debunking arguments. Sure, Yetter-Chappell correctly notes that we shouldn’t feel threatened by the possibility of aliens who don’t accept induction or are otherwise incapable of forming nuanced thoughts. But what about the conceivability of aliens who reason about everything the same way as we do, except that they start from different ethical premises? Reasoning about morality not only requires generally-useful reasoning skills, but also some fundamental intuitions about what matters—those intuitions could have evolved in different ways based on contingent directions of the selection pressures in one's ancestral environment.
(Admittedly, this somewhat merges the evolutionary debunking argument with the argument from widespread moral disagreement. In future posts, I will argue that we don’t need to envision aliens. Even among humans, we can observe unbridgeable disagreements between reasoners whose thinking is—as far as we can tell—highly nuanced and versatile.) ↩︎
As Ben Garfinkel describes it in his LessWrong article Realism and Rationality:
"[...] if anti-realism is true, then it can’t also be true that we should believe that anti-realism is true. Belief in anti-realism seems to undermine itself." ↩︎It can change something indirectly. E.g., an anti-realist is more likely to give up parsimony to save some moral intuition that they really like. ↩︎
People with strong views about norms in the context of interpersonal relationships have to do this all the time or risk becoming impossible to work or live with. For example, I may have a strong intuition that it’s objectively wrong for my co-workers to do things in their preferred way, as opposed to my preferred way. Still, I have a choice: I can treat this normatively valenced intuition as a personal belief and (try to) agree to disagree, or I can reify the intuition and treat it as my judgment about a speaker-independent fact. In the latter case, I can’t agree to disagree (at least not without some degree of condescension). ↩︎
I plan to address this option in the upcoming posts “4. Why the Wager for Moral Realism Fails” and “5. Metaethical Fanaticism (Dialogue).” ↩︎
As Williams emphasizes in his critique, he doesn’t necessarily (or primarily) disagree with the conclusions of utilitarianism, but with the way they are derived. In particular, he does not like the idea that utilitarian considerations are the only (or the primary) moral considerations for a person to consider. ↩︎
If this sentiment doesn’t quite resonate with some readers, but they would like to read more about it, I recommend two blog posts by Nate Soares: "On caring" and "Altruistic motivations". ↩︎
For example, I was inspired by the essay “Some observations on the psychology of thinking about free will” (Dennett, 2008). Dennett’s paper might be interesting because of the parallelism between anti-realism about metaethics and compatibilism about free will (both of them arguably being more palatable than one may at first think). ↩︎
For instance, moral uncertainty works differently under normative anti-realism. I consider this to be a benefit instead of a drawback because the option space for moral anti-realists is more nuanced. There are strong arguments in favor of valuing reflection, but also arguments to trust at least some of one’s object-level normative intuitions more than one’s instincts on how to set up a safe reflection process. Peer disagreement also functions differently under normative anti-realism. While realists should only consider someone’s philosophical expertise before deciding how much to update on the other person’s views, normative anti-realists would do well to also assess that person’s most fundamental intuitions to see whether these are sufficiently compatible with theirs to warrant updating. ↩︎
For a related discussion, see the thought experiments in Joe Carlsmith's post The ignorance of normative realism bot. In particular, the thought experiment most analogous to “super-reasons” is the one at the start of section III, “Does the frosting exist?” ↩︎
“Incorrectly predisposed” is an under-defined term in this context, which means that this version of irreducible normativity is not fully specified either. ↩︎
Arguably, the described notion of irreducible normativity still differs from my anti-realism in an important sense. At least in theory, irreducible normativity postulates a single correct way to “fill in the gaps,” once we go beyond self-evident principles. However, on close inspection, that seems to be little more than a trick of words. Only very few philosophical principles are universally considered self-evident. Everything else would remain under-determined as far as our attainable knowledge of normativity is concerned. If we are forever closed off to the correct solution, it is of no use to even believe that there’s a “single correct way to fill in the blanks.” Since we can’t make any progress toward the correct solution, we’ll have to use other decision criteria. In practice, moral realists of this kind will have to resort to subjective reasons—just like the anti-realists. ↩︎
For instance, Shafer-Landau puts a lot of emphasis on self-evidence as the means to obtain moral knowledge (Shafer-Landau, 2003, pos. 1374):
"We would go far in responding to sceptical worries if we could defend the existence of self-evident moral principles. I believe that we can." As a moral particularist, he also doesn’t see the need to connect self-evident principles into an overarching and complete moral theory. About the content of moral principles (“moral laws”), he writes the following (pos.1494–1496):
"I think it likely that every one of them will incorporate a ceteris paribus clause, though establishing that point is extremely difficult and a matter of fundamental normative ethical theory, and so beyond our present [s]cope." ↩︎I’m echoing a sentiment expressed in Mark Schroeder’s (2011) review of Parfit’s On What Matters (1&2):
"[A]ccording to [Parfit], few people who have ever contributed to the literature on metaethics even have the conceptual resources required to disagree with him.
Bernard Williams, for example, turns out to lack the concept of a reason. John Mackie turns out to fail to have thoughts about morality, rather than to believe that nothing is wrong. Christine Korsgaard lacks normative concepts. Simon Blackburn and Allan Gibbard's disagreement with Parfit? That's superficial, too – they don't have normative concepts either. I'm flattered to report that I am among the few metaethicists whom Parfit credits as sharing the required conceptual repertoire to disagree with him." ↩︎
In particular, my critique of Harris’s version of moral realism is that it doesn’t quite seem worthy of the name (according to my terminology), because all he argues for in his otherwise excellent book is that some moral principles are self-evident (and also that morality is importantly connected to consciousness and well-being). From this, we cannot yet infer the existence of a uniquely correct and complete normative theory. ↩︎
Perhaps there are options I am missing. It is difficult to comprehensively argue against the merits of a concept that cannot be crisply defined. ↩︎
The reason I primarily associate this way of thinking about normativity with moral naturalism is best captured by this quote from Russ Shafer-Landau (2003, pos. 1260–1262):
"[N]aturalists might defend a particular model of moral theorizing (e.g. Pettit and Jackson's moral functionalism, or Harsanyi's utilitarianism, or Harman's relativism), and claim that the correct application of such a theory yields the surprising conclusion that, for every moral property, there is a non-gerrymandered descriptive property that is identical to it."
It is particularly the part about locating a “non-gerrymandered descriptive property” that captures how I’m thinking about it. ↩︎
Concepts such as “game” or “language” may seem impossible to define with words in practice, but they are definable in theory. We could construct satisfactory verbal definitions of these concepts if we could use disjunctions and as many words as there are atoms in our local galaxy cluster. Consider cat pictures: modern image-classification algorithms trained for cat detection have successfully quantified “catness.” The evaluation criteria behind these algorithms might be enormously demanding to spell out. Still, at its core, a fully trained image classifier does little more than transform image data into a large vector of features and evaluate whether those features score highly enough to pass a “catness” threshold. That metric was trained from human-labeled examples—arguably not too different from the way babies pick up language from their parents. (There’s also no single best way to extract features from examples—hyperparameter tuning illustrates how machine learning is a bit of an art, with a degree of arbitrariness.) ↩︎
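To make the classifier analogy concrete, here is a minimal sketch in Python. Every name, weight, and threshold is invented for illustration—this is a toy stand-in for a trained model, not a real cat detector—but it shows the structure the footnote describes: extracted features, a learned parameter vector, a score, and a threshold.

```python
import numpy as np

# Toy "catness" classifier. The weights and threshold below are invented
# for illustration; a real detector would learn millions of parameters
# from human-labeled examples.

rng = np.random.default_rng(0)

N_FEATURES = 512                        # hypothetical number of image features
weights = rng.normal(size=N_FEATURES)   # pretend these were "learned"
bias = 0.0
THRESHOLD = 0.5                         # score needed to count as a cat


def extract_features(image):
    """Stand-in for real feature extraction (edges, textures, shapes, ...)."""
    return image.flatten()[:N_FEATURES]


def catness(image):
    """Map an image to a scalar 'catness' score in (0, 1)."""
    z = weights @ extract_features(image) + bias
    return 1.0 / (1.0 + np.exp(-z))     # logistic squashing


def is_cat(image):
    return catness(image) >= THRESHOLD


example = rng.random((32, 32))          # a fake 32x32 "image"
print(is_cat(example))
```

The point of the footnote, in these terms: the “definition” of catness just is the enormous parameter vector—definable in principle, hopeless to spell out in words.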
There are limited exceptions, of course. For instance, if a trustworthy teacher guaranteed us that a given concept successfully refers to a speaker-independent property, we could start using it in meaningful ways even before fully understanding the concept’s meaning. However, in this example, it’s crucial that at least the trustworthy teacher thoroughly understands the concept. If said teacher were also deferring to someone else, and that person were to defer to others yet again, then the concept may turn out to be ill-grounded after all. ↩︎
One might argue that “meaningless” and “forever unknowable even in theory” sound like the same thing. Even if these two notions are somehow different, I’d say we can ignore things that are forever unknowable. ↩︎
See, e.g., this section in Ben Garfinkel’s LessWrong post “Realism and Rationality.” ↩︎
Normative realism remains compatible with narrower notions of uncertainty, such as uncertainty about what follows from axiomatic principles. Similarly, giving up on realism still allows the option of valuing further moral reflection. There are strong arguments for this on anti-realist grounds, though the specifics work differently from the way moral realists think about moral uncertainty (see also my comments in endnote 14). I plan to address moral reflection from an anti-realist perspective in a future post. ↩︎
Parfit considered it vital to his philosophical project to show that disagreement among ethicists is not as wide-reaching as it’s commonly described. He argued that Kantianism, consequentialism, and contractualism are three different ways of “climbing the same mountain” (Parfit, 2011a). I don’t know Parfit’s work deeply enough to know his thoughts on normative uncertainty. Still, I’d say there’s a plausible reading of his “climbing the same mountain” analogy, and the importance he attaches to it in the context of his life’s work, that stands in some tension with open-ended notions of moral uncertainty. I could imagine that Parfit was aware of this, and that this is precisely why he attached such importance to convergence arguments. Besides, while Parfit is sometimes considered a non-naturalist, his position seems atypical in that regard, since it seemingly makes some concessions to moral naturalism (in terms of the moral reality having some entanglement with our evaluative attitudes). ↩︎
As I outlined in my first post, I am particularly interested in versions of moral naturalism that, should they prove correct, would be highly relevant to people’s lives and life projects. I have tried to sum up the desiderata for this with the concept One Compelling Axiology. As I envision it, this would combine a specific, complete theory about what objectively is in someone’s interest, or is good or bad for them, with a specific, complete theory of what it means to do good for others from a kind of “impartial perspective.” ↩︎
I expect that many deep-seated disagreements in epistemology or decision theory won’t go away even under the perfect conditions for philosophical reflection. However, I’m not particularly attached to that claim. In the course of this sequence, I only want to argue for anti-realism about morality. ↩︎
Glacian @ 2020-06-11T02:59 (+28)
In the section on evolutionary debunking arguments, you state morality is an adaptation, and later suggest that “The intuition that morality is objective (‘speaker-independent’) is a part of this adaptation.”
I believe you could present a clearer explanation of what you mean by (1) the claim that morality is an adaptation and (2) the intuition that morality is objective. Depending on what you mean, neither claim may be supported by the evidence.
(1) Moral cognition may not have evolved
With respect to the claim that morality evolved, Machery & Mallon (2010) provide at least three interpretations of what this could mean:
(a) Some components of moral psychology evolved
(b) normative cognition evolved
(c) moral cognition, “understood as a special sort of cognition” (p. 4), evolved.
They provide what strikes me as a fairly persuasive case that (a) is uncontroversially true, (b) is probably true, but (c) isn’t well-supported by available data.
Only (c) would easily support EDAs, while (b) may not and whether (a) could support EDAs would presumably depend on the details.
In subsequent papers, Machery (2018) and Stich (2018) have developed on this and related criticisms, arguing that morality is a culturally-contingent phenomenon and that there is no principled distinction between moral and nonmoral norms, respectively (see also Sinnott-Armstrong & Wheatley, 2012).
Given that you don’t take EDAs to be decisive, and given that (a) above may be sufficient to support relevant forms of EDAs, these concerns may not present much of an obstacle to your overall argument for antirealism, but I wanted to ensure you were aware that EDAs may be based on empirical claims that have yet to be uncontroversially established by available data.
(2) People may not be intuitive objectivists
A growing number of studies have attempted to evaluate the metaethical stances of nonphilosophers (e.g. Beebe & Sackris, 2016; Beebe et al., 2015; Collier-Spruel et al., 2019; Goodwin & Darley, 2008; Nichols, 2004; Wright, Grandjean, & McWhite, 2013; Yilmaz & Bahçekapili, 2015; Zijlstra, 2019).
Across a range of different paradigms and ways of asking, researchers have found evidence of both interpersonal and intrapersonal variation in metaethical stances. There are stable differences between participants in the degree to which they regard morality as objective, and individual participants vary in their metaethical stance across different moral issues, treating some as objective and others as not. Some findings also indicate that people often endorse non-cognitivism (Davis, forthcoming).
In other words, some people tend to be “more objectivist” and others tend to be “more relativist” overall regarding different moral issues. Yet at the same time, most people will treat some moral issues (e.g. murder) as “objective” and others (e.g. abortion) as “relative.”
Although there are significant methodological problems with this research (Bush & Moss, 2020; Pölzler, 2018), overall there is not strong evidence of a species-typical tendency to regard moral norms as uniformly objective.
There is a bit of cross-cultural research on this (see Beebe et al., 2015), but nothing very impressive or compelling. It is possible that people evolved to treat morality as objective but that WEIRD populations exhibit an unusual tendency towards non-objectivist views of morality. Or, more generally, it could be that we are predisposed towards objectivism but that this predisposition can be culturally overridden.
It’s also possible that these studies are reliably failing to measure metaethical beliefs accurately.
Even so, what evidence we do have does not vindicate the assumption that ordinary people are objectivists. In fact, some researchers working on the topic believe folk nonobjectivism makes better sense of the data (see Beebe, forthcoming).
References
Beebe, J.R. (forthcoming). The empirical case for folk indexical moral relativism. In Oxford studies in experimental philosophy (vol. 4).
Beebe, J. R., & Sackris, D. (2016). Moral objectivism across the lifespan. Philosophical Psychology, 29(6), 912–929.
Beebe, J. R., Qiaoan, R., Wysocki, T., et al. (2015). Moral objectivism in cross-cultural perspective. Journal of Cognition and Culture, 15(3–4), 386–401.
Bush, L. S., & Moss, D. (2020). Misunderstanding metaethics: Difficulties measuring folk objectivism and relativism. Diametros, 1-16. https://doi.org/10.33392/diam.1495
Collier-Spruel, L., Hawkins, A., Jayawickreme, E., Fleeson, W., & Furr, R. M. (2019). Relativism or tolerance? Defining, assessing, connecting, and distinguishing two moral personality features with prominent roles in modern societies. Journal of Personality, 87(6), 1170-1188.
Davis, T. (forthcoming). Beyond objectivism: New methods for studying metaethical intuitions. Philosophical Psychology.
Goodwin, G. P., & Darley, J. M. (2008). The psychology of meta-ethics: Exploring objectivism. Cognition, 106(3), 1339-1366.
Machery, E. (2018). A Historical Invention. In K. Gray & J. Graham (Eds.), Atlas of Moral Psychology (pp. 259-265). Guilford Press.
Machery, E., & Mallon, R. (2010). Evolution of morality. In J. M. Doris et al. (Eds.), The moral psychology handbook (pp. 3–47). New York, NY: Oxford University Press.
Nichols, S. (2004). After objectivity: An empirical study of moral judgment. Philosophical Psychology, 17(1), 3-26.
Pölzler, T. (2018). How to measure moral realism. Review of Philosophy and Psychology, 9(3), 647-670.
Sinnott-Armstrong, W., & Wheatley, T. (2012). The disunity of morality and why it matters to philosophy. The Monist, 95(3), 355-377.
Stich, S. (2018). The moral domain. In K. Gray & J. Graham (Eds.), Atlas of Moral Psychology (pp. 547-555). Guilford Press.
Wright, J. C., Grandjean, P. T., & McWhite, C. B. (2013). The meta-ethical grounding of our moral beliefs: Evidence for meta-ethical pluralism. Philosophical Psychology, 26(3), 336-361.
Yilmaz, O., & Bahçekapili, H. G. (2015). Without God, everything is permitted? The reciprocal influence of religious and meta-ethical beliefs. Journal of Experimental Social Psychology, 58, 95-100.
Zijlstra, L. (2019). Folk moral objectivism and its measurement. Journal of Experimental Social Psychology, 84, 103807.
Lukas_Gloor @ 2020-06-12T14:15 (+6)
Thanks for this comment! This type of empirical metaethics research is quite new to me, and it sounds really fascinating.
(1) Moral cognition may not have evolved
With respect to the claim that morality evolved, Machery & Mallon (2010) provide at least three interpretations of what this could mean:
(a) Some components of moral psychology evolved
(b) normative cognition evolved
(c) moral cognition, “understood as a special sort of cognition” (p. 4), evolved.
They provide what strikes me as a fairly persuasive case that (a) is uncontroversially true, (b) is probably true, but (c) isn’t well-supported by available data.
Only (c) would easily support EDAs, while (b) may not and whether (a) could support EDAs would presumably depend on the details.
In subsequent papers, Machery (2018) and Stich (2018) have developed on this and related criticisms, arguing that morality is a culturally-contingent phenomenon and that there is no principled distinction between moral and nonmoral norms, respectively (see also Sinnott-Armstrong & Wheatley, 2012).
You say that only (c) would easily support EDAs. Is this because of worries that EDAs would be too strong if they also applied against normative cognition in general? If yes, I think this point might be (indirectly) covered by my thoughts in footnote 5. I would argue that EDAs go through for all domains of irreducible normativity, not just ethics. But as I said, I haven't given this much thought, so I might be missing why (c) is needed for EDAs against moral cognition to go through. I have bookmarked the paper you cited and will investigate why the authors think this. (Edit: Not sure I'll be able to easily access the text, though.)
Glacian @ 2020-06-13T15:39 (+19)
Is this because of worries that EDAs would be too strong if they also applied against normative cognition in general?
That would be one of the worries. If (c) distinctly moral cognition evolved, EDAs apply straightforwardly. If (b) normative cognition evolved, then there’d be a serious worry that they apply to normative cognition in general, and then you’d need to bring in the sorts of reasons you address in the footnote. If (a) processes involved in moral cognition evolved, then similar worries to (b) may arise insofar as the psychological systems involved in moral cognition are also involved in relevant nonmoral domains (e.g. epistemic norms).
What complicates matters if (a) is true and (c) is not is that we cannot present a uniform debunking argument against distinctively moral cognition by simply noting that moral cognition evolved; we’d have to look at the distinct evolutionary history of each of the processes involved in moral cognition.
Note, for instance, that if it is not the case that we have an evolved tendency to regard moral facts as distinctively objective, then EDAs that turn on this hypothesis will be based on a mistaken presupposition about the etiology of realist intuitions.
That is, you could be incorrect about the empirical facts when you agree with Ruse that “our intuitions in favor of moral realism evolved for reasons that have no connection to whether the position is true,” simply because it could be untrue that our intuitions in favor of moral realism evolved.
In short, if moral judgments are the output of more general systems for reasoning, prediction, etc., it’s unclear if or how EDAs would apply to these systems. For instance, it may be that realist intuitions are not the output of a dedicated psychological process but instead a result of general inferential processes that are not as straightforwardly subject to the concerns raised by standard EDAs.
FitzPatrick (2015) has raised some objections to EDAs that could readily draw on the status of empirical facts surrounding the evolution of morality. FitzPatrick claims that:
“[...] we don’t need natural selection to have given us cognitive capacities designed specifically to track a certain class of truths, on the model of perceptual adaptations, in order to be in a position now to track those truths non-accidentally and reliably, and to be warranted in our beliefs. Nor do we even need natural selection to have given us, as an incidental by-product of some unrelated adaptation, a ready-made, specialized capacity that happens to be attuned to the truths in question. Such a thing would indeed be as unlikely as natural selection’s coughing up the human eye as a fluke by-product of some unrelated adaptation. But again we don’t need any such thing. It’s enough if natural selection has given us general cognitive capacities that we can now develop and deploy in rich cultural contexts, with training in relevant methodologies, so as to arrive at justified and accurate beliefs in that domain.” (pp. 886-887)
The points raised here are less of a problem for the normative antirealist. But they do raise concerns about the specific etiology of our intuitions about moral realism and any other kind of realism. At the very least, I’d caution against the presumption that a given tendency to think about morality in a certain way has a distinctive evolutionary origin: you cannot dismiss realist intuitions on the grounds that they evolved if they didn’t evolve.
Inadequate consideration of the details of the evolution of normative and moral thought could misleadingly appear to close the door on some moves that are available to the moral antirealist. For instance, the evolutionary details will matter for anyone who wants to maintain skepticism about moral realism while still endorsing realism about other domains (e.g. epistemic norms). My colleagues and I discuss some of these possibilities in Millhouse et al. (2016), though I can’t speak for my coauthors and my comments here may conflict with what they think.
References
FitzPatrick, W. J. (2015). Debunking evolutionary debunking of ethical realism. Philosophical Studies, 172(4), 883-904.
Millhouse, T., Bush, L. S., & Moss, D. (2016). The containment problem and the evolutionary debunking of morality. In T. K. Shackelford & R. D. Hansen (Eds.), The evolution of morality (pp. 113-135). Springer, Cham.
Aaron Gertler @ 2020-06-09T23:14 (+12)
I'd recommend labeling your titles using the name of this series of posts, rather than only with numbers. For example:
Moral anti-realism #3: Against Irreducible Normativity
Starting titles with a numeral looks a bit wonky whenever the Forum has a list of posts displayed.
Also, I'm really happy to see you keep releasing these posts! I look forward to when we release the public version of our sequencing feature so that they can be saved in that format.
Lukas_Gloor @ 2020-06-11T10:22 (+10)
That makes sense! I'll try to change the titles tomorrow (I hope I won't make a mess out of it:)).
SammyDMartin @ 2020-07-23T15:35 (+11)
This is an interesting post, and I have a couple of things to say in response. I'm copying over the part of my shortform that deals with this:
Normative Realism by degrees
Further to the whole question of Normative / moral realism, there is this post on Moral Anti-Realism. While I don't really agree with it, I do recommend reading it - one thing that it convinced me of is that there is a close connection between your particular normative ethical theory and moral realism. If you claim to be a moral realist but don't make ethical claims beyond 'self-evident' ones like pain is bad, given the background implausibility of making such a claim about mind-independent facts, you don't have enough 'material to work with' for your theory to plausibly refer to anything. The Moral Anti-Realism post presents this dilemma for the moral realist:
There are instances where just a handful of examples or carefully selected “pointers” can convey all the meaning needed for someone to understand a far-reaching and well-specified concept. I will give two cases where this seems to work (at least superficially) to point out how—absent a compelling object-level theory—we cannot say the same about “normativity.”
...these thought experiments illustrate that under the right circumstances, it’s possible for just a few carefully selected examples to successfully pinpoint fruitful and well-specified concepts in their entirety. We don’t have the philosophical equivalent of a background understanding of chemistry or formal systems... To maintain that normativity—reducible or not—is knowable at least in theory, and to separate it from merely subjective reasons, we have to be able to make direct claims about the structure of normative reality, explaining how the concept unambiguously targets salient features in the space of possible considerations. It is only in this way that the ambitious concept of normativity could attain successful reference. As I have shown in previous sections, absent such an account, we are dealing with a concept that is under-defined, meaningless, or forever unknowable.
The challenge for normative realists is to explain how irreducible reasons can go beyond self-evident principles and remain well-defined and speaker-independent at the same time.
To a large degree, I agree with this claim - I think that many moral realists do as well. Convergence-type arguments often appear in more recent metaethics (Hare and Parfit are in those previous lists) - so this may already have been recognised. The post discusses such a response to antirealism at the end:
I titled this post “Against Irreducible Normativity.” However, I believe that I have not yet refuted all versions of irreducible normativity. Despite the similarity Parfit’s ethical views share with moral naturalism, Parfit was a proponent of irreducible normativity. Judging by his “climbing the same mountain” analogy, it seems plausible to me that his account of moral realism escapes the main force of my criticism thus far.
But there's one point I want to make which is in disagreement with that post. I agree that how much you can concretely say about your supposed mind-independent domain of facts affects how plausible its existence should seem, and even how coherent the concept is, but I think that this can come by degrees. This should not be surprising - we've known since Quine and Kripke that you can have evidential considerations for/against and degrees of uncertainty about a priori questions. The correct method in such a situation is Bayesian - tally the plausibility points for and against admitting the new thing into your ontology. This can work even if we don't have an entirely coherent understanding of normative facts, as long as it is coherent enough.
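As a toy illustration of this Bayesian tallying, here is a short sketch in odds form. The hypotheses, considerations, and likelihood ratios are all invented for the example (nothing here argues for either verdict); the point is only that "plausibility points" can be made precise as multiplicative likelihood ratios.

```python
# Toy odds-form Bayesian update: "tallying plausibility points" for and
# against admitting normative facts into one's ontology. All likelihood
# ratios below are invented for illustration.

prior_odds = 1.0  # realism : anti-realism at 1:1 to start

# Each consideration contributes a likelihood ratio
# P(observation | realism) / P(observation | anti-realism).
considerations = {
    "partial convergence of ethical theories": 2.0,            # favors realism
    "metaphysical queerness of mind-independent norms": 0.25,  # favors anti-realism
    "convergence in epistemology and decision theory": 3.0,    # favors realism
}

posterior_odds = prior_odds
for name, ratio in considerations.items():
    posterior_odds *= ratio
    print(f"{name}: x{ratio} -> odds now {posterior_odds:.2f}")

posterior_prob = posterior_odds / (1.0 + posterior_odds)
print(f"P(realism | evidence) = {posterior_prob:.2f}")  # 0.60 with these made-up numbers
```

On this picture, an incompletely coherent concept simply contributes weaker likelihood ratios; it doesn't force the probability to zero.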
Suppose you're an Ancient Egyptian who knows a few practical methods for trigonometry and surveying, doesn't know anything about formal systems or proofs, and someone asks you if there are 'mathematical facts'. You would say something like "I'm not totally sure what this 'maths' thing consists of, but it seems at least plausible that there are some underlying reasons why we keep hitting on the same answers". You'd be less confident than a modern mathematician, but you could still give a justification for the claim that there are right and wrong answers to mathematical claims. I think that the general thrust of convergence arguments puts us in a similar position with respect to ethical facts.
If we think about how words obtain their meaning, it should be apparent that in order to defend this type of normative realism, one has to commit to a specific normative-ethical theory. If the claim is that normative reality sticks out at us like Mount Fuji on a clear summer day, we need to be able to describe enough of its primary features to be sure that what we’re seeing really is a mountain. If all we are seeing is some rocks (“self-evident principles”) floating in the clouds, it would be premature to assume that they must somehow be connected and form a full mountain.
So, we don't see the whole mountain, but nor are we seeing simply a few free-floating rocks that might be a mirage. Instead, what we see is maybe part of one slope and a peak.
Let's be concrete now - the five-second, high-level description of both Hare's and Parfit's convergence arguments goes like this:
If we are going to will the maxim of our action to be a universal law, it must be, to use the jargon, universalizable. I have, that is, to will it not only for the present situation, in which I occupy the role that I do, but also for all situations resembling this in their universal properties, including those in which I occupy all the other possible roles. But I cannot will this unless I am willing to undergo what I should suffer in all those roles, and of course also get the good things that I should enjoy in others of the roles. The upshot is that I shall be able to will only such maxims as do the best, all in all, impartially, for all those affected by my action. And this, again, is utilitarianism.
and
An act is wrong just when such acts are disallowed by some principle that is optimific, uniquely universally willable, and not reasonably rejectable
In other words, the principles that (whatever our particular wants) would produce the best outcome in terms of satisfying our goals, could be willed to be a universal law by all of us, and would not be rejected as the basis for a contract are all the same principles. That is a suspicious level of agreement between ethical theories. This is something substantive that can be said: every major historical attempt to get at a universal ethics - what produces the best outcome, what you can will to be a universal law, what we would all agree on - seems to produce really similar answers.
The particular convergence arguments given by Parfit and Hare are a lot more complex, and I can't speak to their overall validity. If we thought they were valid, then we'd be seeing the entire mountain precisely. Since they just seem quite persuasive, we're seeing the vague outline of something through the fog - but that's not the same as just spotting a few free-floating rocks.
Now, run through these same convergence arguments but for decision theory and utility theory, and you have a far stronger conclusion. There might be a bit of haze at the top of that mountain, but we can clearly see which way the slope is headed.
This is why I think that ethical realism should be seen as plausible and realism about some normative facts, like epistemic facts, should be seen as more plausible still. There is some regularity here in need of explanation, and it seems somewhat more natural on the realist framework.
I agree that this 'theory' is woefully incomplete, and has very little to say about what the moral facts actually consist of beyond 'the thing that makes there be a convergence', but that's often the case when we're dealing with difficult conceptual terrain.
From Ben's post:
I wouldn’t necessarily describe myself as a realist. I get that realism is a weird position. It’s both metaphysically and epistemologically suspicious. What is this mysterious property of “should-ness” that certain actions are meant to possess -- and why would our intuitions about which actions possess it be reliable? But I am also very sympathetic to realism and, in practice, tend to reason about normative questions as though I was a full-throated realist.
From the perspective of x, x is not self-defeating
From the antirealism post, referring to the normative web argument:
It’s correct that anti-realism means that none of our beliefs are justified in the realist sense of justification. The same goes for our belief in normative anti-realism itself. According to the realist sense of justification, anti-realism is indeed self-defeating.
However, the entire discussion is about whether the realist way of justification makes any sense in the first place—it would beg the question to postulate that it does.
Sooner or later every theory ends up question-begging.
From the perspective of Theism, God is an excellent explanation for the universe's existence since he is a person with the freedom to choose to create a contingent entity at any time, while existing necessarily himself. From the perspective of almost anyone likely to read this post, that is obvious nonsense since 'persons' and 'free will' are not primitive pieces of our ontology, and a 'necessarily existent person' makes as much sense as 'necessarily existent cabbage' - so you can't call it a compelling argument for the atheist to become a theist.
By the same logic, it is true that saying 'anti-realism is unjustified on the realist sense of justification' is question-begging by the realist. The anti-realist has nothing much to say to it except 'so what'. But you can convert that into a Quinean, non-question begging plausibility argument by saying something like:
We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist 'justification' for our beliefs, in purely descriptive terms, the other in which there are mind-independent facts about which of our beliefs are justified, and the latter is a more plausible, parsimonious account of the structure of our beliefs.
This won't compel the anti-realist, but I think it would compel someone weighing up the two alternative theories of how justification works. If you are uncertain about whether there are mind-independent facts about our beliefs being justified, the argument that anti-realism is self-defeating pulls you in the direction of realism.
Lukas_Gloor @ 2020-07-25T14:47 (+3)
[...] one thing that it convinced me of is that there is a close connection between your particular normative ethical theory and moral realism. If you claim to be a moral realist but don't make ethical claims beyond 'self-evident' ones like pain is bad, given the background implausibility of making such a claim about mind-independent facts, you don't have enough 'material to work with' for your theory to plausibly refer to anything.
Cool, I'm happy that this argument appeals to a moral realist!
I agree that it then shifts the arena to convergence arguments. I will discuss them in posts 6 and 7.
In short, I don't think of myself as a moral realist because I see strong reasons against convergence about moral axiology and population ethics.
This won't compel the anti-realist, but I think it would compel someone weighing up the two alternative theories of how justification works. If you are uncertain about whether there are mind-independent facts about our beliefs being justified, the argument that anti-realism is self-defeating pulls you in the direction of realism.
I don't think this argument ("anti-realism is self-defeating") works well in this context. If anti-realism is just the claim "the rocks or free-floating mountain slopes that we're seeing don't connect to form a full mountain," I don't see what's self-defeating about that.
One can try to say that a mistaken anti-realist makes a more costly mistake than a mistaken realist. However, on close inspection, I argue that this intuition turns out to be wrong. It also depends a lot on the details. Consider the following cases:
(1) A person with weak object-level normative opinions. To such a person, the moral landscape they're seeing looks like either:
(1a) free-floating rocks or parts of mountain slope, with a lot of fog and clouds.
(1b) many (more or less) full mountains, all of which are similarly appealing. The view feels disorienting.
(2) A person with strong object-level normative opinions. To such a person, the moral landscape they're seeing looks like either:
(2a) a full mountain with nothing else of note even remotely in the vicinity.
(2b) many (more or less) full mountains, but one of which is definitely theirs. All the other mountains have something wrong/unwanted about them.
2a is confident moral realism. 2b is confident moral anti-realism. 1a is genuine uncertainty, which is compatible with moral realism in theory, but there's no particular reason to assume that the floating rocks would connect. 1b is having underdefined values.
Of course, how things appear to someone may not reflect how they really are. We can construct various types of mistakes that people in the above examples might be making.
This requires a longer discussion, but I feel strongly that someone whose view is closest to 2b has a lot to lose by trying to change their psychology into something that lets them see things as 1a or 1b instead. They do have something to gain if 1a or 1b are actually epistemically warranted, but they also have things to lose. And the losses and gains here are commensurate – I tried to explain this in endnote 2 of my fourth post. (But it's a hastily written endnote, and I would ideally have written a separate post about just this issue. I plan to touch on it again in a future post on how anti-realism changes things for EAs.)
Lastly, it's worth noting that sometimes people's metaethics interact with their normative ethics. A person might not adopt a mindset of thinking about or actually taking stances on normative questions because they're in the habit of deferring to others or waiting until morality is solved. But if morality is a bit like career choice, then there are things to lose from staying indefinitely uncertain about one's ideal career, or just going along with others.
To summarize: There's no infinitely strong wager for moral realism. There is an argument for valuing moral reflection (in the analogy: gaining more clarity on the picture that you're seeing, and making sure you're right about what you think you're seeing). However, the argument for valuing moral reflection is not overridingly strong. It is to be traded off against the strength of one's object-level normative opinions. And without object-level normative opinions, one's values might be underdetermined.
SammyDMartin @ 2020-07-28T21:12 (+5)
You've given me a lot to think about! I broadly agree with a lot of what you've said here.
I think that it is a more damaging mistake to think moral antirealism is true when realism is true than vice versa, but I agree with you that the difference is nowhere near infinite, and doesn't give you a strong wager.
However, I do think that normative anti-realism is self-defeating, assuming you start out with normative concepts (though not an assumption that those concepts apply to anything). I consider this argument to be step 1 in establishing moral realism, nowhere near the whole argument.
Epistemic anti-realism
Cool, I'm happy that this argument appeals to a moral realist! ....
...I don't think this argument ("anti-realism is self-defeating") works well in this context. If anti-realism is just the claim "the rocks or free-floating mountain slopes that we're seeing don't connect to form a full mountain," I don't see what's self-defeating about that...
To summarize: There's no infinitely strong wager for moral realism.
I agree that there is no infinitely strong wager for moral realism. As soon as moral realists start making empirical claims about the consequences of realism (that convergence is likely), you can't say that moral realism is true necessarily or that there is an infinitely strong prior in favour of it. An AI that knows that your idealised preferences don't cohere could always show up and prove you wrong, just as you say. If I were Bob in this dialogue, I'd happily concede that moral anti-realism is true.
If (supposing it were the case) there were not much consensus on anything to do with morality ("The rocks don't connect..."), someone who pointed that out and said 'from that I infer that moral realism is unlikely' wouldn't be saying anything self-defeating. Moral anti-realism is not self-defeating, either on its own terms or on the terms of a 'mixed view' like I describe here:
We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist 'justification' for our beliefs, in purely descriptive terms, the other in which there are mind-independent facts about which of our beliefs are justified...
However, I do think that there is an infinitely strong wager in favour of normative realism and that normative anti-realism is self-defeating on the terms of a 'mixed view' that starts out considering the two alternatives like that given above. This wager is because of the subset of normative facts that are epistemic facts.
The example that I used was about 'how beliefs are justified'. Maybe I wasn't clear, but I was referring to beliefs in general, not to beliefs about morality. Epistemic facts, e.g. that you should believe something if there is sufficient amount of evidence, are a kind of normative fact. You noted them on your list here.
So, the infinite wager argument goes like this -
1) On normative anti-realism there are no facts about which beliefs are justified. So there are no facts about whether normative anti-realism is justified. Therefore, normative anti-realism is self-defeating.
Except that doesn't work! Because on normative anti-realism, the whole idea of external facts about which beliefs are justified is mistaken, and instead we all just have fundamental principles (whether moral or epistemic) that we use but don't question, which means that holding a belief without (the realist's notion of) justification is consistent with anti-realism.
So the wager argument for normative realism actually goes like this -
2) We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist 'justification' for our beliefs, in purely descriptive terms of what we will probably end up believing given basic facts about how our minds work in some idealised situation. The other is where there are mind-independent facts about which of our beliefs are justified. The latter is more plausible because of 1).
Evidence for epistemic facts?
I find it interesting that the imagined scenario you give in #5 essentially skips over argument 2) as something that is impossible to judge:
AI: Only in a sense I don’t endorse as such! We’ve gone full circle. I take it that you believe that just like there might be irreducibly normative facts about how to do good, the same goes for irreducible normative facts about how to reason?
Bob: Indeed, that has always been my view.
AI: Of course, that concept is just as incomprehensible to me.
The AI doesn't give evidence against there being irreducible normative facts about how to reason, it just states it finds the concept incoherent, unlike the (hypothetical) evidence that the AI piles on against moral realism (for example, that people's moral preferences don't cohere).
Either you think some basic epistemic facts have to exist for reasoning to get off the ground and therefore that epistemic anti-realism is self-defeating, or you are an epistemic anti-realist and don't care about the realist's sense of 'self-defeating'. The AI is in the latter camp, but not because of evidence, the way that it's a moral anti-realist (...However, you haven’t established that all normative statements work the same way—that was just an intuition...), but just because it's constructed in such a way that it lacks the concept of an epistemic reason.
So, if this AI is constructed such that irreducibly normative facts about how to reason aren't comprehensible to it, it only has access to argument 1), which doesn't work. It can't imagine 2).
However, I think that we humans are in a situation where 2) is open to consideration, where we have the concept of a reason for believing something, but aren't sure if it applies - and if we are in that situation, I think we are dragged towards thinking that it must apply, because otherwise our beliefs wouldn't be justified.
However, this doesn't establish moral realism - as you said earlier, moral anti-realism is not self-defeating.
If anti-realism is just the claim "the rocks or free-floating mountain slopes that we're seeing don't connect to form a full mountain," I don't see what's self-defeating about that
Combining convergence arguments and the infinite wager
If you want to argue for moral realism, then you need evidence for moral realism, which comes in the form of convergence arguments. But the above argument is still relevant, because the convergence and 'infinite wager' arguments support each other.
The reason 2) would be bolstered by the success of convergence arguments (in epistemology, or ethics, or any other normative domain) is that convergence arguments increase our confidence that normativity is a coherent concept - which is what 2) needs to work. It certainly seems coherent to me, but this cannot be taken as self-evident since various people have claimed that they or others don't have the concept.
I also think that 2) is some evidence in favour of moral realism, because it undermines some of the strongest antirealist arguments.
By contrast, for versions of normativity that depend on claims about a normative domain’s structure, the partners-in-crime arguments don’t even apply. After all, just because philosophers might—hypothetically, under idealized circumstances—agree on the answers to all (e.g.) decision-theoretic questions doesn’t mean that they would automatically also find agreement on moral questions.[29] On this interpretation of realism, all domains have to be evaluated separately
I don't think this is right. What I'm giving here is such a 'partners-in-crime' argument with a structure, with epistemic facts at the base. Realism about normativity certainly should lower the burden of proof on moral realism to prove total convergence now, because we already have reason to believe normative facts exist. For most anti-realists, the very strongest argument is the 'queerness argument' that normative facts are incoherent or too strange to be allowed into our ontology. The 'partners-in-crime'/'infinite wager' undermines this strong argument against moral realism. So some sort of very strong hint of a convergence structure might be good enough - depending on the details.
I agree that it then shifts the arena to convergence arguments. I will discuss them in posts 6 and 7.
So, with all that out of the way, when we start discussing the convergence arguments, the burden of proof on them is not colossal. If we already have reason to suspect that there are normative facts out there, perhaps some of them are moral facts. But if we found a random morass of different considerations under the name 'morality' then we'd be stuck concluding that there might be some normative facts, but maybe they are only epistemic facts, with nothing else in the domain of normativity.
I don't think this is the case, but I will have to wait until your posts on that topic - I look forward to them!
All I'll say is that I don't consider strongly conflicting intuitions in e.g. population ethics to be persuasive reasons for thinking that convergence will not occur. As long as the direction of travel is consistent, and we can mention many positive examples of convergence, the preponderance of evidence is that there are elements of our morality that reach high-level agreement. (I say elements because realism is not all-or-nothing - there could be an objective 'core' to ethics, maybe axiology, and much ethics could be built on top of such a realist core - that even seems like the most natural reading of the evidence, if the evidence is that there is convergence only on a limited subset of questions.) If Kant could have been a utilitarian and never realised it, then those who are appalled by the repugnant conclusion could certainly converge to accept it after enough ideal reflection!
Belief in God, or in many gods, prevented the free development of moral reasoning. Disbelief in God, openly admitted by a majority, is a recent event, not yet completed. Because this event is so recent, Non-Religious Ethics is at a very early stage. We cannot yet predict whether, as in Mathematics, we will all reach agreement. Since we cannot know how Ethics will develop, it is not irrational to have high hopes.
Lukas_Gloor @ 2020-08-06T09:36 (+3)
This discussion continues to feel like the most productive discussion I've had with a moral realist! :)
However, I do think that normative anti-realism is self-defeating, assuming you start out with normative concepts (though not an assumption that those concepts apply to anything). I consider this argument to be step 1 in establishing moral realism, nowhere near the whole argument.
[...]
So the wager argument for normative realism actually goes like this -
2) We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist 'justification' for our beliefs, in purely descriptive terms of what we will probably end up believing given basic facts about how our minds work in some idealised situation. The other is where there are mind-independent facts about which of our beliefs are justified. The latter is more plausible because of 1).
[...]
Either you think some basic epistemic facts have to exist for reasoning to get off the ground and therefore that epistemic anti-realism is self-defeating, or you are an epistemic anti-realist and don't care about the realist's sense of 'self-defeating'. The AI is in the latter camp, but not because of evidence, the way that it's a moral anti-realist (...However, you haven’t established that all normative statements work the same way—that was just an intuition...), but just because it's constructed in such a way that it lacks the concept of an epistemic reason.
So, if this AI is constructed such that irreducibly normative facts about how to reason aren't comprehensible to it, it only has access to argument 1), which doesn't work. It can't imagine 2).
I think I agree with all of this, but I'm not sure, because we seem to draw different conclusions. In any case, I'm now convinced I should have written the AI's dialogue a bit differently. You're right that the AI shouldn't just state that it has no concept of irreducible normative facts. It should provide an argument as well!
What would you reply if the AI uses the same structure of arguments against other types of normative realism as it uses against moral realism? This would amount to the following trilemma for proponents of irreducible normativity (using section headings from my text):
(1) Is irreducible normativity about super-reasons?
(2) Is (our knowledge of) irreducible normativity confined to self-evident principles?
(3) Is there a speaker-independent normative reality?
I think you're inclined to agree with me that (1) and (2) are unworkable or not worthy of the term "normative realism." Also, it seems like there's a weak sense in which you agree with the points I made in (3), as it relates to the domain of morality.
But maybe you only agree with my points in (3) in a weak sense, whereas I consider the arguments in that section to have stronger implications. The way I thought about this, I think the points in (3) apply to all domains of normativity, and they show that unless we come up with some other way to make normative concepts work that I haven't yet thought of, we are forced to accept that normative concepts, in order to be action-guiding and meaningful, have to be linked to claims about convergence in human expert reasoners. Doesn't this pin down the concept of irreducible normativity in a way that blocks any infinite wagers? It doesn't feel like proper non-naturalism anymore once you postulate this link as a conceptual necessity. "Normativity" became a much more mundane concept after we accepted this link.
However, I think that we humans are in a situation where 2) is open to consideration, where we have the concept of a reason for believing something, but aren't sure if it applies - and if we are in that situation, I think we are dragged towards thinking that it must apply, because otherwise our beliefs wouldn't be justified.
The trilemma applies here as well. Saying that it must apply still leaves you with the task of making up your mind on how normative concepts even work. I don't see alternatives to my suggestions (1), (2) and (3).
What I'm giving here is such a 'partners-in-crime' argument with a structure, with epistemic facts at the base. Realism about normativity certainly should lower the burden of proof on moral realism to prove total convergence now, because we already have reason to believe normative facts exist. For most anti-realists, the very strongest argument is the 'queerness argument' that normative facts are incoherent or too strange to be allowed into our ontology. The 'partners-in-crime'/'infinite wager' undermines this strong argument against moral realism. So some sort of very strong hint of a convergence structure might be good enough - depending on the details.
Since I don't think we have established anything interesting about normative facts, the only claim I see in the vicinity of what you say in this paragraph would go as follows:
"Since we probably agree that there is a lot of convergence among expert reasoners on epistemic facts, we shouldn't be too surprised if morality works similarly."
And I kind of agree with that, but I don't know how much convergence I would expect in epistemology. (I think it's plausible that it would be higher than for morality, and I do agree that this is an argument to at least look really closely for ways of bringing about convergence on moral questions.)
All I'll say is that I don't consider strongly conflicting intuitions in e.g. population ethics to be persuasive reasons for thinking that convergence will not occur. As long as the direction of travel is consistent, and we can mention many positive examples of convergence, the preponderance of evidence is that there are elements of our morality that reach high-level agreement.
I agree with this. My confidence that convergence won't work is based not only on observing disagreements in fundamental intuitions, but also on seeing why people disagree, and seeing that these disagreements are sometimes "legitimate" because ethical discussions always get stuck in the same places (differences in life goals, which are intertwined with axiology). If people actually thought about what sorts of assumptions are required for the discussions not to get stuck (something like: "all humans would adopt the same broad types of life goals under idealized conditions"), many would probably recognize that those assumptions are extremely strong and counterintuitive. Oddly enough, people often don't seem to think that far because they self-identify as moral realists for reasons that don't make any sense. They expect convergence on moral questions because they somehow ended up self-identifying as moral realists, instead of self-identifying as moral realists because they expect convergence.
(I'll maybe make another comment later today to briefly expand on my line of argument here.)
(I say elements because realism is not all-or-nothing - there could be an objective 'core' to ethics, maybe axiology, and much ethics could be built on top of such a realist core - that even seems like the most natural reading of the evidence, if the evidence is that there is convergence only on a limited subset of questions.)
I also agree with that, except that I think axiology is the one place where I'm most confident that there's no convergence. :)
Maybe my anti-realism is best described as "some moral facts exist (in a weak sense as far as other realist proposals go), but morality is underdetermined."
(I thought "anti-realism" was the best description for my view, because as I discussed in this comment, the way in which I treat normative concepts takes away the specialness they have under non-naturalism. Even some non-naturalists claim that naturalism isn't interesting enough to be called "moral realism." And insofar as my position can be characterized as naturalism, it's still underdetermined in places where it matters a lot for our ethical practice.)
Belief in God, or in many gods, prevented the free development of moral reasoning. Disbelief in God, openly admitted by a majority, is a recent event, not yet completed. Because this event is so recent, Non-Religious Ethics is at a very early stage. We cannot yet predict whether, as in Mathematics, we will all reach agreement. Since we cannot know how Ethics will develop, it is not irrational to have high hopes.
When I read some similar passage at the end of Parfit's Reasons and Persons (which may have even included a quote of this passage?), I shared Parfit's view. But I've done a lot of thinking since then. At some point one also has to drastically increase one's confidence that further game-changing considerations won't show up, especially if one's map of the option space feels very complete in a self-contained way, and intellectually satisfying.
SammyDMartin @ 2020-08-27T17:03 (+3)
This discussion continues to feel like the most productive discussion I've had with a moral realist! :)
Glad to be of help! I feel like I'm learning a lot.
What would you reply if the AI uses the same structure of arguments against other types of normative realism as it uses against moral realism? This would amount to the following trilemma for proponents of irreducible normativity (using section headings from my text)
...
(3) Is there a speaker-independent normative reality?
Focussing on epistemic facts, the AI could not make that argument. I assumed that you had the AI lack the concept of epistemic reasons because you agreed with me that there is no possible argument out of using this concept if you start out with it - not because you just felt that it would have been too much of a detour to have the AI explain why it finds the concept incoherent.
I think I agree with all of this, but I'm not sure, because we seem to draw different conclusions. In any case, I'm now convinced I should have written the AI's dialogue a bit differently. You're right that the AI shouldn't just state that it has no concept of irreducible normative facts. It should provide an argument as well!
How would this analogous argument go? I'll take the AI's key point and reword it to speak about epistemic facts instead of moral facts:
AI: To motivate the use of irreducibly normative concepts, philosophers often point to instances of universal agreement on epistemic propositions. Sammy Martin uses the example “we always have a reason to believe that 2+2=4.” Your intuition suggests that all epistemic propositions work the same way. Therefore, you might conclude that even for propositions philosophers disagree over, there exists a solution that’s “just as right” as “we always have a reason to believe that 2+2=4” is right. However, you haven’t established that all epistemic statements work the same way—that was just an intuition. “we always have a reason to believe that 2+2=4” describes something that people are automatically disposed to believe. It expresses something that normally-disposed people come to endorse by their own lights. That makes it a true fact of some kind, but it’s not necessarily an “objective” or “speaker-independent” fact. If you want to show beyond doubt that there are epistemic facts that don’t depend on the attitudes held by the speakers—i.e., epistemic facts beyond what people themselves will judge to be what you should believe —you’d need to deliver a stronger example. But then you run into the following dilemma: If you pick a self-evident epistemic proposition, you face the critique that the “epistemic facts” that you claim exist are merely examples of a subjectivist epistemology. By contrast, if you pick an example proposition that philosophers can reasonably disagree over, you face the critique that you haven’t established what it could mean for one party to be right. If one person claims we have reason to believe that alien life exists, and another person denies this, how would we tell who’s right? What is the question that these two parties disagree on? Thus far, I have no coherent account of what it could mean for an epistemic theory to be right in the elusive, objectivist sense that Martin and other normative realists hold in mind.
Bob: I think I followed that. You mentioned the example of uncontroversial epistemic propositions, and you seemed somewhat dismissive about their relevance? I always thought those were pretty interesting. Couldn’t I hold the view that true epistemic statements are always self-evident? Maybe not because self-evidence is what makes them true, but because, as rational beings, we are predisposed to appreciate epistemic facts?
AI: Such an account would render epistemology very narrow. Incredibly few epistemic propositions appear self-evident to all humans. The same goes for whatever subset of “well-informed” or “philosophically sophisticated” humans you may want to construct.
It doesn't work, does it? The reason it doesn't work is that the scenario the AI is written into, where it has 'concluded' that 'incredibly few epistemic propositions appear self-evident to all humans', is unimaginable. What would it mean for this to be true? What would the world have to be like?
I think the points in (3) apply to all domains of normativity, and they show that unless we come up with some other way to make normative concepts work that I haven't yet thought of, we are forced to accept that normative concepts, in order to be action-guiding and meaningful, have to be linked to claims about convergence in human expert reasoners.
I do not believe it is logically impossible that expert reasoners could diverge on all epistemic facts, but I do think that it is in some fairly deep sense impossible. For there to be such a divergence, reality itself would have to be unknowable.
The 'speaker-independent normative reality' that epistemic facts refer to is just actual objective reality - of all the potential epistemic facts out there, the set that actually corresponds to reality is the one that 'sticks out' in exactly the way that a speaker-independent normative reality should.
This means that there is no possible world where anyone with the concept of epistemic facts becomes convinced, probabilistically, that there are no epistemic facts because they fail to see any epistemic convergence. There would never be such a lack of convergence.
So my initial point,
The AI is in the latter camp, but not because of evidence, the way that it's a moral anti-realist (...However, you haven’t established that all normative statements work the same way—that was just an intuition...), but just because it's constructed in such a way that it lacks the concept of an epistemic reason.
So, if this AI is constructed such that irreducibly normative facts about how to reason aren't comprehensible to it, it only has access to argument 1), which doesn't work. It can't imagine 2).
still stands - that the AI is a normative anti-realist because it doesn't have the concept of a normative reason, not because it has the concept and has decided that it probably doesn't apply (and there was no alternative way for you to write the AI reaching that conclusion).
The trilemma applies here as well. Saying that it must apply still leaves you with the task of making up your mind on how normative concepts even work. I don't see alternatives to my suggestions (1), (2) and (3).
So I take option (3), where the 'extremely strong convergence' on claims about epistemic facts about what we should believe implies with virtual certainty that there is a speaker-independent normative reality, because the reality-corresponding collection of epistemic claims does, in fact, stick out compared to all the other possible epistemic facts.
So maybe the 'normativity argument', as I called it, is really just another convergence argument - but a convergence argument of infinite or near-infinite strength, because the convergence among our beliefs about what is epistemically justified is so strong that it's effectively unimaginable that expert reasoners could fail to converge.
If you wish to deny that epistemic facts are needed to explain the convergence, I think that you end up in quite a strong form of pragmatism about truth, and give up on the notion of knowing anything about mind-independent objective reality, Kant-style, for reasons that I discuss here. That's quite a bullet to bite. You don't expect much convergence on epistemic facts, so maybe you are already a pragmatist about truth?
"Since we probably agree that there is a lot of convergence among expert reasoners on epistemic facts, we shouldn't be too surprised if morality works similarly."
And I kind of agree with that, but I don't know how much convergence I would expect in epistemology. (I think it's plausible that it would be higher than for morality, and I do agree that this is an argument to at least look really closely for ways of bringing about convergence on moral questions.)
Lastly,
My confidence that convergence won't work is based not only on observing disagreements in fundamental intuitions, but also on seeing why people disagree, and seeing that these disagreements are sometimes "legitimate" because ethical discussions always get stuck in the same places (differences in life goals, which are intertwined with axiology).
I'll have to wait for your more specific arguments on this topic! I did give some preliminary discussion here of why, for example, I think that you're dragged towards a total-utilitarian view whether you like it or not. It's also important to note that the convergence arguments aren't (principally) about people, but about possible normative theories - people might refuse to accept the implications of their own beliefs.
jacobpfau @ 2020-06-12T20:44 (+11)
I enjoyed reading this post! I like Wittgensteinian arguments, and applying them to ethics, so hurrah for this. There was also some lively discussion of it on the EA corner chat.
Another potentially misleading motivation for irreducible normativity may be linguistic. It seems to me plausible that anyone who uses the word agony in the standard sense is committing her/himself to agony being undesirable. This is not an argument for irreducible normativity, but it may give you a feeling that there is some intrinsic connection underlying the set of self-evident cases.
From an EA perspective, I thought it could be useful to get a sense of the effectiveness of this post (series). You could, for instance, identify a few philosophy graduate students who hold the position you're arguing against and compare their credence in the relevant position before and after reading. In my experience, people's cruxes for disagreement in ethics are all over the place, and you run the risk of missing the arguments which compel those who believe in e.g. irreducible normativity. I very much like Wittgensteinian arguments against motivations and coherence, but I'm not sure those who subscribe to irreducible normativity will find these arguments compelling. If this concern is borne out, you might find it useful to first poll people who disagree with you about the position of interest, and then write a post to address the cruxes you have identified.
Edit: At the moment the EA Forum spam filter is, for some reason, preventing me from replying to @antimonyanthony, so I will reply by edit instead: I think this is quite a subtle point, and as I understand it, there is some ongoing disagreement among philosophers about these issues. Let's make things clearer by replacing 'agony' with 'bad experience'. A bad experience for a paperclip maximizer is likely to involve difficulty producing paperclips. More generally, which experiences are considered bad is determined by the agent's nature. However, for humans there's sufficient overlap in our neural nature for there to be self-evident cases of badness, e.g. extreme pain. If someone does not call these self-evident cases bad, then she/he is not using the word bad in its standard sense. There are a lot of complications in this argument, cf. Kripke on C-fibers, but I believe the general argument I sketched holds.
antimonyanthony @ 2020-06-12T21:47 (+1)
It seems to me plausible that anyone who uses the word agony in the standard sense is committing her/himself to agony being undesirable. This is not an argument for irreducible normativity, but it may give you a feeling that there is some intrinsic connection underlying the set of self-evident cases.
Could you please clarify this? As someone who is mainly convinced of irreducible normativity by the self-evident badness of agony - in particular, considering the intuition that someone in agony has reason to end it even if they don't consciously "desire" that end - I don't think this can be dissolved as a linguistic confusion.
It's true that for all practical purposes humans seem not to desire their own pain/suffering. But in my discussions with some antirealists they have argued that if a paperclip maximizer, for example, doesn't want not to suffer (by hypothesis all it wants is to maximize paperclips), then such a being doesn't have a reason to avoid suffering. That to me seems patently unbelievable. Apologies if I've misunderstood your point!
MichaelA @ 2020-06-16T07:42 (+8)
Admittedly, this somewhat merges the evolutionary debunking argument with the argument from widespread moral disagreement. In future posts, I will argue that we don’t need to envision aliens. Even among humans, we can observe unbridgeable disagreements between reasoners whose thinking is—as far as we can tell—maximally nuanced and versatile.
It seems odd to me to suggest we have any examples of maximally nuanced and versatile reasoners. It seems like all humans are quite flawed thinkers.
I suspect this is a somewhat minor point, and that differences in moral views between humans who are quite smart and have reflected quite a bit are still sufficient to support certain important arguments. But if an argument was premised on the claim "we can observe unbridgeable disagreements between reasoners whose thinking is—as far as we can tell—maximally nuanced and versatile", I think I'd be quite skeptical of that argument, at least until it's shown that the argument holds given only a weaker version of that claim.
One example of why: I don't think we yet have a compelling demonstration that, given something like coherent extrapolated volition, humans wouldn't converge on the same set of values. So I think we need to rely on arguments, speculations, etc. for matters like that, rather than the answer already being very clear.
(Or maybe I misunderstood what you meant. And probably you'll go into more detail in those future posts.)
Lukas_Gloor @ 2020-06-16T13:22 (+4)
It seems odd to me to suggest we have any examples of maximally nuanced and versatile reasoners. It seems like all humans are quite flawed thinkers.
Sorry, bad phrasing on my part! I didn't mean to suggest that there are perfect human reasoners. :)
The context of my remark was this argument by Richard Yetter-Chappell. He thinks that as humans, we can use our inside view to disqualify hypothetical reasoners who don't even change their minds in the light of new evidence, or don't use induction. We can disqualify them from the class of agents who might be correctly predisposed to apprehend normative truths. We can do this because compared to those crappy alien ways of reasoning, ours feels undoubtedly "more nuanced and versatile."
And so I'm replying to Yetter-Chappell that as far as inside-view criteria for disqualifying people from the class of promising candidates for the correct psychology go, we probably can't find differences among humans that would rule out everyone except a select few reasoners who will all agree on the right morality. Insofar as we try to construct a non-gerrymandered reference class of "humans who reason in really great ways," that reference class will still contain unbridgeable disagreement.
One example of why: I don't think we yet have a compelling demonstration that, given something like coherent extrapolated volition, humans wouldn't converge on the same set of values. So I think we need to rely on arguments, speculations, etc. for matters like that, rather than the answer already being very clear.
I haven't yet made any arguments about this (because this is the topic of future posts in the sequence), but my argument will be that we don't necessarily need a compelling demonstration, because we know enough about why people disagree to tell that they aren't always answering the same question and/or paying attention to the same evaluation criteria.
MichaelA @ 2020-06-17T01:48 (+6)
Ok, that helps me see what you meant.
I still feel somewhat unsure what you mean by "unbridgeable disagreement", and how we'd know that the disagreements we observe are indeed unbridgeable rather than things that might go away given more idealisation or reflection or the like. (I'm also not saying I'm confident the disagreements we observe will go away with further idealisation etc.) But maybe future posts will address that.
And in relation to your last sentence, a quick thought is that perhaps, given more idealisation or reflection or the like, people would switch to answering the same questions, paying attention to the same evaluation criteria, etc. (But again, maybe future posts will address that.)
And yes, I didn't mean to imply you had made arguments directly about coherent extrapolated volition yet - I just highlighted that as one reason why the lack of maximally nuanced and versatile reasoners to date seems potentially important.
SammyDMartin @ 2020-07-29T10:25 (+6)
How to make anti-realism existentially satisfying
Instead of “utilitarianism as the One True Theory,” we consider it as “utilitarianism as a personal, morally-inspired life goal...”
While this concession is undoubtedly frustrating, proclaiming others to be objectively wrong rarely accomplished anything anyway. It’s not as though moral disagreements—or disagreements in people’s life choices—would go away if we adopted moral realism.
If your goal here is to convince those inclined towards moral realism to see anti-realism as existentially satisfying, I would recommend a different framing of it. I think that framing morality as a 'personal life goal' makes it seem as though it is much more a matter of choice or debate than it in fact is, and will probably ring alarm bells in the mind of a realist and make them think of moral relativism.
Speaking as someone inclined towards moral realism, the most inspiring presentations I've ever seen of anti-realism are those given by Peter Singer in The Expanding Circle and Eliezer Yudkowsky in his metaethics sequence. Probably not by coincidence - both of these people are inclined to be realists. Eliezer said as much, and Singer later became a realist after reading Parfit. Eliezer Yudkowsky on 'The Meaning of Right':
The apparent objectivity of morality has just been explained—and not explained away. For indeed, if someone slipped me a pill that made me want to kill people, nonetheless, it would not be right to kill people. Perhaps I would actually kill people, in that situation—but that is because something other than morality would be controlling my actions.
Morality is not just subjunctively objective, but subjectively objective. I experience it as something I cannot change. Even after I know that it's myself who computes this 1-place function, and not a rock somewhere—even after I know that I will not find any star or mountain that computes this function, that only upon me is it written—even so, I find that I wish to save lives, and that even if I could change this by an act of will, I would not choose to do so. I do not wish to reject joy, or beauty, or freedom. What else would I do instead? I do not wish to reject the Gift that natural selection accidentally barfed into me.
And Singer in The Expanding Circle:
“Whether particular people with the capacity to take an objective point of view actually do take this objective viewpoint into account when they act will depend on the strength of their desire to avoid inconsistency between the way they reason publicly and the way they act.”
These are both anti-realist claims. They define 'right' descriptively and procedurally as arising from what we would want to do under some ideal circumstances, and rigidify on the output of that idealization, not on what we actually want. To a realist, this is far more appealing than a mere "personal, morally-inspired life goal", and has the character of 'external moral constraint', even if it's not really ultimately external, but just the result of immovable or basic facts about how your mind will, in fact, work, including facts about how your mind finds inconsistencies in its own beliefs. This is a feature, not a bug:
According to utilitarianism, what people ought to spend their time on depends not only on what they care about but also on how they can use their abilities to do the most good. What people most want to do only factors into the equation in the form of motivational constraints, constraints about which self-concepts or ambitious career paths would be long-term sustainable. Williams argues that this utilitarian thought process alienates people from their actions since it makes it no longer the case that actions flow from the projects and attitudes with which these people most strongly identify...
The exact thing that Williams calls 'alienating' is the thing that Singer, Yudkowsky, Parfit and many other realists and anti-realists consider to be the most valuable thing about morality! But you can keep this 'alienation' if you reframe morality as being the result of the basic, deterministic operations of your moral reasoning, the same way you'd reframe epistemic or practical reasoning on the anti-realist view. Then it seems more 'external' and less relativistic.
One thing this framing makes clearer, which you don't deny but don't mention, is that anti-realism does not imply relativism.
In that case, normative discussions can remain fruitful. Unfortunately, this won’t work in all instances. There will be cases where no matter how outrageous we find someone’s choices, we cannot say that they are committing an error of reasoning.
What we can say, on anti-realism as characterised by Singer and Yudkowsky, is that they are making an error of morality. On anti-realism, we are not obligated (how could we be?) to accept relativism, permissiveness, or values incompatible with our own. Ultimately, you can just say: 'I am right and you are wrong'.
That's one of the major upsides of anti-realism to the realist - you still get to make universal, prescriptive claims and follow them through, and follow them through because they are morally right. If people disagree with you, then they are morally wrong, and you aren't obligated to listen to their arguments if they arise from fundamentally incompatible values. Put that way, anti-realism is much more appealing to someone with realist inclinations.
Lukas_Gloor @ 2020-08-06T10:07 (+4)
The exact thing that Williams calls 'alienating' is the thing that Singer, Yudkowsky, Parfit and many other realists and anti-realists consider to be the most valuable thing about morality! But you can keep this 'alienation' if you reframe morality as being the result of the basic, deterministic operations of your moral reasoning, the same way you'd reframe epistemic or practical reasoning on the anti-realist view. Then it seems more 'external' and less relativistic.
Nice point!
If your goal here is to convince those inclined towards moral realism to see anti-realism as existentially satisfying, I would recommend a different framing of it. I think that framing morality as a 'personal life goal' makes it seem as though it is much more a matter of choice or debate than it in fact is, and will probably ring alarm bells in the mind of a realist and make them think of moral relativism.
Yeah, I think that's a good suggestion. I had a point about "arguments can't be unseen" – which seems somewhat related to the alienation point.
I didn't quite want to imply that morality is just a life goal. There's a sense in which morality is "out there" – it's just more underdetermined than the realists think, and maybe more goes into whether or not one feels compelled to dedicate all of one's life to other-regarding concerns.
I emphasize this notion of "life goals" because it will play a central role later on in this sequence. I think it's central to all of normativity. Back when I was a moral realist, I used to say "ethics is about goals" and "everything is ethics." There's this position "normative monism" that says all of normativity is the same thing. I kind of feel this way, except that I think the target criteria can differ between people, and are often underdetermined. (As you point out in some comment, things also depend on which parts of one's psychology one identifies with.)
SammyDMartin @ 2020-08-27T17:11 (+8)
I kind of feel this way, except that I think the target criteria can differ between people, and are often underdetermined. (As you point out in some comment, things also depend on which parts of one's psychology one identifies with.)
I think that you were referring to this?
Normative realism implies identification with system 2
...
I find this very interesting because locating personal identity in system 1 feels conceptually impossible or deeply confusing. No matter how much rationalization goes on, it never seems intuitive to identify myself with system 1. How can you identify with the part of yourself that isn't doing the explicit thinking, including the decision about which part of yourself to identify with? It reminds me of Nagel's The Last Word.
My point here was that if you are a realist about normativity of any kind, you have to identify with system 2 as that is what makes the (potentially correct) judgements about what you ought to do.
But that's not to say that if you are antirealist, you have to identify with system 1. If you are an antirealist, then in some sense (the realist sense) you don't have to identify with anything, but how easy and natural it is to identify with system 2 depends on how much importance you place on coherence among your values, which in turn depends on how coherent and universalizable your values actually are - you can be an antirealist but accept that some fairly strong degree of convergence does occur in practice, for whatever reason. This:
target criteria can differ between people, and are often underdetermined
seems to imply that you don't think there will be much convergence in practice, and that we shouldn't feel a strong pressure to reach high-level agreement on moral questions, because such a project is never going to succeed.
I think this is part of the motivation for your 'case for suffering-focussed ethics' - even though any asymmetry between preventing suffering and producing happiness falls victim to the absurd conclusion and the paralysis argument, I'm assuming that this wouldn't bother you much.
I talk about why, regardless of whether realism is true, I think this is an unstable position in that post.
seanrson @ 2020-09-13T06:35 (+3)
AFAIK the paralysis argument is about the implications of non-consequentialism, not about downside-focused axiologies. In particular, it's about the implications of a pair of views. As Will says in the transcript you linked:
"but this is a paradigm nonconsequentialist view endorses an acts/omissions distinction such that it’s worse to cause harm than it is to allow harm to occur, and an asymmetry between benefits and harms where it’s more wrong to cause a certain amount of harm than it is right or good to cause a certain amount of benefit... And if you have those two claims, then you’ve got to conclude [along the lines of the paralysis argument]".
Also, I'm not sure how Lukas would reply but I think one way of defending his claim which you criticize, namely that "the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled", is by appealing to the existence of impossibility theorems in ethics. In that case we truly won't be able to avoid counterintuitive results (see e.g. Arrhenius 2000, Greaves 2017). This also shouldn't surprise us too much if we agree with the evolved nature of some of our moral intuitions.
MichaelA @ 2020-06-16T07:35 (+4)
Thanks for this post.
Could you explain what you mean by "open-ended normative uncertainty" and/or "open-ended notions of moral uncertainty", as distinct from the more general concepts of normative/moral uncertainty?
Footnote 26 leaves me with the impression that perhaps you mean something like "uncertainty about what our fundamental goals should be, rather than uncertainty that's just about what should follow from our fundamental goals". But I'm not sure I'd call the latter type of uncertainty normative/moral uncertainty at all - it seems more like logical or empirical uncertainty.
(Or feel free to let this be answered by your future post on "moral reflection from an anti-realist perspective".)
Lukas_Gloor @ 2020-06-16T13:07 (+4)
Good question!
By "open-ended moral uncertainty" I mean being uncertain about one's values without having in mind well-defined criteria (either implicit or explicit) for what constitutes a correct solution.
Footnote 26 leaves me with the impression that perhaps you mean something like "uncertainty about what our fundamental goals should be, rather than uncertainty that's just about what should follow from our fundamental goals". But I'm not sure I'd call the latter type of uncertainty normative/moral uncertainty at all - it seems more like logical or empirical uncertainty.
Yes, this captures it well. I'd say most of the usage of "moral uncertainty" in EA circles is at least in part open-ended, so this is in agreement with your intuition that maybe what I'm describing isn't "normative uncertainty" at all. I think many effective altruists use "moral uncertainty" in a way that either fails to refer to anything meaningful or implies under-determined moral values. (I think this can often be okay. Our views on lots of things are under-determined, and there isn't necessarily anything wrong with that. But sometimes it can be bad to think that something is well-determined when it's not.)
Now, I didn't necessarily mean to suggest that the only defensible way to think that morality has enough "structure" to deserve the label "moral realism" is to advance an object-level normative theory that specifies every single possible detail. If someone subscribes to hedonistic total utilitarianism but leaves it under-defined to what degree bees can feel pleasure, maybe that still qualifies as moral realism. But if someone is so morally uncertain that they don't know whether they favor preference utilitarianism or hedonistic utilitarianism, or whether they might favor some kind of prioritarianism after all, or even something entirely different such as Kantianism, moral particularism, etc., then I would ask them: "Why do you think the question you're asking yourself is well-defined? What are you uncertain about? Why do you expect there to be a speaker-independent solution to this question?"
To be clear, I'm not making an argument that one cannot be in a state of uncertainty between, for instance, preference utilitarianism versus hedonistic utilitarianism. I'm just saying that, as far as I can tell, the way to make this work satisfactorily would be based on anti-realist assumptions. The question we're asking, in this case, isn't "What's the true moral theory?" but "Which moral theory would I come to endorse if I thought about this question more?"
MichaelA @ 2020-06-17T01:58 (+4)
This is an interesting perspective. I have indeed noticed for a while that my moral uncertainty has the very weird feature that I'm not even sure what shape or type of solution I'm after, or what criteria I'd evaluate it against. And this seems to mesh well with your comments about this seeming to be ill-defined, and a matter where people don't even know what they're uncertain about.
Thus far, I've basically responded to that issue with the thought: "I'm extremely confused about lots of things, including things that I have reason to believe really do correspond to reality, like quantum mechanics or the 'beginning' or 'ending' of the universe. So even if I'm extremely confused about this, maybe there's still something real going on there that I'm uncertain about, rather than there just being nothing [in the sense of speaker-independent normativity] going on there." (I'm aware that anti-realism doesn't mean "there's no normativity at all going on here".)
But I definitely think that the case for believing in things like quantum mechanics despite not understanding them is much stronger than the case for believing in things like speaker-independent normativity despite not understanding it.
Also, just in case this wasn't clear, by those sentences of mine that you quoted, I meant that I'm not sure I'd call "uncertainty that's just about what should follow from our fundamental goals" normative/moral uncertainty, rather than logical or empirical uncertainty. I would call "uncertainty about what our fundamental goals should be" normative/moral uncertainty. (And then that's subject to your criticisms.)