What Is Moral Realism?

By Lukas_Gloor @ 2018-05-22T15:49 (+72)

Last updated: 20/1/2022.

This is the first post in my sequence on moral anti-realism.

Introduction

To start off this sequence, I want to give a short description of moral realism; I’ll be arguing against moral realism in later posts, and I want to clearly explain what it is I’m arguing against.

When I’m arguing against moral realism, I will deliberately set aside some moral realist views and focus on those forms of moral realism that I find most relevant – in the sense that these versions, if correct, would matter most to effective altruism and to people’s lives in general. I will call these versions moral realism worthy of the name. Thus, I don’t claim that all versions of moral realism discussed in the academic literature are mistaken.

The goal of this introductory post is threefold:

  1. to give a quick overview of metaethics[1] and different versions of moral realism
  2. to explain why I find many of these versions of moral realism only modestly relevant to ethical practice
  3. to outline what I take to be moral realism worthy of the name

Overview and summary

Two definitions of moral realism

Sidenote: Subjectivism and intersubjectivism

Objectivist moral realism

Moral realism worthy of the name: Two proposals

Two definitions of moral realism

Moral realism has been defined in different ways by different authors. I will start by discussing two different definitions, both of which are broad in that they allow for many different positions to count as ‘moral realism.’

1. The semantic definition

The first definition is from Geoffrey Sayre-McCord’s Essays on Moral Realism (1988, p. 5). It is meant to serve as a definition for realism, not just about morality, but about any domain of claims under scrutiny.

Wherever it is found, [...] realism involves embracing just two theses: (i) the claims in question, when literally construed, are literally true or false (cognitivism), and (ii) some are literally true. Nothing more.

Sayre-McCord’s definition illustrates a confusing feature of the debate between moral realists and moral anti-realists: the discussion can happen simultaneously on two levels. At the first level (I will call it the linguistic level), people disagree about the nature of first-order moral claims – about what competent speakers mean when they say things such as “Murder is wrong.” On the second, substantive level, people disagree about whether some moral claims (properly interpreted) are true, or whether all first-order moral claims are false. (Second-order moral claims, such as “all first-order moral claims are false,” can still be true even if all first-order moral claims are false.)

We can now see that different positions on the linguistic level lead to different types of moral realism. Consider two moral realists. One thinks that moral claims such as “X is wrong” just mean the same thing as, e.g., “X reduces net happiness,” and she thinks that some of these claims are true. Another thinks that moral claims such as “X is wrong” refer to an irreducible property of wrongness, and he thinks that some of these claims are true. While these are both forms of moral realism, they are quite different.

Merely semantic versions of moral realism

The semantic definition allows for the possibility that whether one endorses realism depends solely on one’s views about language rather than one’s views about morality. Specifically, some versions of moral realism are grounded in idiosyncratic or minimalist accounts of what it means for a claim to be true (see pragmatism[2] or minimalism about talk of truth[3]). I won’t address these views further, both because they are rarely explicitly defended and because any moral realism endorsed on merely semantic grounds is going to be inconsequential: whether we consider it true or not only has consequences for how we would speak (i.e., whether to call something moral realism or not), not for how we would act.

Non-cognitivism

Some moral anti-realists hold that moral claims are not best interpreted as claims that can conceivably be true or false (that is, are not truth-apt). This position is called non-cognitivism (as opposed to cognitivism); it is the most radical form of anti-realism.

One non-cognitivist view is expressivism, which holds that moral claims are best interpreted as expressions of an evaluative attitude (veiled expressions of, e.g., the speaker’s approval or disapproval). According to the expressivist view, when someone says “Murder is wrong,” the best interpretation of that statement is not something that can literally be true or false. Rather, the statement employs similar language to that used in truth-apt statements to express disapproval. “Murder is wrong,” that is to say, is just a non-cognitive expression of disapproval of murder; it can’t be true or false any more than it can be true or false to say “Ouch!” when you hit yourself with a hammer. “Murder is wrong” looks like a claim, but the appearance is deceptive.

Non-cognitivists have a great deal of explaining to do. They need to account for why moral discourse has the appearance of asserting truths, not just when we make emotionally loaded statements like “What you’re doing is just wrong!”, but also in the context of carefully reasoned philosophical discussions. We readily use logic to analyze moral claims, we talk about moral ‘beliefs’ and moral ‘knowledge,’ and we have devoted an entire branch of philosophy (normative and applied ethics) to figuring out which moral claims are true.

Linguistic-level disagreements are inconsequential

Furthermore, it seems strange to deny that at least some people’s moral statements are truth-apt. After all, some people are ardent moral realists who make moral claims themselves, and at least some of them will tell us explicitly that they give zero credence to the non-cognitivist interpretation of their moral claims. Whether their intended usage of moral claims is the same as typical usage feels like a secondary question to me. Indeed, I find it surprising that such a substantial portion of metaethics is centered around conceptual analysis on the linguistic level,[4] debating what people might mean when they say things like “X is wrong.” Given that moral realists exist and given that they believe that their interpretations of moral discourse are intelligible and important, it seems like we should be able to address their claims on their merits (or lack thereof), regardless of what else moral discourse may sometimes be about.[5]

This makes debates about what moral claims mean in everyday language less relevant to my current project. An analogy: Suppose that I’m discussing theology with some philosophers. The philosophers are trying to interpret the Bible, making claims like, “It's not true within the Biblical storyline that Noah was a woman but true that he was a man,” or, “Religious claims aren’t truth-apt because the authors of those texts were telling parables to express thoughts on the human condition; they didn’t intend to say things that can be true or false in a literal sense.” And all I'm thinking is, “Why are we so focused on interpreting religious claims? Isn’t the major question here whether there are things such as God, or life after death!?” The question that is of utmost relevance to our lives is whether religion’s metaphysical claims, interpreted in a straightforward and realist fashion, are true or not. An analysis of other claims can come later.

This is also how I look at the literature on moral realism: Some versions of moral realism would be rather inconsequential if true, while others would be vastly relevant to our lives. Whether the latter stem from a typical interpretation of moral discourse or not is not that important (although if no one took strong realism about morality seriously, that may indicate obvious flaws in the moral realism hypothesis). So given that some versions of moral realism would be vastly more relevant to people’s lives than others, I want to primarily focus on assessing whether strong forms of moral realism are true. Correspondingly, this means I won’t be satisfied with non-cognitivist accounts that brush aside the possibility that strong moral realism is intelligible and evaluable on its own merits, the meaning of ordinary moral discourse aside. (See Kahane, 2013, who argues that moral realism can be intelligible and defensible even if it doesn’t reflect ordinary moral discourse).

2. The ontological definition

For the purposes of the present essay, I’ll provide a definition of moral realism that works independently of the debate about the proper linguistic interpretation of moral claims. Stephen Finlay’s (2007) definition of what he calls ‘ontological moral realism’ is just such a definition.

[Ontological moral realism is the claim that] moral claims describe and are made true by some moral facts involving moral entities (e.g., reasons, obligations), relations (e.g., justification), or properties (e.g., goodness, rightness, virtue). [...] This form of realism takes as its objects the truth-makers of moral claims, holding that they include moral properties such as value (e.g., the goodness of charity) and moral entities such as practical reasons and obligations (e.g., reasons not to tell lies, obligations to keep promises).

Ontological moral realism is correct if moral claims are sometimes true in virtue of correctly referring to a moral reality consisting of the “truth-makers” for moral claims, the entities that make those claims true: moral entities, relations, properties, etc. These entities, further, must exist independently of anyone’s beliefs. What this moral reality would consist of is explicitly left open. Ontological realism is therefore compatible with views according to which moral facts, such as facts about value or disvalue, can be identified with natural facts (e.g. facts about pleasure or suffering), and with non-naturalist views where the moral ‘reality’ consists of something more abstract (e.g. facts about reasons for action postulated by reasons externalism).[6] (If the difference between naturalism and non-naturalism seems too abstract now, I’ll address it further later.)

Different degrees of ‘objective’

We can further distinguish three increasingly ‘objective’ conceptions of moral reality. All versions of ontological moral realism are objective in one sense: they are about the existence of moral facts, and facts do not change depending on whom we ask or how we look at something. But there is a second sense of ‘objective,’ which concerns whether those facts depend (in part) on one’s personal desires/goals/preferences, or whether they remain the same even if one’s own desires/goals/preferences are changed.

Subjectivism is the view that moral claims are made true or false with respect to facts about one’s own (i.e., subjective) desires/preferences/goals about the world.

Intersubjectivism is the view that moral claims are made true or false with respect to facts about both one’s own desires/goals/preferences and those of other people. For instance, according to some intersubjectivist views, morality is about rational actors pursuing their own ends while respecting an envisioned social contract.

Objectivism (not to be confused with Ayn Rand’s Objectivism, which describes a subjectivist morality based on self-interest) is the view that morality is the same for everyone and independent from one’s personal desires/preferences/goals. (Or that one’s own desires at most count as “one out of thousands” e.g. in preference utilitarianism.)

Arguably, it is only objectivism that captures the ways in which (at least some people’s) moral intuitions make morality out to be something all-encompassing that every person is bound to.

Sidenote: Subjectivism and intersubjectivism

Nevertheless, there are subjectivist and intersubjectivist views that would be relevant to our lives if they were true. In this section, I will give two examples of views, one subjectivist and one intersubjectivist, that seem to me like correct and practically relevant ways of thinking about issues in moral philosophy, even though they would plausibly count as “not realist” according to at least the semantic definition of moral realism.

Subjectivism

Subjectivism holds that moral value is determined by an agent’s personal desires. According to the subjectivist Michael Smith, for instance, what is good for an agent is what the agent would desire if they had perfect information and were perfectly rational. I’m including the following quote from Finlay (2007) because the position sounds like some of the views that have been discussed prominently on LessWrong:

[...] Smith bases each person’s normative requirements on his or her own desires, subject only to rational enhancement (full information and coherence). Moral claims can be true, he maintains, provided that all rational persons would converge on a common set of desires with a distinctly moral content (Moral Problem 173, 187–9). Richard Joyce, who largely accepts Smith’s subjectivist approach as an account of normativity, reasonably objects that this claim on behalf of morality is implausible. Rational selves’ desires are reached by correction from actual selves’ desires, and these starting points are too diverse to support the required kind of convergence (89–94).

As a claim about how moral discourse is to be interpreted, subjectivism holds that “X is good” should be treated as shorthand for “X is good according to my desires” (Sayre-McCord, 1988, p. 18). This seems like a somewhat implausible interpretation of moral discourse to me – but people’s intuitions about what morality is about may differ. Under the assumption that moral discourse is usually objectivist, while objectivist moral realism is false, perhaps one could resort to subjectivist moral discourse as a way to salvage something useful from the debris. In that case, subjectivism would count as (a constructive proposal within) anti-realism rather than as a version of realism.

In any case, I have a lot of thoughts on the merits of subjectivist accounts that specifically refer to what we would come to value after moral reflection under ideal conditions, but I will reserve them for later parts of this sequence, where I explore options within what I think of as moral anti-realism. For the purposes of my upcoming posts on why I’m not a moral realist, then, I’ll treat subjectivism as one version of anti-realism – not because this is the obvious way of categorizing it, but because I want to reserve the moral realism label for only the most consequential versions of moral realism.

Intersubjectivism

Another position located somewhere near the boundary between realist and anti-realist views is constructivism as a metaethical view,[7] which holds that morality is about what rational actors would hypothetically agree to (under certain idealized conditions) with respect to how everyone, themselves included, should act. Different versions of constructivism give different accounts of how to think about this hypothetical agreement: Some are based on considerations about social contracts, others on Kantian universalization of one’s decision maxims (“acting as though one expects all other rational people to choose the same decision procedure”). A specification of the conditions under which hypothetical agreement is to be derived is called a constructive function (as explained by Shafer-Landau, 2003, pos. 201).

Metaethical constructivism is an intersubjectivist position. It is not objectivist because constructive functions merely constrain what follows from people’s desires, preferences, or goals – they do not introduce anything we ought to do (or anything that is morally good or bad) that goes beyond those desires.

While I am unconvinced by constructivism as a metaethical position (because that would commit us to the claim that moral discourse is necessarily all about hypothetical contracts rather than also e.g. unconditional altruism or care), I am sympathetic to constructivism being important on pragmatic or prudential grounds. Is there a single, uniquely compelling way to choose a constructive function? I think that the answer is not obviously no. I find it noteworthy that central aspects of (mostly Kantian)[8] constructivism are mirrored in LessWrong-inspired discussions about the implications of non-causal decision theories. Perhaps these considerations could be thought of as plausible extensions of the concept “rationality as systematized winning,” such that, by getting their implications right, one could increase one’s all-things-considered degree of goal achievement. In any case, whether we want to call this moral realism or not, it is worth flagging constructivism as a moral view according to which morality is potentially action-relevant in a surprisingly non-trivial and yet rationally binding way.

One reason not to think of constructivism as moral realism is precisely that it seems to be more an extension of what it means to be rational than of what it means to be moral. At least according to some connotations of the word ‘moral,’ morality is tied not only to notions of fairness or rational cooperation, but also to considerations of care or altruism. And while Kantianism or non-causal decision theories may imply that one should care substantially about the desires of other rational agents (in a perhaps power-weighted fashion), they do not imply anything about the content of one’s own desires, including whether to care about the well-being of sentient beings that are not (or insufficiently) rational.[9]

Objectivist moral realism

If someone talks about moral realism without further elaboration, this person is probably[10] talking about what I here call objectivist moral realism: The view that there are speaker-independent moral facts (ontological moral realism) that hold for each person independently of their personal desires/preferences/goals (objectivism). I also think that objectivism makes for the most straightforward linguistic interpretation of moral claims (although as I argued above, this should in itself not be our main criterion for selecting which positions to pay most attention to).

Error theory: objectivism’s anti-realist counterpart

Error theory is the moral anti-realist counterpart to objectivist moral realism. (In theory it seems conceivable to me that one can be an error theorist believing that moral discourse is subjectivist or intersubjectivist in nature and is all false, but that would be unusual.) Unlike non-cognitivists, error theorists agree with realists that moral claims are best interpreted as saying things that can be true or false: as being truth-apt. Nevertheless, they deny the realist claim that first-order moral claims can be true. That is to say, error theorists hold that all first-order moral claims are, and must be, false.

What brand of objectivism: Are moral facts natural or non-natural facts?

There are broadly two types of objectivist realist positions: moral naturalism and moral non-naturalism. They differ with respect to what they take to be the nature of moral facts. Moral naturalists believe that moral terms such as ‘morally good’ just refer to some natural property or properties – for instance, pleasure or desire fulfillment – in the same way that it is true that a bachelor is an unmarried man. By contrast, moral non-naturalists think we will never be able to identify moral terms with natural properties; they believe that moral terms are basic and mean more than what can be expressed in non-moral language. Moral non-naturalists believe that, at best, we could only discover how non-naturalist moral facts map onto (or ‘supervene on’) natural facts.[11] For instance, we could discover that situations where someone needlessly harms others always involve moral wrongness, but wrongness, so interpreted, is not synonymous with needlessly harming others.[12]

How to distinguish between naturalist and non-naturalist positions can be a subject of extensive debate.[13] But perhaps the most salient difference between naturalism and non-naturalism is that the two positions tend to be susceptible to different types of challenges. In general, moral realism is backed by the ‘moral appearances’ (Finlay, 2007) – our realist intuitions about morality – and challenged by external pressures about how to reconcile realism about morality with what we know about the world and about the mind. Moral naturalism solves this challenge by making concessions (so one could argue) that weaken moral appearances, while moral non-naturalism stays maximally close to these moral appearances.

But staying so close to the moral appearances creates difficulties in reconciling non-naturalism with the rest of what we know about the world. These difficulties were famously summarized by John Mackie in his Argument from Queerness. Queerness is the charge that non-naturalist moral facts, should they exist, would be so different from all the other things we are used to in our conceptual repertoire that we had better think twice about incorporating them at all. The moral facts would be “queer” because, depending on the (usually non-naturalist) moral realist account in question, they may be causally redundant or impotent, be epistemically inaccessible, or have an (allegedly) mysterious connection to human motivation (see reasons externalism).

By contrast, moral naturalism (at least in most versions)[14] evades these accommodation charges because naturalists believe that moral facts are simply natural facts, and that we could express moral claims in non-moral terminology without necessarily altering the meaning. However, moral naturalists are faced with a different conundrum, summarized in G.E. Moore’s Open Question Argument: Simply arguing that moral and non-moral terms are synonymous (e.g. that "goodness" and "desire satisfaction" or "happiness" are synonymous) is dubious. This is because it seems perfectly coherent to ask: "Sure, this is an example of desire satisfaction, but is it good?" (Or: "Sure, this is an example of happiness, but is it good?" Etc.) For illustration, note that one cannot ask these questions coherently about pairs of terms that are truly synonymous. For example, one cannot say: "Sure, John is an unmarried man, but is he really a bachelor?" This question is not coherent because "bachelor" and "unmarried man" mean the same thing. Thus, moral naturalists have some explaining to do when they hold that moral and non-moral terms are synonymous.[15]

Moral naturalism

There are many different versions of moral naturalism; I will focus on just two. We will notice that moral naturalist accounts often share strong similarities with subjectivism, even though they are objectivist views. This makes sense because a main challenge for naturalists is to show that they have singled out the right natural facts in their analysis of what is morally good or bad, and one promising way of establishing that one has singled out the right natural facts is by appealing to natural facts that are already of concern to each person individually.

Example 1: "Action-guiding concepts"

The first example of an approach to moral naturalism is exemplified by Aristotelian virtue ethics as endorsed by Philippa Foot (1958) and Paul Bloomfield (as described in Finlay, 2007). Here we are less interested in this position itself, and more in the methodology or approach that motivates the position. Foot and Bloomfield both appeal to biology in determining which natural facts are moral facts; they note that certain things are conducive to the attainment of our biological ends – e.g. health, well-being, survival – and others aren’t; and conclude via conceptual analysis that ‘good’ (and other moral terms) refer to things conducive to the attainment of these ends.

For instance, in Moral Beliefs (1958), Philippa Foot argues against the idea that ‘good’ is a non-naturalist concept that solely expresses some kind of positive attitude. She points out that if we did use the term ‘good’ that way, nothing would prevent a “moral eccentric” from saying that a good man is someone who randomly claps his hands, simply because he (the eccentric) approves when people randomly clap their hands. Instead, drawing an analogy to the concept ‘injury,’ Foot advocates a specific, substantive understanding of the term ‘good’ informed by conceptual analysis:

[It] may seem that the only way to make a necessary connexion between 'injury' and the things that are to be avoided, is to say that it is only used in an "action-guiding sense" when applied to something the speaker intends to avoid. But we should look carefully at the crucial move in that argument, and query the suggestion that someone might happen not to want anything for which he would need the use of hands or eyes. Hands and eyes, like ears and legs, play a part in so many operations that a man could only be said not to need them if he had no wants at all. That such people exist, in asylums, is not to the present purpose at all; the proper use of his limbs is something a man has reason to want if he wants anything. [...]

It will be noticed that this account of the action-guiding force of 'injury' links it with reasons for acting rather than with actually doing something. Just as our concept of ‘injury’ has both speaker-independent and "action-guiding" features, Foot argues, so do the cardinal virtues prudence, temperance, and courage, and perhaps also justice. Her theory of moral discourse is that it is discourse about virtues: character traits and dispositions that are beneficial for one’s natural ends. Interestingly enough, Foot notes that according to her view, justice only constitutes a virtue if it benefits the just person. After all, she equates goodness with what is conducive to one’s natural ends, not with, e.g., any notion of altruism or of benefiting others. Given that it is unclear whether justice is even a virtue on Foot’s account (as she herself points out), anyone with the intuition that moral discourse is also about considerations of justice – or simply someone who personally values justice – might question whether Foot’s account really captures what is ‘good,’ and whether we should not rather seek justice for its own sake.

Example 2: Subjectivism, impartially extended

For contrast, we will now consider another naturalist position: that of Peter Railton as outlined in his paper Moral Realism. While Foot’s account based on conceptual analysis of what we mean by ‘good’ may fail to be convincing for people with different intuitions, Railton instead tries to establish the meaning of ‘good’ analytically: What is good for a person is what they would desire if they had full information about all the relevant facts and moral arguments. The advantage of this approach is obvious: Such a reduction of the term ‘good’ is personally relevant to us by definition. Railton makes two claims:

  1. Desire fulfillment (assuming we are fully informed when choosing what to want) is what is non-morally good for a person.
  2. Morality is about what would be non-morally good for everyone (from an “impartial perspective”).

With regard to (1), Railton’s position resembles that of the subjectivist Michael Smith: What is good for us is the fulfillment of the desires we’d have if we were fully informed about our situation. What makes Railton’s position different from subjectivism is only that he further holds (2) that there is such a thing as objectivist morality, concerning what is non-morally good for everyone. He writes:

[M]oral resolutions are thought to be determined by criteria of choice that are non-indexical and in some sense comprehensive. This has led a number of philosophers to seek to capture the special character of moral evaluation by identifying a moral point of view that is impartial, but equally concerned with all those potentially affected.

Fixing the content of morality as something “impartial” is what allows Railton to have objectivist moral facts about what is good (for everyone) even though his conception of what is valuable for any given person only depends on their personal desires. Interestingly enough, Railton’s moral realism does not come with any rationally binding recommendations for how to act. His theory has an axiological component (axiology being the study of what is valuable), postulating objective value. But his theory has no deontic component (he does not believe in objective moral obligations).

Railton uses the slogan “rationality does go relative when it goes instrumental, but epistemology need not follow.” In other words: While there is no direct reason to act morally for any one individual because rationality – procedurally interpreted[16] – only concerns itself with drawing proper inferences from one’s pre-existing desires, there are nevertheless facts about what would make society good or bad from a perspective of maximizing desire fulfillment for all individuals. Whether to act morally is optional and up to one’s own desires, but reasoning about morality, on a purely epistemic level, can be done with an objective foundation.

Moral naturalism is often vague. Is there a single correct notion of what is good for someone?

The above may feel like a disappointing conclusion from a paper titled Moral Realism. Railton himself notes that people might object that his view “may not make morality serious enough.” Having said that, personally I am happy to call positions that postulate only an axiology (and no moral obligations) moral realism – provided that there really is one uniquely correct, compelling axiology, as opposed to many different ways of determining what is “good for everyone” depending on different specifications of what constitutes (moral or non-moral) goodness.

The problem I see with Railton’s position specifically is that it is under-defined in several respects. For instance, it could turn out to be very difficult to formalize a uniquely compelling notion of “desires” or “desires given full information” that captures all our intuitions about when desire satisfaction is or is not valuable. Furthermore, Railton’s account cannot easily be extended to take a stance on population ethics, and it does not specify a precise notion of what it means to take an “impartial perspective.” Railton considers this to be an advantage:

By itself, the equation of moral rightness with rationality from a social point of view is not terribly restrictive, for, depending upon what one takes rationality to be, this equation could be made by a utilitarian, a Kantian, or even a non-cognitivist. That is as it should be, for if it is to capture what is distinctive about moral norms, it should be compatible with the broadest possible range of recognized moral theories.

However, not committing to any specific perspective calls into question whether there even is, in theory, a correct answer. If there are many different and roughly equally plausible interpretations of “impartial perspective” or “desire fulfillment” (or more generally: of well-being defined as “that which is good for a person”), then the question, “Which of these different accounts is correct?” may not have an answer.

What Railton shows is that there is at least one (vague) view about what to call “good for everyone” that is plausible or defensible. And we know there are some views about this that are obviously false. This is already a lot to show, as it counters a position of extreme moral skepticism saying that there is no sense at all in which we can reason objectively about morality.

Nevertheless, Railton’s position is not quite what I would be inclined to call moral realism. It leaves too much open for interpretation because we can focus on widely different criteria when trying to systematize what doing good for others comes down to. What I am interested in is whether there is more to it: Can we show that there is a view that is not only defensible, but uniquely correct? In the last section, I will describe what conditions a Railton-like view would have to meet for me to count it as moral realism.

Moral non-naturalism

Whatever we think of non-naturalist moral realism, it is certainly ambitious. Finlay calls it the “normative face of moral realism” because it is committed to the existence of irreducibly normative moral facts. What is attractive about non-naturalism is that the other versions of moral realism appear to be somewhat watered down by contrast. Non-naturalism is arguably best able to capture the urgency attached to the sentiment that some things really are right or wrong.

Support for moral non-naturalism has been growing lately. Finlay (2007) writes:

Although long considered an absurd Platonism, [non-naturalism] today enjoys a renaissance and boasts many and distinguished champions. [...] Besides Scanlon and Shafer-Landau, contemporary philosophers who defend non-naturalism (although not all under that label) include Thomas Nagel, Derek Parfit, Jonathan Dancy, Joseph Raz, Jean Hampton, Philip Stratton-Lake, Colin McGinn, Terence Cuneo, David Enoch, Michael Huemer, and William Fitzpatrick.

The non-naturalist position about moral facts is often inspired by Moore’s Open Question Argument. Shafer-Landau, for instance, believes that “Moore was correct in thinking that we could always intelligibly question the propriety of any candidate naturalistic reduction [of moral terms]” (Shafer-Landau, 2003, pos. 738). This leaves two options: Either we accept that there is no speaker-independent normativity, or we regard it as a separate realm not reducible to physical facts.[17] Shafer-Landau and other non-naturalist philosophers have opted to go for the latter (although what this means exactly can differ from account to account, and sometimes the difference between non-naturalism and naturalism is subtle).

In addition to the strong intuition that moral naturalism is inadequate to deal with the moral appearances, both Parfit and Shafer-Landau also defend moral realism by drawing analogies between morality and other domains about which (allegedly convincing) realist interpretations have been put forward. For instance, Shafer-Landau points out “partners in crime” within the philosophy of logic/mathematics, and philosophy itself (Shafer-Landau, 2003, pos. 646):

[...] my kind of realism must seek out partners in crime. I would point to correct logical standards or physical laws (assuming a realist construal of such things), and claim that there isn't anything that makes such things true – they simply are true.

And also from the philosophy of mind (Shafer-Landau, 2003, pos. 949):

The sort of non-naturalism that I find appealing is one that bears a very close structural parallel to certain non-reductionist theories in the philosophy of mind. According to these latter views, mental properties are not identical to physical ones; mental facts are not physical facts; but mental properties are realized by instantiations of physical properties. At least in worlds relevantly close to ours, there would be no mental life without the physical stuff that constitutes it.

We can however ask whether these partners in crime really function analogously, and whether realist accounts of them are even correct. As Hallvard Lillehammer notes in his review of Shafer-Landau’s Moral Realism: A Defense, “Perhaps the most interesting thing about these alleged companions in guilt is that none of them are obviously innocent.”

Another question is how we could come to know anything about the normative realm, since it is separate from everything else (cf. the Benacerraf-Field problem in the philosophy of mathematics).

Finally, a third challenge is that, even if we grant for the moment the existence of non-naturalist moral facts, moral skeptics can question whether these facts are really action-relevant for them. In response, Shafer-Landau advocates moral rationalism, the view that “moral obligations are or entail reasons for action.” On this account, moral beliefs are on their own capable of motivating someone, but they may not always be decisive for motivation.

Moral realism worthy of the name: Two proposals

I am most interested in accounts of moral realism which, should they prove to be correct, will be highly relevant to people’s lives and life projects: either directly because they are inherently compelling (they provide ‘real’ reasons to act), or because they are compelling at least for those people interested in pursuing goals motivated by altruism ("doing good for others"). With this in mind, I will now describe two different ways in which I could be convinced of moral realism.

One Compelling Axiology

Drawing from the above discussion, I would call myself a moral realist if I could be convinced that there is One Compelling Axiology in the form of a more developed and ambitious version of Railton’s naturalist position.[18] Such a view, as I envisage it, would combine a specific, complete theory about what is objectively in someone’s own interest, or is good or bad for them, with a specific, complete theory of what it means to do good for others from a kind of “impartial perspective.” (Which beings qualify as morally relevant “others” is also something the One Compelling Axiology would have to tell us, as is whether to count only people who exist currently or will exist regardless of our actions, or whether to also intrinsically count the creation of new beings.)

As a loose criterion for what makes this form of realism true (and one that is untestable, at least with current-day technology), I stipulate that I would count something as the One Compelling Axiology if all philosophers or philosophically inclined reasoners, after having engaged in philosophical reflection under ideal conditions,[19] would deem the search for the One Compelling Axiology a sufficiently precise, non-ambiguous undertaking to have made up their minds rather than “rejected the question,” and if these people would all come to largely the same conclusions. If the result were near-unanimous agreement on a highly specific view, I would count this as strong moral realism being true.

Note that this proposal makes no claims about the linguistic level: I’m not saying that ordinary moral discourse lets us define morality as convergence in people’s moral views after philosophical reflection under ideal conditions. (This would be a circular definition.) Instead, I am focusing on the aspect that such convergence would be practically relevant: If maximally well-equipped and well-informed people were all to come to the same conclusion about what it means to “do what is good for others,” no matter the idiosyncrasies they started out with, then – assuming I find the prospect of doing what is good for others appealing – I have little reason to assume that my current thoughts on the matter of morality are better than the current thoughts of someone who holds intuitions I find radically counterintuitive. This would be important to know![20]

I place no constraints on the possible outcomes of moral convergence, whether what is deemed good or bad for a person or a sentient being involves experiences, desire satisfaction, an objective list of things (e.g. friendship, love, exploration, etc.), or something we haven’t yet considered. The important point is that it needs to be a notion of well-being or “good for someone” that is widely compelling, not just as one defensible way of using the words “good for someone,” but as a compelling account of what is best for a person (or for a sentient being). A successful proposal has to give precise answers to questions such as which beings (or computations) matter morally (and how much?), or what the correct stance is on population ethics or aggregation in an infinite universe. For all these questions, the position would have to yield compelling arguments for why to take exactly one particular view as opposed to other plausible views. (This may sound overly demanding, but note that ideal conditions for philosophical reflection means having access to everything one can coherently ask for, including e.g. a well-intentioned, superintelligent oracle AI.)

The main challenge for moral realism in the form of a One Compelling Axiology is overcoming philosophical disagreement between sophisticated reasoners. There are several theories of well-being, and several notions of impartiality, that are internally coherent and highly intuitively appealing to at least some people. These theories are mutually contradictory.[21]

Irreducible normativity

The other way I could become convinced of moral realism in the sense that I mean it is if, inspired by moral non-naturalism, I became convinced that irreducible normativity is a meaningful concept and somehow rationally binding (on some conception of rationality I currently find strange to envision). This would roughly correspond to moral non-naturalism being true. (Some people distinguish moral reasons from prudential reasons, whereas I tend to use the adjective 'moral' in a broader sense that relates to all one’s goals, both altruistic and non-altruistic, and generally to that which matters in one’s life. Since irreducible normativity also covers egoistic goals, it is broader than narrow-sense morality.) The challenges I see for a convincing account of irreducible normativity are threefold:

  1. How to justify the existence of a realm of normative facts separate from the physical
  2. How to reliably gain epistemic access to normative facts, should they indeed exist
  3. Whether irreducible normativity is really a meaningful concept

My next post will focus specifically on irreducible normativity; there I will explain the concept in much more detail (insofar as I can manage to understand others' views on it).

Acknowledgments

Many people helped me with this post, but I want to specifically thank Simon Knutsson for important advice on earlier drafts that greatly improved the direction I went for with this post.

My work on this post was funded by the Center on Long-Term Risk.

Sources

Chalmers, D. (2011). Verbal Disputes. Philosophical Review 120(4):515-566.

Finlay, S. (2007). Four Faces of Moral Realism, Philosophy Compass 2(6):820-849.

Foot, P. (1958). Moral Beliefs. Proceedings of the Aristotelian Society.

Hewitt, S. (2008). Normative Qualia and Robust Moral Realism. PhD thesis. New York University.

Kahane, G. (2013). Must Metaethical Realism Make a Semantic Claim? Journal of Moral Philosophy, 10(2):148-178.

Kant, I. (1986(1785)). Grundlegung zur Metaphysik der Sitten. Stuttgart: Reclam.

Korsgaard, C. (2012). A Kantian Case for Animal Rights. In: Animal Law – Tier und Recht: Developments and Perspectives in the 21st Century, ed. M. Michel, D. Kühne & J. Hänni. Dike: Zürich.

Lillehammer, H. (2004). Moral Realism: A Defense, Notre Dame Philosophical Reviews. https://ndpr.nd.edu/news/moral-realism-a-defense/.

Parfit, D. (2011). On What Matters, Volume II. Oxford: Oxford University Press.

Railton, P. (1986). Moral Realism. The Philosophical Review, 95(2):163-207.

Sayre-McCord, G. (ed.) (1988). Essays on Moral Realism. Ithaca: Cornell University Press.

Scanlon, T. (2012). The Appeal and Limits of Constructivism. In Constructivism in Practical Philosophy, ed. J. Lenman & Y. Shemmer, 226-242. Oxford: Oxford University Press.

Shafer-Landau, R. (2003). Moral Realism: A Defense [Kindle version]. Oxford: Oxford University Press. Retrieved from Amazon.com.

Sinhababu, N. (2010). The Epistemic Argument From Hedonism. Unpublished. (Retrieved: May 2018).

Sinhababu, N. (2018). Ethical Reductionism. Journal of Ethics and Social Philosophy, 13(1):32-52.


Endnotes

[1] A note on terminology: Some people in my online network, particularly on LessWrong, seem to use the term ‘metaethics’ somewhat differently from standard usage. That is, they use ‘metaethics’ to refer to what I would call ‘normative ethics’ (or perhaps the best description would be “figuring out what humans value through philosophy and cognitive science”). Within academic philosophy, metaethics is the study of moral claims: what moral claims do or don’t assert and whether these assertions are sometimes true. The questions of whether e.g. utilitarianism is true, or whether human values are complex, are less likely to come up in a discussion about metaethics. Of course, metaethics is indirectly very relevant to all these questions and informs, for instance, whether inquiries into finding the ‘right’ human values or the ‘right’ version of consequentialism are well-posed questions. And it seems plausible to me that, according to some metaethical views, “figuring out what humans value through philosophy and cognitive science” is indeed how we should be doing normative ethics. ↩︎

[2] According to pragmatism, a brand of philosophy that emphasizes the practical nature of ethics/life/everything and thereby – so one might argue – blurs the distinction between what is the case and what is practically useful, moral claims are ‘true’ not when they describe speaker-independent moral facts, rules or values, but when they result from “correct processes for solving practical problems” (Finlay, 2007). ↩︎

[3] Quoting from the SEP Moral realism entry: "Yet, with the development of (what has come to be called) minimalism about talk of truth and fact, it might seem that this characterization makes being a moral realist easier than it should be. As minimalism would have it, saying that some claim is true is just a way of (re-)asserting the claim and carries no commitment beyond that expressed by the original claim. Thus, if one is willing to claim that “murdering innocent children for fun is wrong” one can comfortably claim as well that “murdering innocent children for fun is wrong” is true without thereby taking on any additional metaphysical baggage." ↩︎

[4] This criticism applies to many instances where philosophers do conceptual analysis. See also Luke Muehlhauser’s post on conceptual analysis and metaethics, or section 6 of this paper by David Chalmers. In short, the problems with using conceptual analysis to establish normative conclusions are threefold: Firstly, there may be no uniquely typical set of intuitions about the ‘correct’ usage of moral terminology. Secondly, ordinary usage may often be underspecified, because most people are not rigorously trained moral philosophers. Thirdly, even if the vast majority of people did use moral terminology a certain way, this would not necessarily mean that they would be using it the most useful or most ‘right’ way (provided the moral realist premise that there is a uniquely right way). As an antidote to approaches anchored in the tradition of conceptual analysis, Chalmers makes the following proposal (“X” refers to concepts such as ‘knowledge,’ ‘moral,’ and ‘science’ that are difficult to define): "On the picture I favor, instead of asking “What is X”, one should focus on the roles one wants X to play, and see what can play that role. The roles in question here may in principle be properties of all sorts: so one focuses on the properties one wants X to have, and figures out what has those properties. But very frequently, they will be causal roles, normative roles, and especially explanatory roles." ↩︎

[5] See also Kahane (2013) for the same point argued for at length. ↩︎

[6] It is important to note that Finlay uses the adjective ‘ontological’ in a weak sense. Derek Parfit (2011) defended a metaethical view he called Non-Ontological Cognitivism: Both mathematical talk and moral talk can be objectively true, but there are no mathematical or moral entities. Note that, according to Finlay’s typology, this would still count as ontological moral realism because Finlay’s definition liberally counts externalist reasons as (abstract) ‘moral entities.’ ↩︎

[7] Constructivism as a metaethical view is different from constructivism as a position in normative ethics. Tim Scanlon (2012), for instance, is a constructivist as regards normative ethics; however, his metaethical position is objectivist moral non-naturalism. Scanlon believes, like Parfit and Shafer-Landau, that there are irreducibly normative reasons about what people ought to do. He further believes that constructivism as an approach to normative ethics helps us determine which reasons are correct. But the question to be answered in the end is which reasons are really correct, rather than which reasons are correctly the output of a well-specified constructive function. ↩︎

[8] Especially the “kingdom of ends” formulation of Kant’s categorical imperative suggests this interpretation. What follows is my own translation from German (Kant, 1986[1785]). Note that in producing this translation, I had to make several substantial judgment calls. "The idea that every rational being is compelled to regard itself as an arbiter of universalizable norms, in order to evaluate itself and its actions from this perspective, leads us to a related and extraordinarily fruitful concept, namely that of a realm of ends. Under a realm of ends, I understand the systematic connection between rational beings through collectively shared norms. Because ends are determined by the universal validity of these norms, it follows that, if one abstracts from the personal differences between rational beings and from the content of their personal ends, then we can think up a systematically connected whole that encompasses all ends (including both the rational beings as ends in themselves and the ends that any rational being may set for itself), i.e., a realm of ends, which we can conceive according to the aforementioned principles."
To be clear, I am not saying that Kantianism is best interpreted as making claims that are related to current discussions of non-causal decision theories. I think there are several aspects of Kantianism that go against this interpretation. I am only saying that there are interesting parallels, and that, if one wants to, one could make a case for such an interpretation (or extension) of Kantianism. ↩︎

[9] Peter Carruthers, for instance, has argued on contractualist grounds against animals' having rights (Carruthers, 1992). There are, however, some constructivists who endorse animals having rights, most notably Christine Korsgaard (2012). ↩︎

[10] Alternatively, someone may have in mind an even more restrictive definition of moral realism – what is sometimes called ‘robust moral realism’ – that only refers to a subtype of objectivism: moral non-naturalism. This definition focuses on whether there are facts that are irreducibly normative. (See the subsection on moral non-naturalism, as well as posts 2 through 4 in this sequence.) ↩︎

[11] Note that some non-naturalists, such as Shafer-Landau, think that moral properties are realizable by many different ‘constellations’ of natural properties. Moral pluralism as a normative view is arguably more attractive for non-naturalists than it is for naturalists because naturalists seem to be committed to a one-to-one relationship between goodness and some other natural property (Shafer-Landau, 2003, pos. 1215). ↩︎

[12] As an analogy: Displaying a particular image on a computer screen is not synonymous with displaying a specific configuration of pixels on the computer screen, because the image – at least when viewed subjectively at a macroscopic level with human-level vision – is realizable via many different pixel configurations. The picture “supervenes” on the pixel configurations. The analogy is imperfect because an image really is nothing more than the sum of its pixels, and a picture that is slightly but perceivably different from another picture is just that, another picture. With non-naturalist morality, going from a constellation of physical facts that does not form a moral category to a constellation that does form a moral category must make for a sharp boundary somehow. However, because we cannot articulate, with reference to physical facts alone, what this sharp boundary signifies, it seems strange or “queer” to think that such a boundary even exists in a meaningful and action-relevant sense. ↩︎

[13] The SEP entry on moral non-naturalism reads: “There may be as much philosophical controversy about how to distinguish naturalism from non-naturalism as there is about which view is correct. [...] Perhaps the most vexing problem for any general characterization of non-naturalism is the bewildering array of ways in which the distinction between natural and non-natural properties has been drawn.” ↩︎

[14] An exception is Cornell Realism, a version of naturalism that holds that moral facts, although natural, cannot be reduced to other natural facts. Cornell realists claim that their position avoids the Open Question Argument that threatens other versions of naturalism. (See also endnote 17.) ↩︎

[15] One might think that the Open Question Argument leaves open the option of moral facts and natural facts merely being coextensional, i.e., that they refer to the same thing but via different routes. However, some philosophers (and myself) believe that if two concepts are coextensional in every possible world, that just means that they are synonymous. ↩︎

[16] Many moral realists reject the procedural account of rationality (which corresponds to the way ‘rationality’ is used on LessWrong: cognitive skills that help to achieve whatever goals one already endorses) in favor of what they call substantive rationality. On the substantive account, being rational may for instance entail having the right dispositions to apprehend or be motivated by externalist reasons for action. The belief in substantive rationality therefore tends to go together with reasons externalism. (And one may argue that this is circular: it defines substantive rationality with respect to externalist reasons, and externalist reasons with respect to substantive rationality.) ↩︎

[17] As a third option, we may accept that normativity is nothing over and above the physical, but that it cannot be defined in terms of the physical easily, any more than a stock market can be defined in terms of the physical. This position describes non-reductionist moral naturalism, a view that is usually associated with Cornell Realism (cf. endnote 14), but not limited to it (see Sinhababu, 2018). However, just as I think there is nothing of relevance that depends on whether we call ourselves “realists” about the stock market or not, I fail to see how such a position would be relevant to our lives if it were true. (I suppose it would be relevant insofar as it may come with the metaphysical baggage of rejecting reductionism in general, which might change how we approach philosophical questions.) ↩︎

[18] It need not be exactly like Railton’s position. Railton endorses some notion of desire fulfillment as what is good for a person. Another version of moral realism, one that I would consider to be “moral realism worthy of the name” in the One Compelling Axiology sense, is moral realism based on the idea that experiences can be intrinsically morally valuable or disvaluable. Proponents of such views (one example being hedonism) believe that phenomenological introspection can tell us about pleasure’s (moral) goodness or pain’s (moral) badness (see, e.g., Hewitt, 2008; Sinhababu, 2010). I will discuss moral realism based on phenomenological introspection in my seventh post in this sequence. ↩︎

[19] By “ideal conditions”, I am envisioning a scenario that is perfectly suited for making progress on questions of philosophy. Imagine a setup that covers everyone’s needs and also provides access to all of the following: the world’s best (and most usefully organized) library or online library;
revived versions of history’s greatest moral philosophers;
contemporary philosophers eager to discuss their issues of expertise;
oracle artificial superintelligence intent on charitably (and passively) helping out by answering any well-posed questions;
life extension (in case one needs more than an ordinary lifetime to properly reflect);
advanced nootropics (so people could think faster or more accurately);
mind-altering technology (to e.g. experience what it is like to have different moral intuitions or experience yet-unknown states of mind);
etc., things in that spirit.
Furthermore, there would be some mechanism in place to gently break up epistemically unhealthy group dynamics (for example, if charismatic people’s influence on others’ opinions was disproportionate). Alternatively, the journey could also be undertaken in solitude. In general, we could imagine a mechanism in place to prevent anything that radically alters the intuitions and goals of our would-be philosophers in ways that are not intended. Needless to say, there is no uniquely correct notion of “ideal conditions for philosophical reflection,” and if different plausible setups lead to radically different results, that would just be an additional way in which moral realism via One Compelling Axiology could fail. ↩︎

[20] Someone may object that it doesn't matter to them what the vast majority of people would conclude after philosophical reflection, because they have their own intuitions about what it means to do good for others, and because moral convergence is not necessarily the same as moral truth. I think this is a legitimate argument in a situation where people are pursuing different questions: If some people associate morality with words like 'excitement,' whereas other people associate it more with 'seriousness,' maybe that just means they are envisioning different things and are answering different questions when trying to systematize their moral intuitions. However, in the One Compelling Axiology scenario, I stipulate that there is agreement about what the question is, and that people who are relevantly similar to oneself with respect to how they approach existential questions also converge on the same answer as everyone else. In that case, it would be weird to consider this fact irrelevant to one's personal thinking about what it means to do good for others. ↩︎

[21] Note that even if there is no One Compelling Axiology, it does not follow that ideal conditions for philosophical reflection would be useless, or that there is no difference between obviously silly views about what matters and views that are plausible or defensible. I think of the difference between well-done and poorly-done moral reasoning as a continuum with different peaks, representing different questions being asked. Rejecting One Compelling Axiology only means that we need to put in more legwork upfront to decide what types of questions we want to answer; it does not mean that everything related to moral-philosophical practice is useless. ↩︎


DPiepgrass @ 2023-09-05T15:22 (+13)

I was about to make a comment elsewhere about moral realism when it occurred to me that I didn't have a strong sense of what people mean by "moral realism", so I whipped out Google and immediately found myself here. Given all those references at the bottom, it seems like you are likely to have correctly described what the field of philosophy commonly thinks of as moral realism, yet I feel like I'm looking at nonsense.

Moral realism is based on the word "real", yet I don't see anything I would describe as "real" (in the territory-vs-map sense) in Philippa Foot or Peter Railton's forms of "realism". Indeed, I found the entire discussion of "moral realism" here to be bewilderingly abstract and platonic, sorely lacking in connection to the physical world. If I didn't know these were supposed to be "moral realist" views, I would've classified them as non-realist with high confidence. Perplexingly absent from the discussion above are the key ideas I would personally have used to ground a discussion on this topic, ideas like "qualia", "the hard problem of consciousness", "ought vs is", "axioms of belief" or, to coin a phrase, "monadal experiencers".

At the same time, you mention in a reply that "anti-realism says there is no such thing" as "one true morality" which is consistent with my intuition of what anti-realism seems like it should mean ― that morality is fundamentally grounded in personal taste. But then, Foot and Railton's accounts also seem grounded in their personal tastes.

I'm no philosopher, just a humble (j/k) rationalist. So I would like to ask how you would classify my own account of "moral realism worthy of the name" as something that must ultimately be grounded in territory rather than map.

I have three ways of describing my system of "moral realism worthy of the name". One is to say that there is some territory that would lead us to an account of morality. This territory is as-yet undiscovered by modern science, but by reductionist analysis we can still say a lot about what this morality looks like (although there will probably be quite a bit of irreducible uncertainty about morality, until science can reveal more about the underlying territory). Another is to say that I have an axiom about qualia ― that monadal experiencers exist, and experience qualia. Finally, I would say that a "moral realism worthy of the name" is also concerned with the problem of deriving ought-statements from is-statements (note: tentatively I think I can define "X should be" as equivalent to "X is good" where "good" is used in its ordinary everyday secular sense, not in an ideological or religious sense.)

Please have a look at this summary of my views on Twitter. My question is, how does this view fit into the philosophy landscape? I mean, what terms from the Encyclopedia of Philosophy would you use to describe it? Is it realist or (paradoxically) anti-realist?

By the way...

I would call myself a moral realist if I could be convinced that there is One Compelling Axiology [....] I would count something as the One Compelling Axiology if all philosophers or philosophically-inclined reasoners, after having engaged in philosophical reflection under ideal conditions,[19] would deem the search for the One Compelling Axiology to be a sufficiently precise, non-ambiguous undertaking for them to have made up their minds rather than “rejected the question,” and if these people would all come to largely the same conclusions. [....] ideal conditions for philosophical reflection means having access to everything [...] including [...] superintelligent oracle AI.

This part reads to me as if you'd been asked "what would change your mind" and you responded "realistically, nothing." But then, my background involves banging my head against the wall with climate dismissives, so I have a visceral understanding that "science advances one funeral at a time" as Max Planck said. So my next thought, more charitably, is "well, maybe Lukas will make his judgement from the perspective of an imagined future where all necessary funerals have already taken place." Separately, I note that my conception of "realism" requires nothing like this, it just requires a foundation that is real, even if we don't understand it well.

Lukas_Gloor @ 2023-09-09T12:29 (+3)

Moral realism is based on the word "real", yet I don't see anything I would describe as "real" (in the territory-vs-map sense) in Philippa Foot or Peter Railton's forms of "realism". [...]

At the same time, you mention in a reply that "anti-realism says there is no such thing" as "one true morality" which is consistent with my intuition of what anti-realism seems like it should mean ― that morality is fundamentally grounded in personal taste. But then, Foot and Railton's accounts also seem grounded in their personal tastes.

Yeah, that's why I also point out that I don't consider Foot's or Railton's accounts worthy of the name "moral realism," even though they've been introduced and discussed that way.

So I would like to ask how you would classify my own account of "moral realism worthy of the name" as something that must ultimately be grounded in territory rather than map.

I think it's surprisingly difficult to spell out what it would mean for morality to be grounded in the territory. My "One Compelling Axiology" version of moral realism constitutes my best effort at operationalizing it: if morality is grounded in the territory, that should lead ideal reasoners to agree on the exact nature and shape of morality.

At this point of the argument, philosophers of a particular school tend to object and say something like the following: 

"It's not about what human reasoners think or whether there's convergence of their moral views as they become more sophisticated and better studied. Instead, it's about what's actually true! It could be that there's a true morality, but all human reasoners (even the best ones) are wrong about it."

But that sort of argument begs the question. What does it mean for something to be true if we could all be wrong about it even under ideal reasoning conditions? That's the part I don't understand. So, when I steelman moral realism, I assume that we're actually in a position to find out the moral truth. (At least that this is possible in theory, under the best imaginable circumstances.)

There's an endnote in a later post in my series that's quite relevant to this discussion. The post is Moral uncertainty and moral realism are in tension, and I'll quote the endnote here: 

Someone could object that convergence arguments [convergence arguments are a type of argument in favor of moral realism; they say that moral realism is true if sophisticated reasoners tend to converge in their moral views as they approach ideal reasoning conditions] are never strong enough to establish moral realism with high confidence. Firstly (1), what counts as “philosophically sophisticated reasoners” or “idealized reasoning conditions” is under-defined. Arguably, subtle differences to these stipulations could influence whether convergence arguments work out. Secondly (2), even conditional on expert convergence, we couldn’t be sure whether it reflects the existence of a speaker-independent moral reality. Instead, it could mean that our philosophically sophisticated reasoners happen to have the same subjective values. Thirdly (3), what reasoners consider self-evident may change over time. Wouldn’t sophisticated reasoners born in (e.g.) the 17th century disagree with what we consider self-evident today?

Those are forceful objections. If we only applied the most stringent criteria for what counts as “moral realism,” we’d arguably be left with moral non-naturalism (“irreducible normativity”). After all, the only reason some philosophers consider non-naturalism (with its strange metaphysical postulates) palatable is because they find moral naturalism too watered down as an alternative. Still, I would consider convergence among a pre-selected set of expert reasoners both relevant and surprising. Therefore, I’m inclined to consider naturalist moral realism an intelligible hypothesis. I think it’s false, but I could imagine situations where I’d change my mind.

Here are some quick answers to the objections above: (1) We can imagine circumstances where the convergence isn’t sensitive to the specifics; naturalist moral realism is meant to apply at least under those circumstances. (2) Without the concept of “irreducible normativity,” any answers in philosophy will be subjective in some sense of the word (they have to somehow appeal to our reasoning styles). Still, convergence arguments would establish that there are for-us relevant insights at the end of moral reflection, and that the destination is the same for everyone! (3) When I talk about “morality,” I already have in mind some implicit connotations that the concept has to fulfill. Specifically, I consider it an essential ingredient to morality to take an “impartial stance” of some sort. To the degree that past reasoners didn’t do this, I’d argue that they were answering a different question. (When I investigate whether moral realism is true, I’m not interested in whether everyone who ever used the word “morality” was talking about the exact same thing!)

Among past philosophers who saw morality as impartial altruism, we actually find a surprising degree of moral foresight. Jeremy Bentham’s Wikipedia article reads as follows: “He advocated individual and economic freedoms, the separation of church and state, freedom of expression, equal rights for women, the right to divorce, and (in an unpublished essay) the decriminalising of homosexual acts. He called for the abolition of slavery, capital punishment and physical punishment, including that of children. He has also become known as an early advocate of animal rights.” To get a sense for the clarity and moral thrust of Bentham’s reasoning, see also this now-famous quote: “The day may come when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny. The French have already discovered that the blackness of the skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may one day come to be recognised that the number of the legs, the villosity of the skin, or the termination of the os sacrum, are reasons equally insufficient for abandoning a sensitive being to the same fate. What else is it that should trace the insuperable line? Is it the faculty of reason, or perhaps the faculty of discourse? But a fullgrown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old. But suppose they were otherwise, what would it avail? The question is not, Can they reason? nor Can they talk? but, Can they suffer?”

In the above endnote, I try to defend why I think my description of the One Compelling Axiology version of moral realism is a good steelman, despite some moral realists not liking it because I don't allow for the possibility that moral reality is forever unknowable to even the best human reasoners under ideal reasoning conditions.

This part reads to me as if you'd been asked "what would change your mind" and you responded "realistically, nothing." But then, my background involves banging my head against the wall with climate dismissives, so I have a visceral understanding that "science advances one funeral at a time" as Max Planck said. So my next thought, more charitably, is "well, maybe Lukas will make his judgement from the perspective of an imagined future where all necessary funerals have already taken place."

Definitely! I'm assuming "ideal reasoning conditions" – a super high bar, totally unrealistic in reality. For the sort of thing I'm envisioning, see my post, The Moral Uncertainty Rabbit Hole, Fully Excavated. Here's a quote from the section on "reflection procedures": 

Here’s one example of a reflection environment:

  • My favorite thinking environment: Imagine a comfortable environment tailored for creative intellectual pursuits (e.g., a Google campus or a cozy mansion on a scenic lake in the forest). At your disposal, you find a well-intentioned, superintelligent AI advisor fluent in various schools of philosophy and programmed to advise in a value-neutral fashion. (Insofar as that’s possible – since one cannot do philosophy without a specific methodology, the advisor must already endorse certain metaphilosophical commitments.) Besides answering questions, they can help set up experiments in virtual reality, such as ones with emulations of your brain or with modeled copies of your younger self. For instance, you can design experiments for learning what you'd value if you first encountered the EA community in San Francisco rather than in Oxford or started reading Derek Parfit or Peter Singer after the blog Lesswrong, instead of the other way around.[2] You can simulate conversations with select people (e.g., famous historical figures or contemporary philosophers). You can study how other people’s reflection concludes and how their moral views depend on their life circumstances. In the virtual-reality environment, you can augment your copy’s cognition or alter its perceptions to have it experience new types of emotions. You can test yourself for biases by simulating life as someone born with another gender(-orientation), ethnicity, or into a family with a different socioeconomic status. At the end of an experiment, your (near-)copies can produce write-ups of their insights, giving you inputs for your final moral deliberations. You can hand over authority about choosing your values to one of the simulated (near-)copies (if you trust the experimental setup and consider it too difficult to convey particular insights or experiences via text). Eventually, the person with the designated authority has to provide to your AI assistant a precise specification of values (the format – e.g., whether it’s a utility function or something else – is up to you to decide on). Those values then serve as your idealized values after moral reflection.

(Two other, more rigorously specified reflection procedures are indirect normativity and HCH.[3] Indirect normativity outputs a utility function whereas HCH attempts to formalize “idealized judgment,” which we could then consult for all kinds of tasks or situations.)[4]

“My favorite thinking environment” leaves you in charge as much as possible while providing flexible assistance. Any other structure is for you to specify: you decide the reflection strategy.[5] This includes what questions to ask the AI assistant, what experiments to do (if any), and when to conclude the reflection.

Part of the point of that quote is that there's some subjectivity about how to set up "ideal reasoning conditions" – but we can still agree that, for practical purposes, something like the above constitutes better reasoning conditions than what we have available today. And if the best reasoners in EA (for example), or in some other context where people start out with good epistemics, all tended to converge after that sort of reflection, I'd consider that strong evidence for (naturalist) moral realism, the way I prefer to define it. (But some philosophers would reject this steelman of moral realism and say that my position is always, by definition, anti-realism, no matter what we might discover in the future about expert convergence, because they only want to reason about morality with "irreducible" concepts, i.e., non-naturalist moral realism.)

Please have a look at this summary of my views on Twitter. My question is, how does this view fit into the philosophy landscape?

Definitely seems realist. I'm always a bit split on whether people who place a lot of weight on qualia in their justification for moral realism are non-naturalists or naturalists. They often embrace moral naturalism themselves, but there's a sense in which I think that's an illegitimate attempt to have their cake and eat it too. See this comment discussion below my post on hedonist moral realism and qualia-inspired views.

 I haven't found the time to look through your summary of views on Twitter in much detail, but my suspicion is that you'd run into difficulties defining what it means for morality to be real/part of the territory and also have that be defined independently of "whatever causes experts to converge their opinions under ideal reasoning conditions." 

DPiepgrass @ 2023-09-19T17:16 (+1)

my suspicion is that you'd run into difficulties defining what it means for morality to be real/part of the territory and also have that be defined independently of "whatever causes experts to converge their opinions under ideal reasoning conditions." 

In the absence of new scientific discoveries about the territory, I'm not sure whether experts (even "ideal" ones) should converge, given that an absence of evidence tends to allow room for personal taste. For example, can we converge on the morality of abortion, or of factory farms, without understanding what, in the territory, leads to the moral value of persons and animals? I think we can agree that less factory farming, less meat consumption and fewer abortions are better all else being equal, but in reality we face tradeoffs ― potentially less enjoyable meals (luckily there's Beyond Meat); children raised by poor single moms who didn't want children.

I don't even see how we can conclude that higher populations are better, as EAs often do, for (i) how do we detect what standard of living is better than non-existence, or how much suffering is worse than non-existence, (ii) how do we rule out the possibility that the number of beings does not scale linearly with the number of monadal experiencers, and (iii) we need to balance the presumed goodness of higher population against a higher catastrophic risk of exceeding Earth's carrying capacity, and (iv) I don't see how to rule out that things other than valence (of experiences) are morally (terminally) important. Plus, how to value the future is puzzling to me, appealing as longtermism's linear valuation is.

So while I'm a moral realist, (i) I don't presume to know what the moral reality actually is, (ii) my moral judgements tend to be provisionary and (iii) I don't expect to agree on everything with a hypothetical clone of myself who starts from the same two axioms as me (though I expect we'd get along well and agree on many key points). But what everybody in my school of thought should agree on is that scientific approaches to the Hard Problem of Consciousness are important, because we can probably act morally better after it is solved. I think even some approaches that are generally considered morally unacceptable by society today are worth consideration, e.g. destructive experiments on the brains of terminally ill patients who (of course) gave their consent for these experiments. (it doesn't make sense to do such experiments today though: before experiments take place, plausible hypotheses must be developed that could be falsified by experiment, and presumably any possible useful nondestructive experiments should be done first.)

[Addendum:]

I'm always a bit split on whether people who place a lot of weight on qualia in their justification for moral realism are non-naturalists or naturalists.

Why? At the link you said "I'd think she's saying that pleasure has a property that we recognize as "what we should value" in a way that somehow is still a naturalist concept. I don't understand that bit." But by the same token ― if I assume Hewitt talking about "pleasure" is essentially the same thing as me talking about "valence" ― I don't understand why you seem to think it's "illegitimate" to suppose valence exists in the territory, or what you think is there instead.

Lukas_Gloor @ 2023-09-19T18:38 (+2)

So while I'm a moral realist, (i) I don't presume to know what the moral reality actually is

If you don't think you know what the moral reality is, why are you confident that there is one?

I discuss possible answers to this question here and explain why I find all of them unsatisfying.

The only realism-compatible position I find somewhat defensible is something like "It may turn out that morality isn't a crisp concept in thingspace that gives us answers to all the contested questions (population ethics, comparing human lives to other sentient beings, preferences vs hedonism, etc), but we don't know yet. It may also turn out that as we learn more about the various options and as more facts about human minds and motivation and so on come to light, there will be a theory that 'stands out' as the obvious way of going about altruism/making the world better. Therefore, I'm not yet willing to call myself a confident moral anti-realist."

That said, I give some arguments in my sequence why we shouldn't expect any theory to 'stand out' like that. I believe these questions will remain difficult forever and competent reasoners will often disagree on their respective favorite answers.

Why? At the link you said "I'd think she's saying that pleasure has a property that we recognize as "what we should value" in a way that somehow is still a naturalist concept. I don't understand that bit." But by the same token ― if I assume Hewitt talking about "pleasure" is essentially the same thing as me talking about "valence" ― I don't understand why you seem to think it's "illegitimate" to suppose valence exists in the territory, or what you think is there instead.

This goes back to the same disagreement we're discussing, the one about expert consensus or lack thereof. The naturalist version of "value is a part of the territory" would be that when we introspect about our motivation and the nature of pleasure and so on, we'll agree that pleasure is what's valuable. However, empirically, many people don't conclude this; they aren't hedonists. (As I defend in the post, I think they aren't thereby making any sort of mistake. For instance, it's simply false that non-hedonist philosophers would categorically be worse at constructing thought experiments to isolate confounding variables for assessing whether we value things other than pleasure only instrumentally. I could totally pass the Ideological Turing test for why some people are hedonists. I just don't find the view compelling myself.) 

At this point, hedonists could either concede that there's no sense in which hedonism is true for everyone – because not everyone agrees. 

Or they can say something like "Well, it may not seem to you that you're making a mistake of reasoning, but pleasure has this property that it is GOOD in a normative sense irreducible to any of your other dispositions, and you're missing that, so you ARE making a mistake about normativity, even if you say you don't care." 

And then we're back to "How do they know this?" and "What's the point of 'normativity' if it's disconnected from what I (on reflection) want/what motivates me?" Etc. It's the same disagreement again. The reason I believe Hewitt and others want to have their cake and eat it too is that they want to simultaneously (1) downplay the relevance of empirical information about whether sophisticated reasoners find hedonism compelling while (2) still claiming that hedonism is correct in some direct, empirical sense, which makes it "part of the territory." The tension here is that claiming that "hedonism is correct in some direct, empirical sense" would predict expert convergence.

DPiepgrass @ 2023-09-19T20:36 (+1)

If you don't think you know what the moral reality is, why are you confident that there is one?

I am confident that if there is no territory relevant to morality, then illusionism is true and (paradoxically) it doesn't matter what our maps contain because the brains that contain the maps do not correlate with any experiences in base reality. I therefore ignore illusionism and proceed with the assumption that there is something real, that it is linked to brains and correlates positively with mental experience, that it is scientifically discoverable, and that prior to such a discovery we can derive reasonable models of morality grounded in our current body of scientific/empirical information.

The naturalist version of "value is a part of the territory" would be that when we introspect about our motivation and the nature of pleasure and so on, we'll agree that pleasure is what's valuable.

I don't see why "introspecting on our motivation and the nature of pleasure and so on" should be what "naturalism" means, or why a moral value discovered that way necessarily corresponds with the territory. I expect morally-relevant territory to have similarities to other things in physics: to be somehow simple, to have existed long before humans did, and to somehow interact with humans. By the way, I prefer to say "positive valence" over "pleasure" because laymen would misunderstand the latter.

At this point, hedonists could either concede that there's no sense in which hedonism is true for everyone – because not everyone agrees. 

I don't concede because people having incorrect maps is expected and tells me little about the territory.

Or they can say something like "Well, it may not seem to you that you're making a mistake of reasoning, but pleasure has this property that it is GOOD in a normative sense irreducible to any of your other dispositions

I'm not sure what these other dispositions are, but I'm thinking on a level below normativity. I say positive valence is good because, at a level of fundamental physics, it is the best candidate I am aware of for what could be (terminally) good. If you propose that "knowledge is terminally good", for example, I wouldn't dismiss it entirely, but I don't see how human-level knowledge would have a physics-level meaning. It does seem like something related to knowledge, namely comprehension, is part of consciousness, so maybe comprehension is terminally good, but if I could only pick one, it seems to me that valence is a better candidate because "obviously" pleasure+bafflement > torture+comprehension. (fwiw I am thinking that the human sense of comprehension differs from genuine comprehension, and both might even differ from physics-level comprehension if it exists. If a philosopher terminally values the second, I'd call that valuation nonrealist.)

claiming that "hedonism is correct in some direct, empirical sense" would predict expert convergence.

🤷‍♂️ Why? When you say "expert", do you mean "moral realist"? But then, which kind of moral realist? Obviously I'm not in the Foot or Railton camp ― in my camp, moral uncertainty follows readily from my axioms, since they tell me there is something morally real, but not what it is.

Edit: It would certainly be interesting if other people start from similar axioms to mine but diverge in their moral opinions. Please let me know if you know of philosopher(s) who start from similar axioms.

Lukas_Gloor @ 2023-09-19T22:45 (+3)

I don't concede because people having incorrect maps is expected and tells me little about the territory.

I'm clearly talking about expert convergence under ideal reasoning conditions, as discussed earlier. Weird that this wasn't apparent. In physics or any other scientific domain, there's no question whether experts would eventually converge if they had ideal reasoning conditions. That's what makes these domains scientifically valid (i.e., they study "real things"). Why is morality different? (No need to reply; it feels like we're talking in circles.)

FWIW, I think it's probably consistent to have a position that includes (1) a wager for moral realism ("if it's not true, then nothing matters" – your wager is about the importance of qualia, but I've also seen similar reasoning around normativity as the bedrock, or free will), and (2) a simplicity/"lack of plausible alternatives" argument for hedonism. This sort of argument for hedonism only works if you take realism for granted, but that's where the wager comes in handy. (Still, one could argue that tranquilism is 'simpler' than hedonism and therefore more likely to be the one true morality, but okay.) Note that this combination of views isn't quite "being confident in moral realism," though. It's only "confidence in acting as though moral realism is true."

I talk about wagering on moral realism in this dialogue and the preceding post. In short, it seems fanatical to me if taken to its conclusions, and I don't believe that many people really believe this stuff deep down without any doubt whatsoever. Like, if push comes to shove, do you really have more confidence in your understanding of illusionism vs other views in philosophy of mind, or do you have more confidence in wanting to reduce the thing that Brian Tomasik calls suffering, when you see it in front of you (regardless of whether illusionism turns out to be true)? (Of course, far be it from me to discourage people from taking weird ideas seriously; I'm an EA, after all. I'm just saying that it's worth reflection if you really buy into that wager wholeheartedly, or if you have some meta uncertainty.)

I also talk a bit about consciousness realism in endnote 18 of my post "Why Realists and Anti-Realists Disagree." I want to flag that I personally don't understand why consciousness realism would necessarily imply moral realism. I guess I can see that it gets you closer to it, but I think there's more to argue for even with consciousness realism.

In any case, I think illusionism is being strawmanned in that debate. Illusionists aren't denying anything worth wanting; they're only denying something that never made sense in the first place. It's the same move compatibilists make in the free will debate: you never wanted "true free will," whatever that is. Just like one can be mistaken about one's visual field having lots of detail even at the edges, or how some people with a brain condition can be mistaken about seeing stuff when they have blindsight, illusionists claim that people can be mistaken about some of the properties they ascribe to consciousness. They're not mistaken about a non-technical interpretation of "it feels like something to be me," because that's just how we describe the fact that there's something both illusionists and qualia realists are debating. However, illusionists claim that qualia realists are mistaken about a philosophically loaded interpretation of "it feels like something to be me," where the hidden assumption is something like "feeling like something is a property that is either on or off for something, and there's always a fact of the matter." See the dialogue in endnote 18 of that post on why this isn't correct (or at least why we cannot infer it from our experience of consciousness). (This debate is, by the way, very similar to the moral realism vs. anti-realism debate. There's a sense in which anti-realists aren't denying that "torture is wrong" in a loose and not-too-philosophically-loaded sense. They're just denying that, based on "torture is wrong," we can infer that there's a fact of the matter about all courses of action – whether they're right or wrong.)

Basically, the point I'm trying to make here is that illusionists aren't disagreeing with you if you say you're conscious. They're only disagreeing when, based on introspecting about your consciousness, you claim to know that an omniscient being could tell about every animal/thing/system/process whether it's conscious or not – that there must be a fact of the matter. But just because it feels to you like there's a fact of the matter doesn't mean that there aren't myriads of edge cases where we (or experts under ideal reasoning conditions) can't draw crisp boundaries about what may or may not be 'conscious.' That's why illusionists like Brian Tomasik end up saying that consciousness is about what kind of algorithms you care about.

undefined @ 2018-05-22T17:23 (+8)

What do you think are the implications of moral anti-realism for choosing altruistic activities?

Why should we care whether or not moral realism is true?

(I would understand if you were to say this line of questions is more relevant to a later post in your series.)

undefined @ 2018-05-23T13:13 (+8)

Why should we care whether or not moral realism is true?

I plan to address this more in a future post, but the short answer is that for some ways in which moral realism has been defined, it really doesn't matter (much). But there are some versions of moral realism that would "change the game" for people who currently reject them. And vice versa: if one currently endorses a view that corresponds to the two versions of "strong moral realism" described in the last section of my post, one's priorities could change noticeably if one changes one's mind towards anti-realism.

What do you think are the implications of moral anti-realism for choosing altruistic activities?

It's hard to summarize this succinctly because for most of the things that are straightforwardly important under moral realism (such as moral uncertainty or deferring judgment to future people who are more knowledgeable about morality), you can also make good arguments in favor of them going from anti-realist premises. Some quick thoughts:

– The main difference is that things become more "messy" with anti-realism.

– I think anti-realists should, all else equal, be more reluctant to engage in "bullet biting" where you abandon some of your moral intuitions in favor of making your moral view "simpler" or "more elegant." The appeal of simplicity/elegance is that if your view has many parameters fine-tuned to your personal intuitions, it seems extremely unlikely that other people would arrive at the same parameters just by thinking about morality more. Moral realists may think that the correct answer to morality is one that everyone who is knowledgeable enough would endorse, whereas anti-realists may consider this a potentially impossible demand and therefore place more weight on finding something that feels very intuitively compelling on the individual level. Having said that, I think there are a number of arguments why even an anti-realist might want to adopt moral views that are "simple and elegant." For instance, people may care about doing something meaningful that is "greater than their own petty little intuitions" – I think this is an intuition that we can try to cash out somehow even if moral realism turns out to be false (it's just that it can be cashed out in different ways).

– "Moral uncertainty" works differently under anti-realism, because you have to say what you are uncertain about (it cannot be the one true morality because anti-realism says there is no such thing). One can be uncertain about what one would value after moral reflection under ideal conditions. This kind of "valuing moral reflection" seems like a very useful anti-realist alternative to moral uncertainty. The difference is that "valuing reflection" may be underdefined, so anti-realists have to think about how to distinguish having underdefined values from being uncertain about their values. This part can get tricky.

– There was recently a discussion about "goal drift" on the EA Forum. I think it's a bigger problem under anti-realism, all else equal (unless one's anti-realist moral view is egoism-related). But again, there are considerations that go in both directions. :)

undefined @ 2018-05-22T17:23 (+1)

One thought is that if morality is not real, then we would not have reasons to do altruistic things. However, I often encounter anti-realists making arguments about which causes we should prioritize, and why. A worry about that is that if morality boils down to mere preference, then it is unclear why a different person should agree with the anti-realist's preference.

undefined @ 2018-05-22T17:35 (+5)

So you know who's asking, I happen to consider myself a realist, but closest to the intersubjectivism you've delineated above. The idea is that morality is the set of rules that impartial, rational people would advocate as a public system. Rationality is understood, roughly speaking, as the set of things that virtually all rational agents would be averse to. This ends up being a list of basic harms--things like pain, death, disability, injury, loss of freedom, loss of pleasure. There's not much more objective or "facty" about rationality than the fact that basically all vertebrates are disposed to be averse to those things, and it's rather puzzling for someone not to be. People can be incorrect about whether a thing is harmful, just as they can be incorrect about whether a flower is red. But there's nothing much more objective or "facty" about whether the plant is red than that ordinary human language users on earth are disposed to see and label it as red.

I don't know whether or not you'd label that as objectivism about color or about rationality/harm. But I'd classify it as a weak form of realism and objectivism because people can be incorrect, and those who are not reliably disposed to identify cases correctly would be considered blind to color or to harm.

These things I'm saying are influenced by Joshua Gert, who holds very similar views. You may enjoy his work, including his Normative Bedrock (2012) or Brute Rationality (2004). He is in turn influenced by his late father Bernard Gert, whose normative ethical theory Josh's metaethics work complements.

undefined @ 2018-05-22T22:49 (+1)

The idea is that morality is the set of rules that impartial, rational people would advocate as a public system.

Yes, this sounds like constructivism. I think this is definitely a useful framework for thinking about some moral/morality-related questions. I don't think all of moral discourse is best construed as being about this type of hypothetical rule-making, but like I say in the post, I don't think interpreting moral discourse should be the primary focus.

Rationality is understood, roughly speaking, as the set of things that virtually all rational agents would be averse to. This ends up being a list of basic harms--things like pain, death, disability, injury, loss of freedom, loss of pleasure.

Hm, this sounds like you're talking about a substantive concept of rationality, as opposed to a merely "procedural" or "instrumental" concept of rationality (such as is common on LessWrong and with anti-realist philosophers like Bernard Williams). Substantive concepts of rationality always go under moral non-naturalism, I think.

My post is a little confusing with respect to the distinction here, because you can be a constructivist in two different ways: Primarily as an intersubjectivist metaethical position, and "secondarily" as a form of non-naturalism. (See my comments on Thomas Sittler's chart.)

People can be incorrect about whether a thing is harmful, just as they can be incorrect about whether a flower is red. But there's nothing much more objective or "facty" about whether the plant is red than that ordinary human language users on earth are disposed to see and label it as red.

Yeah, it should be noted that "strong" versions of moral realism are not committed to silly views such as morality existing in some kind of supernatural realm. I often find it difficult to explain moral non-naturalism in a way that makes it sound as non-weird as when actual moral non-naturalists write about it, so I have to be careful to not strawman these positions. But what you describe may still qualify as "strong" because you're talking about rationality as a substantive concept. (Classifying something as a "harm" is one thing if done in a descriptive sense, but probably you're talking about classifying things as a harm in a sense that has moral connotations – and that gets into more controversial territory.)

The book title "normative bedrock" also sounds relevant because my next post will talk about "bedrock concepts" (Chalmers) at length, and specifically about "irreducible normativity" as a bedrock concept, which I think makes up the core of moral non-naturalism.

undefined @ 2018-05-23T05:55 (+4)

Thanks for your engaging insights!

this sounds like you're talking about a substantive concept of rationality

Yes indeed!

Substantive concepts of rationality always go under moral non-naturalism, I think.

I'm unclear on why you say this. It certainly depends on how exactly 'non-naturalism' is defined.

One contrast of the Gert-inspired view I've described and that of some objectivists about reasons or substantive rationality (e.g. Parfit) is that the latter tend to talk about reasons as brute normative facts. Sometimes it seems they have no story to tell about why those facts are what they are. But the view I've described does have a story to tell. The story is that we had a certain robust agreement in response toward harms (aversion to harms and puzzlement toward those who lack the aversion). Then, as we developed language, we developed terms to refer to the things that tend to elicit these responses.

Is that potentially the subject of the 'natural' sciences? It depends: it seems to be the subject not of physical sciences but of psychological and linguistic sciences. So it depends whether psychology and linguistics are 'natural' sciences. Does this view hold that facts about substantive rationality are not identical with or reducible to any natural properties? It depends on whether facts about death, pain, injury, and dispositions are reducible to natural properties.

It's not clear to me that the natural/non-natural distinction applies all that cleanly to the Gert-inspired view I've delineated. At least not without considerably clarifying both the natural/non-natural distinction and the Gert-inspired view.

you can be a constructivist in two different ways: Primarily as an intersubjectivist metaethical position, and "secondarily" as a form of non-naturalism.

This seems like a really interesting point, but I'm still a little unclear on it.

Rambling a bit

It's helpful to me that you've pointed out that my Gert-inspired view has an objectivist element at the 'normative bedrock' level (some form of realism about harms & rationality) and a constructivist element at the level of choosing first-order moral rules ('what would impartial, rational people advocate in a public system?').

A question that I find challenging is, 'Why should I care about, or act on, what impartial, rational people would advocate in a public system?' (Why shouldn't I just care about harms to, say, myself and a few close friends?) Constructivist answers to that question seem inadequate to me. So it seems we are forced to choose between two unsatisfying answers. On the one hand, we might choose a minimally satisfying realism that asserts that it's a brute fact that we should care about people and apply moral rules to them impartially; it's a brute fact that we 'just see'. On the other hand, we might choose a minimally satisfying anti-realism that asserts that caring about or acting on morality is not actually something we should do; the moral rules are what they are and we can choose to follow them if our heart is in it, but there's not much more to it than hypotheticals.

MichaelA @ 2020-06-13T09:57 (+4)

Thanks for this post - this topic seems quite important to me, and I think this post has reduced my confusion and sharpened my thinking. I look forward to reading the later posts in the sequence.

However, not committing to any specific perspective calls into question whether there even is, in theory, a correct answer. If there are many different and roughly equally plausible interpretations of “impartial perspective” or “desire fulfillment” (or more generally: of well-being defined as “that which is good for a person”), then the question, “Which of these different accounts is correct?” may not have an answer.

I found this argument confusing. Wouldn't it be acceptable, and probably what we'd expect, for a metaethical view to not also provide answers on normative ethics or axiology? It seems that finding out there are "speaker-independent moral facts, rules or values" would be quite important, even if we don't yet know what those facts are. And it doesn't seem that not yet knowing those facts should be taken as strong evidence that those facts don't exist? Perhaps the different interpretations are equally plausible at the moment, but as we learn and debate more we will come to see some interpretations as more plausible?

Analogously, you and I could agree that there is an objectively correct and best answer to the question "What percentage of Americans are allergic to bees?", despite not yet knowing what that answer is. And then we could look it up. Whereas if we believed there wasn't an objectively correct and best answer, we might decide our current feelings about that question are the best thing we'd get, and we might not bother looking it up. And it doesn't seem like we should take "we don't yet know the percentage" as strong evidence that there is no correct percentage.

Is there a reason that analogy doesn't hold in the case of moral realism vs anti-realism? Or am I misunderstanding the paragraph I quoted above?

(To be clear, I'm not trying to imply that the case for moral anti-realism is as weak as the case for allergy-percentage anti-realism. Moral anti-realism seems quite plausible to me. I'm just trying to understand the particular argument I quoted above.)

Lukas_Gloor @ 2020-06-14T11:10 (+2)

I found this argument confusing. Wouldn't it be acceptable, and probably what we'd expect, for a metaethical view to not also provide answers on normative ethics or axiology?

I'm not saying metaethical views have to advance a particular normative-ethical theory. I'm just saying that if a realist metaethical view doesn't do this, it becomes difficult to explain how proponents of this view could possibly know that there really is "a single correct theory."

So for instance, looking at the arguments by Peter Railton, it's not clear to me whether Railton even expects there to be a single correct moral theory. His arguments leave morality under-defined. "Moral realism" is commonly associated with the view that there's a single correct moral theory. Railton has done little to establish this, so I think it's questionable whether to call this view "moral realism."

Of course, "moral realism" is just a label. It matters much more that we have clarity about what we're discussing, instead of which label we pick. If someone wants to use the term "moral realism" for moral views that are explicitly under-defined (i.e., views according to which many moral questions don't have an answer), that's fine. In that sense, I would be a "realist."

It seems that finding out there are "speaker-independent moral facts, rules or values" would be quite important, even if we don't yet know what those facts are.

One would think so, but as I said, it depends on what we mean exactly by "speaker-independent moral facts." On some interpretations, those facts may be forever unknowable. In that case, knowledge that those facts exist would be pointless in practice.

I write more about this in my third post, so maybe the points will make more sense with the context there. But really, the main point of this first post is to propose being cautious with the label "moral realism" because, in my view, some versions of it don't seem to have action-guiding implications for how to go about effective altruism.

(I mean, if I had started out convinced of moral relativism, then sure, "moral realism" in Peter Railton's sense would change my views in very action-guiding ways. But moral relativists are rare. I feel like one should draw the realism vs. anti-realism distinction in a place where it isn't obvious that one side is completely wrong. If we draw the distinction in such a way that Peter Railton's view qualifies as "moral realism," then it would be rather trivial that anti-realism was wrong. This would seem uncharitable to all the anti-realist philosophers who have done important work on normative ethics.)

undefined @ 2018-06-04T12:29 (+4)

Thanks for writing this, Lukas. :-)

As a self-identified moral realist, I did not find my own view represented in this post, although perhaps Railton’s naturalist position is the one that comes closest. I can identify as an objectivist, a constructivist, and a subjectivist, indeed even a Randian objectivist. It all rests on what the nature of the ill-specified “subject” in question is. If one is an open individualist, then subjectivism and objectivism will, one can argue, collapse into one. According to open individualism, the adoption of Randianism (or, in Sidgwick’s terminology, “rational egoism”) implies that we should do what is best for all sentient beings. In other words, subjectivism without indefensibly demarcated subjects (or at least subjects whose demarcation is not granted unjustifiable metaphysical significance) is equivalent to objectivism. Or so I would argue.

As for Moore’s open question argument (which I realize was not explored in much depth here), it seems to me, as has been pointed out by others, that there can be an ontological identity between that which different words refer to even if these words are not commonly reckoned strictly synonymous. For example: Is water the same as H2O? Is the brain the mind? These questions are hardly meaningless, even if we think the answer to both questions is 'yes'. Beyond that, one can also defend the view that “the good” is a larger set of which any specific good thing we can point to is merely a subset, and hence the question can also make sense in this way (i.e. it becomes a matter of whether something is part of “the good”).

To turn the tables a bit here, I would say that to reject moral realism, on my account, one would need to say that there is no genuine normative force or property in, say, a state of extreme suffering (consider being fried in a brazen bull for concreteness). [And I think one can fairly argue that to say such a state has “genuine normative force” is very much an understatement.]

“Normative force for the experiencing subject or for all agents?” one may then ask. Yet on my account of personal identity, the open individualist account (cf. https://en.wikipedia.org/wiki/Open_individualism and https://www.smashwords.com/books/view/719903), there is no fundamental distinction, and thus my answer would simply be: yes, for the experiencing subject, and hence for all agents (this is where our intuitions scream, of course, unless we are willing to suspend our strong, Darwinianly adaptive sense of self as some entity that rides around in some small part of physical reality). One may then object that different agents occupy genuinely different coordinates in spacetime, yet the same can be said of what we usually consider the same agent. So there is really no fundamental difference here: If we say that it is genuinely normative for Tim at t1 (or simply Tim1) to ensure that Tim at t2 (or simply Tim2) suffers less, then why wouldn’t the same be true of Tim1 with respect to John1, 2, 3…?

With respect to the One Compelling Axiology you mention, Lukas, I am not sure why you would set the bar so high in terms of specificity in order to accept a realist view. I mean, if “all philosophers or philosophically-inclined reasoners” found plausible a simple, yet inexhaustive principle like “reduce unnecessary suffering” why would that not be good enough to demonstrate its "realism" (on your account) when a more specific one would? It is unclear to me why greater specificity should be important, especially since even such an unspecific principle still would have plenty of practical relevance (many people can admit that they are not living in accordance with this principle, even as they do accept it).

undefined @ 2018-06-05T18:27 (+1)

To turn the tables a bit here, I would say that to reject moral realism, on my account, one would need to say that there is no genuine normative force or property in, say, a state of extreme suffering (consider being fried in a brazen bull for concreteness).

Cool! I think the closest I'll come to discussing this view is in footnote 18. I plan to have a post on moral realism via introspection about the intrinsic goodness (or badness) of certain conscious states.

I agree with reductionism about personal identity, and I also find this to be one of the most persuasive arguments in favor of altruistic life goals. I would not call myself an open individualist, though, because I'm not sure what exactly the position is saying. For instance, I don't understand how it differs from empty individualism. I'd understand if these are different framings or different metaphors, but if we assume that we're talking about positions that can be true or false, I don't understand what we're arguing about when asking whether open individualism is true, or when discussing open vs. empty individualism.
Also, I think it's perfectly coherent to have egoistic goals even under a reductionist view of personal identity. (It just turns out that egoism is not a well-defined concept either, and one has to make some judgment calls if one ever expects to encounter edge-cases for which our intuitions give no obvious answers about whether something is still "me.")

With respect to the One Compelling Axiology you mention, Lukas, I am not sure why you would set the bar so high in terms of specificity in order to accept a realist view. I mean, if “all philosophers or philosophically-inclined reasoners” found plausible a simple, yet inexhaustive principle like “reduce unnecessary suffering” why would that not be good enough to demonstrate its "realism" (on your account) when a more specific one would? It is unclear to me why greater specificity should be important, especially since even such an unspecific principle still would have plenty of practical relevance (many people can admit that they are not living in accordance with this principle, even as they do accept it).

Yeah, fair point. I mean, even Railton's own view has plenty of practical relevance in the sense that it highlights that certain societal arrangements lead to more overall well-being or life satisfaction than others. (That's also a point that Sam Harris makes.) But if that's all we mean by "moral realism" then it would be rather trivial. Maybe my criteria are a bit too strict, and I would indeed already regard it as extremely surprising if you get something like One Compelling Axiology that agrees on population ethics while leaving a few other things underdetermined.

undefined @ 2018-06-06T08:05 (+1)

Thanks for your reply :-)

For instance, I don't understand how [open individualism] differs from empty individualism. I'd understand if these are different framings or different metaphors, but if we assume that we're talking about positions that can be true or false, I don't understand what we're arguing about when asking whether open individualism is true, or when discussing open vs. empty individualism.

I agree completely. I identify equally as an open and empty individualist. As I've written elsewhere (in You Are Them): "I think these 'positions' are really just two different ways of expressing the same truth. They merely define the label of 'same person' in different ways."

Also, I think it's perfectly coherent to have egoistic goals even under a reductionist view of personal identity.

I guess it depends on what those egoistic goals are. Some egoistic goals are highly instrumentally useful for the benefit of others, even if one doesn't intend to benefit others (cf. Smith's invisible hand, the deep wisdom of Ayn Rand, and, more generally, the fact that many of our selfish desires probably shouldn't be expected to be that detrimental to others, or at least to our in-group, given that we evolved as social creatures). That, I think, is a confounding factor which makes it seem plausible to say that pursuing them is coherent and non-problematic in light of a reductionist view of personal identity. Yet if it is transparent that the pursuit of these egoistic goals comes at the cost of many other beings' intense suffering, I think we would be reluctant to say that pursuing them is "perfectly coherent" (especially in light of such a view of personal identity, though many would probably say it regardless; one can, for example, also argue that it is incoherent by appealing to inconsistency: "we should not treat the same/sufficiently similar entities differently"). For instance, would we, with this view of personal identity, really claim that it is "perfectly coherent" to push button A ("you get a brand new pair of shorts") when we could have pushed button B ("you prevent 100 years of torture, for someone else in one sense yet for yourself in another, quite real sense, which will not be prevented if you push button A")? It seems much more plausible to deem it perfectly coherent to have a selfish desire to start a company, or to signal coolness, or to otherwise gain personal satisfaction by being an effective altruist.

But if that's all we mean by "moral realism," then it would be rather trivial.

I don't quite understand why you would call this trivial. Perhaps it is trivial that many of us, perhaps even the vast majority, agree. Yet, as mentioned, the acceptance of a principle like "avoid causing unnecessary suffering" is extremely significant in terms of its practical implications: many have argued that it implies the adoption of veganism (where the effects on wildlife, as a potential confounding factor, are often disregarded, of course), and one could even employ it to argue against space colonization (depending on what we hold to constitute necessity). So, in terms of practical consequences at least, I'm almost tempted to say that it could hardly be more significant. And it's not clear to me that agreement on a highly detailed axiology would necessarily have clearer or more significant implications than what we could get off the ground from quite crude principles (it seems to me there may well be strong diminishing returns here, which you also seem to agree with to some extent, given the final sentence of your reply). This is also because the large range of error produced by empirical uncertainty may, on consequentialist views at least, make the difference in practice between realizing a detailed axiology and a crude one much less clear than the difference between the two axiologies at the purely theoretical level -- perhaps even so much so as to make it virtually vanish in many cases.

Maybe my criteria are a bit too strict [...]

I'm just wondering: too strict for what purpose?

This may seem a bit disconnected, but I wanted to share an analogy that came to mind: imagine mathematics were a rather different field in which we only agreed about simple arithmetic such as 2 + 2 = 4, and everything beyond that were like the Riemann hypothesis: there is no consensus, and clear answers appear beyond our grasp. Would we then say that our recognition that 2 + 2 = 4 holds true, at least in some sense (given intuitive axioms, say), is trivial with respect to asserting some form of mathematical realism? And would finding widely agreed-upon solutions to our harder problems constitute a significant step toward deciding whether we should accept such a realism? I fail to see how it would.

g@leuenberger.ai @ 2023-10-18T13:52 (+1)

Empty individualism is quite different from open individualism.
Empty individualism says that you only exist during the present fraction of a second. This leads to the conclusion that, no matter what action you take, the amount of pain or pleasure you will experience as a consequence will remain zero, which in turn leads to nihilism.
Open individualism, on the other hand, says that you will be repeatedly reincarnated as every human who will ever live. In the words of David Pearce: “If open individualism is true, then the distinction between decision-theoretic rationality and morality (arguably) collapses. An intelligent sociopath would do the same as an intelligent saint; it’s all about me.”
This means that egoistic sociopaths would change themselves into altruists of sorts.

The only way I know of in which empty individualism can lead towards open individualism works as follows:
When choosing which action to take, one should select the action that leads to the least amount of suffering for oneself. If there were a high probability that empty individualism is true and a very small but non-zero probability that open individualism is true, one would still have to take the action dictated by open individualism, because empty individualism stays neutral with regard to which action to take, thereby making itself irrelevant.
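To make this dominance reasoning concrete, here is a minimal sketch in Python. The credences, payoffs, and the two stylized actions are purely illustrative assumptions for the sake of the example, not anything taken from the comment itself:

```python
# Minimal sketch of the dominance argument, with purely illustrative numbers.

# Assumed credences over views of personal identity (hypothetical).
p_empty = 0.95  # empty individualism: consequences for "you" are always zero
p_open = 0.05   # open individualism: others' future suffering counts as yours

# Self-regarding value of each stylized action under each view (hypothetical units).
# Action A: convenient now, but causes severe suffering to others later.
# Action B: forgoes the convenience and prevents that suffering.
values = {
    "A": {"empty": 0, "open": -100},
    "B": {"empty": 0, "open": 0},
}

def expected_self_value(action: str) -> float:
    """Expected self-regarding value of an action across the two views."""
    return p_empty * values[action]["empty"] + p_open * values[action]["open"]

for action in ("A", "B"):
    print(action, expected_self_value(action))

# Prints: A -5.0 and B 0.0. Empty individualism contributes zero either way,
# so even a small credence in open individualism settles the choice in favor of B.
```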

Note, however, that empty individualism vs. open individualism is a false dichotomy, as there are other contenders, such as closed individualism, which is the commonsensical view, at least here in the West. So, since empty individualism makes itself irrelevant, at least for now the contention is just between open individualism and closed individualism. It would in principle certainly be possible to calculate whether open individualism or closed individualism is more likely to be true. Furthermore, it would be possible to calculate whether AGI would be open individualist towards humanity or not. To conduct such a calculation successfully before the singularity would, however, require a collaboration between many theoreticians.

Ben_Snodin @ 2020-07-03T08:38 (+3)

Are the links to the footnotes broken?

(really enjoying the post by the way)

Lukas_Gloor @ 2020-07-03T12:56 (+3)

Thanks!

At the time I wrote this post, the formatting either didn't yet allow hyperlinked endnotes, or (more likely) I didn't know how to do the markdown. I plan to update the endnotes here so they become more easily readable.

Update 7/7/2020: I updated the endnotes.

undefined @ 2018-05-23T14:38 (+3)

Thanks for putting this out there. I like how you list the two versions of moral realism you find coherent, and especially that you list what would convince you of each.

My intuition here is that the first option is the case, but also that instead of speaking about moral realism we should talk about qualia formalism. That is, whether consciousness is real enough to be spoken about in crisp formal terms seems prior to whether morality is real in that same sense. I've written about this here, and spoke about it in the intro of my TSC2018 talk.

Whether qualia formalism is true seems to be an empirical question; if it is true, we should be able to make novel and falsifiable predictions with it. This seems like a third option for moving forward, in addition to your other two.

undefined @ 2018-05-30T16:37 (+2)

The descriptive task of determining what ordinary moral claims mean may be more relevant to questions about whether there are objective moral truths than is considered here. Are you familiar with Don Loeb's metaethical incoherentism? Or the empirical literature on metaethical variability? I recommend Loeb's article, "Moral incoherentism: How to pull a metaphysical rabbit out of a semantic hat." The title itself indicates what Loeb is up to.

undefined @ 2018-05-31T09:17 (+1)

Inspired by another message of yours, there's at least one important link here that I failed to mention: If moral discourse is about a, b, and c, and philosophers then say they want to make it about q and argue for realism about q, we can object that whatever they may have shown us regarding realism about q, it's certainly not moral realism. And it looks like the Loeb paper also argues that if moral discourse is about mutually incompatible things, that looks quite bad for moral realism? Those are good points!

undefined @ 2018-05-24T13:15 (+1)

Thanks for writing this up in a fairly accessible manner for laypeople like me. I am looking forward to the next posts. So far, I have only one reflection, on the following bit of your thinking. It is a side point, but it would probably help me better model your thinking.

And all I’m thinking is, “Why are we so focused on interpreting religious claims? Isn’t the major question here whether there are things such as God, or life after death!?” The question that is of utmost relevance to our lives is whether religion’s metaphysical claims, interpreted in a straightforward and realist fashion, are true or not. An analysis of other claims can come later.

Do you think analyses of the other claims are never of more value than analyses of the metaphysical claims?

Because my initial reaction to your claim was something like: "Why would we focus on whether there is a god or life after death? It seems hardly possible to make substantial advances there in a meaningful way, and these texts were meant to point at something a lot more trivial. They are dressed up as profound, with metaphysical explanations, only to make people engage with and respect them in times when no other tools were available to do so on a global level."

I.e., no matter the answer to the metaphysical questions, it could be useful to interpret religious claims, because they could be pointing at something that people thought would help structure society, whether the metaphysical claims hold or not.

Thus, I wonder whether the Bible example is a little weak. You would have to clarify that you assume that people sometimes actually believe they are having a meaningful discussion about "what's Real Good?" (assuming moral realism through god?), as opposed to just engaging in intellectual masturbation, consciously or not.

If I do not take those people (who suppose moral realism proven through the Bible) seriously, I can operate on the assumption that the authors of such writings supposed some form of moral non-naturalism, subjectivism, intersubjectivism, or objectivism, as described by you. Any of these could have led to the idea of creating better mechanisms to enforce either the normative Good or the social contract, or to allow everyone to maximally realise their own desires, by creating an authority ("god") that allows society to move into a better equilibrium under any of these theories.

In that case, taking the claims about the (metaphysical) nature of that authority to carry any informational value, or to provide valuable ground for discussion, seems to be a waste of time, or even to give them undeserved attention and credit, distracting from more important questions. Your described reaction, though, takes the ideas seriously, and I wonder why you think there is any ground to even consider them as such.

I think this concern is somewhat relevant to the broader discussion, too, because you seem to imply that we can't (or even shouldn't?) make any advances on non-metaphysical claims before we have figured out the metaphysical ones. Though what you mean is probably more along the lines of "be ready to change everything once we have figured out moral philosophy", not that we shouldn't do anything else in the meantime. Is that correct? If so, this point might get lost if it isn't stated more prominently.

undefined @ 2018-05-25T16:18 (+1)

Probably intuitions about this issue depend on which type of moral or religious discourse one is used to. As someone who spent a year at a Christian private school in Texas where creationism was taught in Biology class and God and Jesus were a very tangible part of at least some people's lives, I definitely got a strong sense that the metaphysical questions are extremely important.

By contrast, if the only type of religious claims I'd ever come into contact with had been moderate (picture the average level of religiosity of a person in, say, Zurich), then one might even consider it a bit of a strawman to assume that religious claims are to be taken literally.

I think this concern is somewhat relevant to the broader discussion, too, because you seem to imply that we can't (or even shouldn't?) make any advances on non-metaphysical claims before we have figured out the metaphysical ones.

Just to be clear, all I'm saying is that I think it's going to be less useful to discuss "what are moral claims usually about." What we should do instead is what Chalmers describes (see the quote in footnote 4). Discussing what moral claims are usually about is not the same as making up one's mind about normative ethics. I think it's very useful to discuss normative ethics, and I'd even say that discussing whether anti-realism or realism is true might be slightly less important than making up one's mind about normative ethics. Sure, it informs to some extent how we reason about morality, but as has been pointed out, you can make some progress on moral questions even from a lens of agnosticism about realism vs. anti-realism.

To go back to the religion analogy, what I'm recommending is to first figure out whether you believe in a God or an afterlife that would relevantly influence your priorities now, and not to worry much about whether religious claims are "usually" or "best" taken literally or metaphorically(?).