Morality vs related concepts
By MichaelA @ 2020-02-10T08:02 (+19)
How can you know I’m talking about morality (aka ethics), rather than something else, when I say that I “should” do something, that humanity “ought” to take certain actions, or that something is “good”? What are the borderlines and distinctions between morality and the various potential “something else”s? How do they overlap and interrelate?
In this post, I try to collect together and summarise philosophical concepts that are relevant to the above questions.[1] I hope this will benefit readers by introducing them to some thought-clarifying conceptual distinctions they may not have been aware of, as well as terms and links they can use to find more relevant info. In another post, I similarly discuss how moral uncertainty differs from and overlaps with related concepts.
Epistemic status: The concepts covered here are broad, fuzzy, and overlap in various ways, making definitions and distinctions between them almost inevitably debatable. Additionally, I’m not an expert in these topics; indeed, I expect many readers to know more than me about at least some of them, and one reason I wrote this was to help me clarify my own understandings. I’d appreciate feedback or comments in relation to any mistakes, poor phrasings, etc. (and just in general!).
Also note that my intention here is mostly to summarise existing ideas, rather than to provide original ideas or analysis.
Normativity
A normative statement is any statement related to what one should do, what one ought to do, which of two things is better, or similar. “Something is said by philosophers to have ‘normativity’ when it entails that some action, attitude or mental state of some other kind is justified, an action one ought to do or a state one ought to be in” (Darwall). Normativity is thus the overarching category (superset) of which things like morality, prudence (in the sense explained below), and arguably rationality are just subsets.
This matches the usage of “normative” in economics, where normative claims relate to “what ought to be” (e.g., “The government should increase its spending”), while positive claims relate to “what is” (including predictions, such as what effects an increase in government spending may have). In linguistics, the equivalent distinction is between prescriptive approaches (involving normative claims about “better” or “correct” uses of language) and descriptive approaches (which are about how language is used).
Prudence
Prudence essentially refers to the subset of normativity that has to do with one’s own self-interest, happiness, or wellbeing (see here and here). This contrasts with morality, which may include but isn’t limited to one’s self-interest (except perhaps for egoist moral theories).
For example (based on MacAskill p. 41), we may have moral reasons to give money to GiveWell-recommended charities, but prudential reasons to spend the money on ourselves, and both sets of reasons are “normatively relevant” considerations.
(The rest of this section is my own analysis, and may be mistaken.)
I would expect that the significance of prudential reasons, and how they relate to moral reasons, would differ depending on the moral theories one is considering (e.g., depending on which moral theories one has some belief in). Considering moral and prudential reasons separately does seem to make sense in relation to moral theories that don’t precisely mandate specific behaviours; for example, moral theories that simply forbid certain behaviours (e.g., violating people’s rights) while otherwise letting one choose from a range of options (e.g., donating to charity or not).[2]
In contrast, “maximising” moral theories like classical utilitarianism claim that the only action one is permitted to take is the very best action, leaving no room for choosing the “prudentially best” action out of a range of “morally acceptable” actions. Thus, in relation to maximising theories, it seems like keeping track of prudential reasons in addition to moral reasons, and sometimes acting based on prudential rather than moral reasons, would mean that one is effectively either:
- using a modified version of the maximising moral theory (rather than the theory itself), or
- acting as if “morally uncertain” between the maximising moral theory and a “moral theory” in which prudence is seen as “intrinsically valuable”.
Either way, the boundary between prudence and morality seems to become fuzzier or less meaningful in such cases.[3]
(Instrumental) Rationality
(This section is sort-of my own analysis, and may be mistaken or use terms in unusual ways.)
Rationality, in one important sense at least, has to do with what one should do or intend, given one’s beliefs and preferences. This is the kind of rationality that decision theory is often seen as invoking. It can be spelled out in different ways. One is to see it as a matter of coherence: it is rational to do or intend what coheres with one’s beliefs and preferences (Broome, 2013; for a critique, see Arpaly, 2000).
Using this definition, it seems to me that:
- Rationality can be considered a subset of normativity in which the “should” statements, “ought” statements, etc. follow in a systematic way from one’s beliefs and preferences.
- Whether a “should” statement, “ought” statement, etc. is rational is unrelated to the balance of moral or prudential reasons involved. E.g., what I “rationally should” do relates only to morality and not prudence if my preferences relate only to morality and not prudence, and vice versa. (And situations in between those extremes are also possible, of course.)[4]
For example, the statement “Rationally speaking, I should buy a Ferrari” is true if (a) I believe that doing so will result in me possessing a Ferrari, and (b) I value that outcome more than I value continuing to have that money. (A toy sketch after the list below makes this concrete.) And it doesn’t matter whether the reason I value that outcome is:
- Prudential: based on self-interest;
- Moral: e.g., I’m a utilitarian who believes that the best way I can use my money to increase universe-wide utility is to buy myself a Ferrari (perhaps it looks really red and shiny and my biases are self-serving the hell out of me);
- Some mixture of the two.
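To make that concrete, here is a toy Python sketch (all names and numbers are hypothetical, purely for illustration) of how a rational “should” can be read off from an agent’s beliefs and preferences alone, whatever the source of those preferences:

```python
# Toy illustration (hypothetical numbers): a rational "should" follows from
# the agent's own beliefs and preferences, regardless of whether those
# preferences are prudential, moral, or mixed.

def rationally_should(option_values):
    """Return the option the agent rationally 'should' take, i.e. the one
    the agent values most highly given their beliefs about the outcomes."""
    return max(option_values, key=option_values.get)

# A prudential agent who values the Ferrari out of self-interest ...
prudential_values = {"buy_ferrari": 10, "keep_money": 3}

# ... and a "moral" agent who (with some self-serving bias) believes the
# Ferrari maximises universe-wide utility. The numbers differ, but the
# structure of the rational "should" is the same.
moral_values = {"buy_ferrari": 8, "keep_money": 5}

print(rationally_should(prudential_values))  # -> buy_ferrari
print(rationally_should(moral_values))       # -> buy_ferrari
```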
Epistemic rationality
Note that the above discussion focused on instrumental rationality, but the same basic points could be made in relation to epistemic rationality, given that epistemic rationality itself “can be seen as a form of instrumental rationality in which knowledge and truth are goals in themselves” (LW Wiki).
For example, I could say that, from the perspective of epistemic rationality, I “shouldn’t” believe that buying that Ferrari will create more utility in expectation than donating the same money to AMF would. This is because holding that belief won’t help me meet the goal of having accurate beliefs.
Whether and how this relates to morality would depend on whether the “deeper reasons” why I prefer to have accurate beliefs (assuming I do indeed have that preference) are prudential, moral, or mixed.[5]
Subjective vs objective
Subjective normativity relates to what one should do based on what one believes, whereas objective normativity relates to what one “actually” should do (i.e., based on the true state of affairs). Greaves and Cotton-Barratt illustrate this distinction with the following example:
Suppose Alice packs the waterproofs but, as the day turns out, it does not rain. Does it follow that Alice made the wrong decision? In one (objective) sense of “wrong”, yes: thanks to that decision, she experienced the mild but unnecessary inconvenience of carrying bulky raingear around all day. But in a second (more subjective) sense, clearly it need not follow that the decision was wrong: if the probability of rain was sufficiently high and Alice sufficiently dislikes getting wet, her decision could easily be the appropriate one to make given her state of ignorance about how the weather would in fact turn out. Normative theories of decision-making under uncertainty aim to capture this second, more subjective, type of evaluation; the standard such account is expected utility theory.[6][7]
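As a rough illustration of that second, subjective type of evaluation, here is a minimal expected-utility sketch (the credences and utilities are made up for illustration; nothing here comes from Greaves and Cotton-Barratt):

```python
# Toy sketch of the Alice example: the subjectively right choice maximises
# expected utility given Alice's credences, even if it turns out to be
# objectively "wrong" in hindsight. All numbers are hypothetical.

p_rain = 0.7  # Alice's credence that it will rain

# Alice's utilities for each (action, weather) outcome.
utility = {
    ("pack", "rain"): 0,       # stayed dry, carried raingear anyway
    ("pack", "no rain"): -1,   # carried bulky raingear for nothing
    ("leave", "rain"): -10,    # got soaked
    ("leave", "no rain"): 1,   # travelled light and stayed dry
}

def expected_utility(action):
    return (p_rain * utility[(action, "rain")]
            + (1 - p_rain) * utility[(action, "no rain")])

# Subjective evaluation: packing maximises expected utility given Alice's credences.
subjectively_best = max(["pack", "leave"], key=expected_utility)

# Objective evaluation: given that it in fact didn't rain, leaving the gear was better.
actual_weather = "no rain"
objectively_best = max(["pack", "leave"], key=lambda a: utility[(a, actual_weather)])

print(subjectively_best, objectively_best)  # -> pack leave
```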
This distinction can be applied to each subtype of normativity (i.e., morality, prudence, etc.).
(I discuss this distinction further in my post Moral uncertainty vs related concepts.)
Axiology
The term axiology is used in different ways, but the definition we’ll focus on here is from the Stanford Encyclopedia of Philosophy:
Traditional axiology seeks to investigate what things are good, how good they are, and how their goodness is related to one another. Whatever we take the “primary bearers” of value to be, one of the central questions of traditional axiology is that of what stuffs are good: what is of value.
The same article also states: “For instance, a traditional question of axiology concerns whether the objects of value are subjective psychological states, or objective states of the world.”
Axiology (in this sense) is essentially one aspect of morality/ethics. For example, classical utilitarianism combines:
- the principle that one must take actions which will lead to the outcome with the highest possible level of value, rather than just doing things that lead to “good enough” outcomes, or just avoiding violating people’s rights
- the axiology that “well-being” is what has intrinsic value
The axiology itself is not a moral theory, but plays a key role in that moral theory.
Thus, one can’t have an axiological “should” statement, but one’s axiology may influence/inform one’s moral “should” statements.
Decision theory
(This section is sort-of my own commentary, may be mistaken, and may accidentally deviate from standard uses of terms.)
It seems to me that the way to fit decision theories into this picture is to say that one must add a decision theory to one of the “sources of normativity” listed above (e.g., morality) in order to get some form of normative (e.g., moral) statements. However, a decision theory can’t “generate” a normative statement by itself.
For example, suppose that I have a moral preference for having more money rather than less, all other things held constant (because I wish to donate it to cost-effective causes). By itself, this can’t tell me whether I “should” one-box or two-box in Newcomb’s problem. But once I specify my decision theory, I can say whether I “should” one-box or two-box. E.g., if I’m a causal decision theorist, I “should” two-box.
But if I knew only that I was a causal decision theorist, it would still be possible that I “should” one-box, if for some reason I preferred to have less money. Thus, we must specify (or assume) both a set of preferences and a decision theory in order to arrive at normative statements.
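As a rough sketch of how the same preference combines with different decision theories to yield different “should”s, here is a toy version of Newcomb’s problem (the payoffs and the simplified treatment of the predictor are assumptions made purely for illustration):

```python
# Toy Newcomb's problem: the agent prefers more money to less. Which action
# it "should" take depends on the decision theory used to evaluate actions.
# Payoffs and predictor accuracy are hypothetical.

PREDICTOR_ACCURACY = 0.99  # assumed reliability of the predictor

def payoff(action, opaque_box_filled):
    """Money received, given the action and the (already fixed) box contents."""
    opaque = 1_000_000 if opaque_box_filled else 0
    transparent = 1_000
    return opaque + (transparent if action == "two_box" else 0)

def evidential_value(action):
    # EDT-style evaluation: my choice is evidence about what was predicted.
    p_filled = PREDICTOR_ACCURACY if action == "one_box" else 1 - PREDICTOR_ACCURACY
    return p_filled * payoff(action, True) + (1 - p_filled) * payoff(action, False)

def causal_value(action, p_filled=0.5):
    # CDT-style evaluation: my choice cannot change the already fixed contents,
    # so the probability that the opaque box is filled is the same either way.
    return p_filled * payoff(action, True) + (1 - p_filled) * payoff(action, False)

actions = ["one_box", "two_box"]
print(max(actions, key=evidential_value))  # -> one_box
print(max(actions, key=causal_value))      # -> two_box
```

(With the same payoff function but a preference for less money, the recommendations would flip, i.e. one would minimise rather than maximise these values, which is the point made in the paragraph above.)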
Metaethics
While normative ethics addresses such questions as "What should I do?", evaluating specific practices and principles of action, meta-ethics addresses questions such as "What is goodness?" and "How can we tell what is good from what is bad?", seeking to understand the nature of ethical properties and evaluations. (Wikipedia)
Thus, metaethics is not directly normative at all; it isn’t about making “should”, “ought”, “better than”, or similar statements. Instead, it’s about understanding the “nature” of (the moral subset of) such statements, “where they come from”, and other such fun/spooky/nonsense/incredibly important matters.
Metanormativity
Metanormativity relates to the “norms that govern how one ought to act that take into account one’s fundamental normative uncertainty”. Normative uncertainty, in turn, is essentially a generalisation of moral uncertainty that can also account for (uncertainty about) prudential reasons. I will thus discuss the topic of metanormativity in my next post, on Moral uncertainty vs related concepts.
As stated earlier, I hope this usefully added to/clarified the concepts in your mental toolkit, and I’d welcome any feedback or comments!
(In particular, if you think there’s another concept whose overlaps with/distinctions from “morality” are worth highlighting, either let me know to add it, or just go ahead and explain it in the comments yourself.)
This post won’t attempt to discuss specific debates within metaethics, such as whether or not there are “objective moral facts”, and, if there are, whether or not these facts are “natural”. Very loosely speaking, I’m not trying to answer questions about what morality itself actually is, but rather about the overlaps and distinctions between what morality is meant to be about and what other topics that involve “should” and “ought” statements are meant to be about. ↩︎
Considering moral and prudential reasons separately also seems to make sense for moral theories which see supererogation as possible; that is, theories which see some acts as “morally good although not (strictly) required” (SEP). If we only believe in such theories, we may often find ourselves deciding between one act that’s morally “good enough” and another (supererogatory) act that’s morally better but prudentially worse. (E.g., perhaps, occasionally donating small sums to whichever charity strikes one’s fancy, vs donating 10% of one’s income to charities recommended by Animal Charity Evaluators.) ↩︎
The boundary seems even fuzzier when you also consider that many moral theories, such as classical or preference utilitarianism, already consider one’s own happiness or preferences to be morally relevant. This arguably makes also considering “prudential reasons” look like simply “double-counting” one’s self-interest, or giving it additional “weight”. ↩︎
If we instead used a definition of rationality in which preferences must only be based on self-interest, then I believe rationality would become a subset of prudence specifically, rather than of normativity as a whole. It would still be the case that the distinctive feature of rational “should” statements is that they follow in a systematic way from one’s beliefs and preferences. ↩︎
Somewhat relevantly, Darwall writes: “Epistemology has an irreducibly normative aspect, in so far as it is concerned with norms for belief.” ↩︎
We could further divide subjective normativity up into, roughly, “what one should do based on what one actually believes” and “what one should do based on what it would be reasonable for one to believe”. The following quote is relevant (though doesn’t directly address that exact distinction):
Before moving on, we should distinguish subjective credences, that is, degrees of belief, from epistemic credences, that is, the degree of belief that one is epistemically justified in having, given one’s evidence. When I use the term “credence” I refer to epistemic credences (though much of my discussion could be applied to a parallel discussion involving subjective credences); when I want to refer to subjective credences I use the term “degrees of belief”.
The reason for this is that appropriateness seems to have some sort of normative force: if it is most appropriate for someone to do something, it seems that, other things being equal, they ought, in the relevant sense of “ought”, to do it. But people can have crazy beliefs: a psychopath might think that a killing spree is the most moral thing to do. But there’s no sense in which the psychopath ought to go on a killing spree: rather, he ought to revise his beliefs. We can only capture that idea if we talk about epistemic credences, rather than degrees of belief.
(I found that quote in this comment, where it’s attributed to Will MacAskill’s BPhil thesis. Unfortunately, I can’t seem to access the thesis, including via Wayback Machine.) ↩︎
It also seems to me that this “subjective vs objective” distinction is somewhat related to, but distinct from, ex ante vs ex post thinking. ↩︎
adamShimi @ 2020-02-12T14:02 (+4)
Thanks for the effort in summarizing and synthesizing this tangle of notions! Notably, I learned about axiology, and I am very glad I did.
One potential addition to the discussion of decision theory might be the use of "normative", "descriptive" and "prescriptive" within decision theory itself, which is slightly different. To quote the Decision Theory FAQ on Less Wrong:
We can divide decision theory into three parts (Grant & Zandt 2009; Baron 2008). Normative decision theory studies what an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose. Descriptive decision theory studies how non-ideal agents (e.g. humans) actually choose. Prescriptive decision theory studies how non-ideal agents can improve their decision-making (relative to the normative model) despite their imperfections.
Because that was one way I think about these words, I got confused by your use of "prescriptive", even though you used it correctly in this context.
MichaelA @ 2020-02-12T17:45 (+1)
Very interesting. I hadn't come across that way of using those three terms. Thanks for sharing!