Morality Isn’t Objective
By Noah Birnbaum @ 2025-06-26T13:42 (+42)
In response to Matthew's post about the objectivity of morality, I thought I'd throw out a short (well, initially short -- this became much longer than I was expecting) post explaining why I think this view is pretty implausible.
I have another post on my Substack that shows why I think some other arguments for moral realism fail (though I'm not sure I endorse everything I argue there anymore...). If you like this one, check it out!
Note: The formatting of this article got a little messed up (inconsistent ordering of the numbered and lettered bullets, etc.), but it should still be understandable and fairly readable. Also, I didn't spend much time editing because I wanted to get this out, so apologies in advance.
- Evolutionary debunking arguments: I see there being two main versions of these arguments:
- Even if moral realism were true, there is no causal connection from the facts about morality to our beliefs about them (because our beliefs about morality stem from cultural and biological evolution); therefore, we shouldn't think our beliefs about the facts are reliable. (See Street)
- We can explain all of the data without positing the existence of these facts by appealing to already known facts about evolutionary game theory -- societies that cooperate are more likely to survive, etc. Not only does this predict the types of behavior we see, it often predicts the exact way that we expect to see it (e.g. people are more likely to behave altruistically in front of others, people care more about their family, people care more about themselves than others, etc.). (See Joyce.) Since we can explain all the data without positing the existence of another substance, via Occam's razor (or some similar heuristic), we shouldn't posit the existence of special moral facts!
- In his article, Matthew says: "You could tell a similar debunking story about our belief in the law of non-contradiction. But I think in such cases, we just have to consider the plausibility of the belief and see that, even though they can tell a consistent story of how you come to mistakenly intuit some fact, their account is less plausible."
- Say we give some ontology of the world and need to account for why people believe that there can't be contradictions. Unlike moral facts, I think this is the exact type of fact evolution would want you to know -- knowing whether there can be contradictions is actually really helpful for survival (unlike moral facts). The mere fact that you can say a belief comes from some source is insufficient for debunking -- you must show that we have reason to think it comes from an unreliable source.
- "Like, suppose that I give the theory that everything in the world was created by a brain worm. You point out that that’s crazy—a brain worm being fundamental is very complicated, it can’t make the world. I say that the brain worm is fundamental and misleads you into thinking it’s complicated plus that complexity is a virtue plus that brain worms can’t create the world. I point out that people often are misled by brain worms. It’s true that I can tell an internally consistent story of how you come to be mistaken across the board, but the story is just not at all plausible. Same with the story in which all of our beliefs about morality are wrong—random side effects of blind evolution."
- I don't want to strawman you here, and I doubt you think the evolutionary debunking story is comparable to the brain-worm story in simplicity and/or plausibility. I take the point you're making here to be more about the intuitiveness of a hypothesis being a reason to believe it, which is a point I will come back to.
- "Or suppose that I try to debunk the existence of love. I note that it would be evolutionarily beneficial to think you’re in love because that aids in reproduction. Adding love to your ontology is an extra posit. While I could tell an internally consistent debunking story, one would need to evaluate its plausibility. And such a story wouldn’t be plausible—it would be very unintuitive, just like the debunking story of the anti-realist."
- I don't think that it's at all implausible that love is totally the creation of evolutionary pressures and not real. I actually think this is probably a widely held view on the forum - if you want to posit otherwise, you're gonna have to start with a much more primordial belief in epistemology rather than appealing to some much 'higher-level' intuition that people have. I also don't know what it means to "evaluate its plausibility" here aside from evaluating how intuitive it is, which I don't think is a reliable method to attain truth.
- "Now, is it true that our evolutionary beliefs are the byproducts of blind chance so that it would be a huge coincidence if they were true? No, I don’t think so. Here’s my account of how we have true moral beliefs: evolution makes us super smart, and then we figure out the moral truths. This is the same way we come to have true beliefs about modal facts, logical facts, mathematical facts, and so on. There’s no special challenge for moral facts (now, I think us having such rational capacities is surprising on atheism, but to account for how we know tons of other things, we should already grant that we have those rational capacities even if we’re atheists)."
- I don't think we come to facts about math or logic this way -- I think cognitive science has a pretty good (approximate) story of how we come to those beliefs about math, which I'm not gonna go into here (though I do discuss this a bit more in the 'intuitions aren't sufficient' section below).
- However, I'll bite: (there's more to say, but I'm simplifying for the sake of this post) imagine that you wake up one day, and you see that some of our beliefs about higher level math don't continue to reflect what we see in the empirical world. In fact, they contradict what we see. Would you still think that they reflect the truth? I think you shouldn't.
- Separately, imagine that we didn't have the intuitions we do about some higher-level math, but we see empirically that things in the world follow all these rules that we can start to get a grasp on and make derivations from. In this case, should we think that math reflects reality? I think yes.
- Given that we've shown both that intuition alone isn't sufficient to ground belief in the existence of math and that empirical observation alone is sufficient for reliable beliefs, we shouldn't believe that we have knowledge of math and logic by intuition alone! Similarly, intuition alone isn't sufficient for knowledge of morals.
- Moral motivation: One part of the moral realism debate that always befuddles me is moral motivation (some part of this is called the internalism vs externalism debate, but I'm not interested in going into the semantics of this).
- Suppose we grant that there are these things in the world that are just morally true (say, unnecessary killing is bad). I still don't think that this means they should be action-guiding. Why is this? Well, suppose there was a god, and he (yes, this god is a he) said that people have an obligation to do the Macarena every Tuesday. In addition, everyone knows that god believes and says that we should do this. However, this type of god also has no enforcement power (everybody also knows this) -- in other words, he's not going to do anything if you don't do what he says (he won't send you to hell, give you karma (on the EA Forum), make your life shorter, or anything else). I would argue that we have no reason to do what god says. Similarly, even if facts about morality existed, I still don't get (I do get it, but this is rhetorical) why people should be motivated to follow those facts.
- To pump this intuition further, imagine a psychopath who only cares about themself. Is there really something irrational about what they are doing (i.e. that they are not using their best information properly -- not that we can define being moral as rationality and then say they are being irrational)? Sure, we don't prefer it, but I'm not sure we can say that whatever faculty they lack, or that fails to motivate them, is actually a failure of rationality of any kind. What would you say to them to make them a realist? It seems like they merely have a different psychological propensity, and there is not much that can be done to make them intrinsically motivated by the moral facts.
- Now imagine the psychopath but for math (as in, they don't have the intuitions we do about math) -- I think we would be able to convince them of much of the math that we do today on the basis of empirical observation and maximizing whatever goals they value. This, I believe, is different in the case of morality: in what sense are the moral facts helpful for goal maximization like the mathematical facts are (which are instrumentally important for understanding the world!) such that we'd be able to convince those without the faculties of their truth? (Note: my guess is that they would likely even accept mathematical principles that aren't clearly subject to empirical investigation -- like non-Euclidean geometry -- because the usefulness of following these mathematical principles creates a good meta-inductive argument that they would generalize to empirical phenomena that have not been tested yet, but this is a side point.)
- Intuitions aren't sufficient: Much of the initial argument for moral realism comes from the strong intuitions we have about a bunch of moral claims (e.g. unnecessarily torturing babies is actually bad, we can say that Hitler was doing something wrong, etc.). However, unlike Matthew and others, I don't think that intuitions alone are a very good basis for epistemology.
- Cognitive science suggests that the reliability of our intuitions depends heavily on the presence of consistent empirical feedback. In domains where such feedback is weak or absent, our intuitions are less trustworthy, and we should be more skeptical of relying on them without external calibration or formal reasoning. We get little (if any) empirical feedback connecting us to the moral facts, so we shouldn't trust our intuitions about them.
- I think you need to be able to tell a story about not only having intuitions but also why they're reliable. However, I have yet to see a scientifically/ evolutionarily/ empirically informed story that makes me think that we would have reliable, morally-truth-tracking intuitions. This is related to a general critique I have of moral realists: namely, what is the causal story about how we get from the truth of the moral facts to our accurate beliefs about them?
- A lot of the reason that we can trust our intuitions about math or the empirical world is because of the "No Miracles Argument" from the philosophy of science. The "no miracles argument" is a philosophical argument for scientific realism, suggesting that the success of scientific theories, particularly their predictive power, is best explained by their approximate truth. If our best scientific theories were not even close to the truth, the argument goes, their success would be an improbable coincidence, a "miracle."
- Similarly, if these theories weren't capturing truth, we would expect to regularly see cases where science fails. The fact that it's consistent and understandable implies that there's probably some underlying truth that we're capturing.
- These same great arguments for scientific realism, however, don't hold in the case of intuitions alone -- our intuitions conflict all the time, so we shouldn't be so shocked to learn that intuitions aren't the most truth-tracking process of reasoning.
- Convincing Hunter-Gatherers:
- If Matthew (or any other moral realist) went back in time, I think he would have quite a hard time trying to convince some early hunter-gatherers with very different psychological dispositions to care about the out-group (though it would be funny to see him try to get them to donate to shrimp welfare, and I would pay lots of money to watch this).
- Aside from the fact that he can't speak their language, I think he won't be able to convince them because a lot of his (and my, to be fair) intuitions about the expansion of the moral circle come from psychological dispositions shaped by historically contingent circumstances (for instance, a lot of beliefs about egalitarianism probably come from capitalism, Christianity, and other historical factors). One can complain about the arbitrariness of certain moral principles all they want (about how they're inconsistently applied, can be Dutch-booked, etc.), but I still think that the sorts of arguments he might make will only get traction with those who have certain psychological dispositions (especially ones toward systematizing). Even if these dispositions are correlated with being correct about a bunch of other things (e.g. science, math, etc.), this is still insufficient to show that these intuitions are truth-tracking.
- Presumably, a moral realist would argue that these hunter-gatherers are just being irrational, and there are some facts that they either don't know or aren't taking seriously. While this is definitely true for some other domains in this case (science, probably), I find it hard to believe in the case of morality. What are the facts they are missing? Can one point to the cognitive faculties that they are missing? Can we test this out?
- Once there is disagreement and no clear way, in principle, to evaluate who is correct in a certain domain, I think we should give up on believing that there are facts of the matter. If I kept arguing that chocolate was the best flavor and my friend kept arguing that it was vanilla, and neither of us could rationally convince the other despite the deep intuitions on both sides, I think we should give up and conclude that there is no right answer. Similarly, if there is moral disagreement with no clear way to resolve it, we should think that there are no facts about morality!
- Anticipating different experiences: When disagreement arises about a philosophical position, I often think it is useful to ask what the two positions actually anticipate differently about the world (Note: this is not to say that all disagreements can be cashed out in these terms; I am merely making the point that this is often useful). Let's go into some ways that moral realism, then, might cash out:
- All else being equal (i.e. without incentives to form a social contract, cooperate due to repeated games, etc), the orthogonality thesis is false; as agents get smarter, their beliefs converge on the moral facts. This includes AGI. I look forward to testing out this empirical claim, and I would be quite surprised if it holds.
- In our evolutionary history, in addition to us learning more about math/ logic/ other stuff via pattern matching, there should be some evidence that we learned to generalize about morality in similar ways that track the moral truth. It also wouldn't be sufficient to show that the generalizations we make from math/ other types of reasoning alone started to apply to morality -- we would need to give some plausible story of how we arrived at these beliefs.
- In favor of future Tuesday indifference: It's much easier to say that moral facts exist if one could say that there are rationality constraints on preferences (ways that one's preferences can be irrational from a perspective that is not merely one's own preferences). Once one accepts this, it becomes easier to talk about good and bad outside of one's preferences (a position which I don't think makes much sense). But here's my crazy take: I don't think there are rationality constraints on preferences. Famously, Derek Parfit thought this was false, and his argument goes as follows: imagine there is a man who walks into the doctor's office to get a surgery on Tuesday, and the doctor tells him, "Oh, sorry man. We won't get the anesthetic until tomorrow." The man then replies, "That's totally fine -- I can still do it today." The doctor, looking considerably confused, says, "I don't think you understand; this surgery is incredibly painful, and without anesthetic, you will suffer intense agony for many hours." The man once again calmly responds, "Nah, it's fine. I'm future Tuesday indifferent: there will be some pain tomorrow versus immense pain today, but I am totally indifferent to the pain today. Therefore, I'd prefer to endure the pain today." "Whatever you say," the doctor says. He goes through the surgery screaming in agony and regret the entire time. The question now is: do we think the man did something irrational?
- My answer: sorta; I mean, it depends on the case, right? If we're talking about someone who is merely saying that they have future Tuesday indifference but internally doesn't (as in, someone whose preferences do not actually align with what they claim/ do; think about the irrationality of the heroin addict), I would say that they are not being rational (insofar as they'd like to accomplish their goals, they shouldn't make decisions as if they are future Tuesday indifferent when they are not). However, if the man is actually indifferent (and therefore has a very different psychological profile than the rest of us), I'm not sure on what grounds one can claim that this person is irrational.
- I think, oftentimes, our intuitions about these matters go wrong because we over-extrapolate from our own preferences and the preferences of people around us. For instance, we probably wouldn't say that a robot who was programmed to respond like this, but whose reward function was actually indifferent to pain on Tuesdays, is being irrational. The same goes for a very foreign alien. Similarly, we should think of the guy with future Tuesday indifference as someone who merely has a very strange psychological disposition or is confused about his own preferences. Once we start viewing it like this, I think it becomes much harder to talk about the irrationality of their preferences.
- For those who think one can't be wrong about one's own preferences, imagine the following case: a poor and depressed girl named Tara grows up being told that money will solve all her problems. Tara then becomes extremely wealthy early on and starts to feel happy again. Tara, however, becomes so addicted to money that she starts sacrificing her happiness for it; she works long hours, spends no time with family, and is totally depressed. Here, you probably wouldn't say that her actual preference is to like money more than happiness and that she is just being rational with respect to her preferences; more likely, you'd say that she is confused about her true preferences.
On the other hand, there are a few arguments (that Matthew didn't go over) that I do take pretty seriously. Here are a few:
- Deliberative Indispensability: David Enoch has what I take to be one of the best arguments against anti-realism: the deliberative indispensability argument.
- Before going into the argument itself, one should first understand the companions-in-guilt argument, which is doing some behind-the-scenes work in the deliberative indispensability argument. This is a common argument form across philosophy, but Enoch makes this argument for moral realism by accepting normative realism. The structure of the argument goes as follows:
- Suppose you say that theory X is implausible because it has trait b. However, you also believe theory Y, which also has trait b. In this case, you should either accept theory X and conclude that trait b is not defeating in itself, or reject both theories on the grounds that trait b is defeating. To treat trait b as a defeater in one case and not the other, the argument goes, is to apply a rule inconsistently and is therefore epistemically intolerable.
- Enoch argues that deliberating about anything requires some form of normative realism. How does this work? I think this argument is best explained in dialogue format.
- David: grabs a bottle of water.
- Sarah: Oh David, I didn't know you were a normative realist?
- David: Wait, what? I'm not; I'm just drinking water.
- Sarah: Why are you drinking water?
- David: Because I'm thirsty.
- Sarah: Why do you care about thirst?
- David: Uhhh, because otherwise I would die.
- Sarah: Okay. Why should you not die?
- David: I don't prefer it.
- Sarah: Why do you act in accordance with your preferences?
- David: Because I like them... idk... can you stop...
- Sarah: Who cares if you like them? Can't you just do the thing that you exactly don't like?
- David: Yea, but I wouldn't like that.
- Sarah: That's circular.
- David: Idc. I like it and of course I should do my preferences; if not for my preferences, what else should I do?
- Sarah: So you are a normative realist!
- David: What? When did I say that?
- Sarah: You said you should do your preferences because that's the only way to ground why you should do your preferences as opposed to the exact opposite.
- (Maybe one can respond) David: I am merely making a descriptive claim. I am merely observing (rather than prescribing) that I will tend to take actions that achieve my preferences.
- Sarah: I think this is false. You could take actions that aren't in line with your preferences. Watch! (Sarah bangs her head on the table.) It then seems like you are implicitly smuggling in a form of normativity whenever you deliberate!
- Once you accept this form of normative realism, the argument goes, you are subject to the same kind of evolutionary debunking arguments as the moral realist. In order to deliberate, then, one must admit that there is another kind of explanation (a normative one), associated with forms of normative realism, that evolution itself can't merely debunk. Once you accept this kind of explanation, you are admitting that evolutionary debunking arguments are not sufficient to reject a position. And if you accept that debunking arguments of this sort aren't sufficient here, you shouldn't think that they are sufficient in the moral case.
- While I think this argument is pretty good, I get off the train at the claim that any form of normative realism is required for deliberation. When I say that I do my preferences because I like them, I still think this is a merely descriptive claim: I just tend to do my preferences. My sense is that when one gets this deep into the "why did you choose that x thing as opposed to that y thing" language, it actually becomes the free will debate (Sarah is essentially asking whether there are some things, on a deep level, which I can choose and which are not merely descriptive), which I reject (I will not go into the reasons here).
- The reason I still think this is a good argument is that it seems hard to home in on exactly when this debate becomes about free will vs something else (if ever), though I should probably spend more time thinking about it.
- The Normative Realist's Wager: Say there are two ways the world could be: one where moral realism is true and one where it is false. If we're in the world where moral realism isn't true, there is no value associated with either acting as though moral realism is true or acting as though it's false. On the other hand, if we're in the world where moral realism is true, there is negative value in acting like moral realism isn't true and positive value in acting like it is true. Regardless of the world, then, it is at least as good (if not better) to act as though moral realism is true (see the payoff sketch after this list). (Note: While this argument resembles Pascal's wager, I don't think it is subject to many of the classic and good counterarguments: the many gods objection, Pascal's mugging, the mixed strategies objection, the greater gods objection, etc.)
- One objection: for any set of objective values associated with action x, one can construct a set of objective values associated with y, where y is the exact opposite of x. For instance, one candidate set of values says that killing babies is bad, and the opposite set says that killing babies is good. Therefore, we don't actually have dominance, and there are an equal number of worlds where killing babies is good and where it is bad.
- However, I don't find this objection plausible. Presumably, one's p(it's bad to torture babies) > p(it's good to torture babies), even if only slightly (because of religion, deference to experts, etc.). If this is the case, then while you don't get weak dominance (since acting as though moral realism is true about a given action can be bad in some worlds), there are more worlds (weighted by your credences) where not killing babies produces more value, giving you greater reason to act accordingly.
- Another objection: Pascal's mugging.
- Your credence shouldn't be low enough for Pascal's mugging to actually kick in. Most philosophers are moral realists -- under almost all reasonable theories about deference to experts, your credence should then be well above the level at which you start avoiding things for Pascal's mugging reasons.
- This decision calculus already assumes some normative realism to begin with. It assumes, in some sense, that we ought to use the decision process with the highest expected value/ we should take actions that dominate in game-theoretic terms. However, this already assumes some form of normative claim, making it a circular argument (at least for those who don't already buy normative realism -- i.e. that we should act in ways that maximize expected value or that we ought to take certain actions that dominate others).
- When we are talking about making bets that maximize expected value or actions that dominate others, one needs probabilities and payoffs. In the moral case, however, it's hard to say what exactly the payoff is that you get out of acting in accordance with the moral truth. Does it give you points in the moral dimension? Does God reward you after death? Etc. Given that there are no actual payoffs, one might argue, there is no dominance in acting in accordance with moral realism.
- What if there are other forms of value outside of moral realism that are not being accounted for in this setup (like constructive value, which some philosophers buy)? While one can say that some form of objective value is better in EV terms because it is objective, it's not actually clear how to cash this out or even how one of these values could be 'more important' than another. Being able to compare these different sorts of values gives moral uncertainty vibes, which is infamously quite difficult to deal with. (The awesome Joe Carlsmith makes this argument better than I do; you can read it here.)
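To make the structure of the wager explicit, here's a rough payoff sketch (my own minimal illustration, not anything from Matthew's post; the value v > 0 is a placeholder, and only the signs matter):

$$
\begin{array}{l|cc}
 & \text{realism true} & \text{realism false} \\
\hline
\text{act as if realism is true} & +v & 0 \\
\text{act as if realism is false} & -v & 0
\end{array}
$$

Acting as if realism is true weakly dominates: it is never worse, and it is strictly better in the realism-true column. And if your credence in realism is some $p > 0$, the expected values are $pv$ versus $-pv$, which is the same style of reasoning as my reply to the opposite-values objection above once credences (e.g. that torturing babies is more likely bad than good) are brought in.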
This is not to say that I'm not EA, nor is it to say that I don't think others should be EA. In fact, quite the opposite; I run UChicago EA, plan to choose my career around EA principles, am vegan, and have made many other life sacrifices because of EA. While my anti-realism has influenced some of my normative ethics, I don't think my beliefs about cause prioritization actually differ that much from the average realist EA's because of it. What I would say, however, is that this form of anti-realism might imply that EA is more of a preference than some objective truth; and I think this is totally fine! This doesn't mean that you can't or shouldn't convince others to be EA as well -- we argue with people about what follows from their stated preferences about how the world should be all the time! We need this type of reasoning for all types of institutions (designing markets, political institutions, etc.). You can still think that most of the normal population underrate the fact that they are always making trade-offs and suffer from scope neglect, and that there are good arguments for thinking that, given people's values, wellbeing is by far the most important (or the only) value.
If anyone wants to share more ideas about this topic (constructive critique, telling me how awesome these arguments are, etc.) -- especially on a call! -- email me at dnbirnbaum@uchicago.edu, and we can set up a time! Also, as mentioned before, read my Substack!
LanceSBush @ 2025-06-27T17:36 (+2)
Thanks for writing this. I'm an antirealist and already agree with your conclusions, though I think we may arrive at similar conclusions for somewhat different reasons. I saw that you referenced a few other arguments for moral realism that Bentham didn't present in the post. At least one criticism I'd make of Bentham's argument for realism (at least as presented in the EA forum) is how narrow it is. There are plenty of arguments out there Bentham could have made but simply didn't. In any case, I don't think any other arguments for moral realism are any good, including the ones you mention.
I don't think Enoch's indispensability or wager arguments are persuasive at all. I think Kane B raises some excellent concerns with it in this video, towards the end (though the rest is worth listening to for greater familiarity with the argument). With respect to the form of argument presented in dialog form here, I think the mistake occurs here:
David: Idc. I like it and of course I should do my preferences; if not for my preferences, what else should I do?
I wouldn't endorse that I "should" act on my preferences. This already involves a notion of normativity that I reject and that I see no good reason to accept. Even if I should, at least as described, whether or not I should would be dependent on my preferences, so this wouldn't get you to realism. Not sure how much this threatens Enoch's argument as he presents it, though. He's concerned with minimizing arbitrariness and several other things you don't explicitly address. I don't think any of these other considerations favor realism either, though.
Regarding wagering on moral realism, I've addressed this at length here; incidentally, I do so in response to Bentham presenting this argument. I've also addressed it more recently here. I don't think wager style arguments are very good at all, and I think antirealists have pretty straightforward ways to reject them.
Noah Birnbaum @ 2025-06-27T17:42 (+2)
Ok, cool. Thanks for the comment and thanks for the recs - I’ll check them out!
AngleDance @ 2025-06-27T16:44 (+1)
"I don't think that it's at all implausible that love is totally the creation of evolutionary pressures and not real. I actually think this is probably a widely held view on the forum - if you want to posit otherwise, you're gonna have to start with a much more primordial belief in epistemology rather than appealing to some much 'higher-level' intuition that people have. I also don't know what it means to 'evaluate it's plausibility' here aside from evaluating how intuitive it is, which I don't think is a reliable method to attain truth."
This one reply suplexes 99% of what Matthew writes about meta-ethics.
idea21 @ 2025-06-26T22:20 (+1)
Moral evolution is a viable hypothesis and appears to have two easily understandable tendencies: the control of aggression and the promotion of cooperation. There seems to be evidence that some societies are comparatively less aggressive and more cooperative than others. It is in this evolutionary context that we can contemplate an "objective morality." We do not know to what extent cultural (or civilizational) constraints can control aggression and promote cooperation, but it would be absurd to assume that we have reached the limit today.
What we do know is that, of all cultural constraints, the most effective are those that psychologically affect the individual's motivation when interacting with their peers: autonomous morality. Moral principles are emotionally internalized until they impact the "sphere of the sacred," where moral emotional reactions are equivalent to the force of instinct ("culture is the control of instinct," wrote Freud).
There will be no more effective altruism than that which manages to develop these civilizational possibilities in the sense of an altruistic economy.
Joseph_Chu @ 2025-06-26T14:29 (+1)
So, regarding the moral motivation thing, moral realism and motivational internalism are distinct philosophical concepts, and one can be true without the other also being true. Like, there could be moral facts, but they might not matter to some people. Or, maybe people who believe things are moral are motivated to act on their theory of morality, but the theory isn't based on any moral facts and is instead just a set of deeply held beliefs.
The latter example could be true regardless of whether moral realism is true or not. For instance, the psychopath might -think- that egoism is the right thing to do because their folk morality is that everyone is in it for themselves and suckers deserve what they get. This isn't morality as we might understand it, but it would function psychologically as a justification for their actions to them (so they sleep better at night and have a more positive self-image) and effectively be motivating in a sense.
Even -if- both moral realism and motivational internalism were true, this doesn't mean that people will automatically discover moral facts and act on them reliably. You would basically need to have perfect information and be perfectly rational for that to happen, and no one has these traits in the real world (except maybe God, hypothetically).
Noah Birnbaum @ 2025-06-26T14:59 (+2)
Thanks for the comment!
Yep, in the philosophical literature, they are distinct. I was merely making the point that I'm not sure one of these (moral realism is true but not motivating) actually reflects what people want to be implying when they say moral realism is true. In what sense are we saying that there is objective morality if it relies on some sentiments? I guess one can claim that the rational thing to do, given some objective (i.e. morality), is to pursue that objective, but that doesn't seem very distinct from just practical rationality. If it's just practical rationality, we should call it just that - still, as stated in the post, I don't think that we can make ought claims about practical rationality (though you can probably make conditional claims: given that you want x, and you should do what you want, you should take action y). Similarly, if one took this definition of realism seriously, they'd say that moral realism is true in the same way that gastronomical realism is true (i.e. that there are true facts about what food I should have because they follow from my preferences about food).
Also, I'm not sure I buy your last point. I think under the forms of realism that people typically want to talk about, there's a gradient: your morality increases as you increase rationality (using your evidence well, acting in accordance with your goals, etc.). While you could just say that morality and motivation towards it only cash out at the highest level of rationality (i.e. god or whatever), this seems weird and much harder to justify.
Joseph_Chu @ 2025-06-26T15:32 (+1)
You could argue that if moral realism is true, that even if our models of morality are probably wrong, you can be less wrong about them by acquiring knowledge about the world that contains relevant moral facts. We would never be certain they are correct, but we could be more confident about them in the same way we can be confident about a mathematical theory being valid.
I guess I should explain what my version of moral realism would entail.
Morality to my understanding is, for a lack of a better phrase, subjectively objective. Given a universe without any subjects making subjective value judgments, nothing would matter (it's just a bunch of space rocks colliding and stuff). However, as soon as you introduce subjects capable of experiencing the universe and having values and making judgments about the value of different world states, we have the capacity to make "should" statements about the desirability of given possible world states. Some things are now "good" and some things are now "bad", at least to a given subject. From an objective, neutral, impartial point of view, all subjects and their value judgments are equally important (following the Principle of Indifference aka the Principle of Maximum Entropy).
Thus, as long as anyone anywhere cares about something enough to value or disvalue it, it matters objectively. The statement that "Alice cares about not feeling pain" and its hedonic equivalent "Alice experiences pain as bad" is an objective moral fact. Given that all subjects are equal (possibly in proportion to degree of sentience, not sure about this), then we can aggregate these values and select the world state that is most desirable overall (greatest good for the greatest number).
The rest of morality, things like universalizable rules that generally encourage the greatest good in the long run, are built on top of this foundation of treating the desires/concerns/interests/well-being/happiness/Eudaimonia of all sentient beings throughout spacetime equally and fairly. At least, that's my theory of morality.
Noah Birnbaum @ 2025-06-26T15:55 (+2)
I think I get the theory you're positing, and I think you should look into Constructivism (particularly Humean Constructivism, as opposed to Kantian) and tethered values.
On this comment: Once you get agents with preferences, I'm not sure you can make claims about oughts -- sure, they can do the preferences and want them (and maybe there is some definition of rationality you can create such that it would be better to act in certain ways relative to achieving one's aims), but in what sense do they ought to? In what sense is this objective?
I'm also not sure I understand what a neutral/ impartial view means here, and I'm not understanding why someone might care about what it says at all (aside from their mere sentiments, which gets back to my last comment about motivation).
Also, I don't understand how this relates to the principle of indifference, which states that given some partition over possible hypotheses in some sample space and no evidence in any direction, you should assign an equal credence to all possibilities such that their total sums to one.
Joseph_Chu @ 2025-06-26T16:07 (+1)
I guess the main intuitional leap that this formulation of morality takes is the idea that if you care about your own preferences, you should care about the preferences of others as well, because if your preferences matter objectively, theirs do as well. If your preferences don't matter objectively, why should you care about anything at all?
The principle of indifference as applied here is the idea that given that we generally start with maximum uncertainty about the various sentients in the universe (no evidence in any direction about their worth or desert), we should assign equal value to each of them and their concerns. It is admittedly an unusual use of the principle.
Manuel Del Río Rodríguez 🔹 @ 2025-06-26T16:19 (+2)
I find the jump hard to understand. Your preferences matter to you, not 'objectively'; they just matter because you want x, y, z. It doesn't matter if your preferences don't matter objectively. You still care about them. You might have a preference for being nice to people, and that will still matter to you regardless of anything else -- unless you change your preference, which I guess is possible but not easy. It depends on the preference. The principle of indifference... I really struggle to see how it could be meaningful, because one has an innate preference for oneself, so whatever uncertainty you have about other sentients, there's no reason at all to grant them and their concerns equal value to yours a priori.
Joseph_Chu @ 2025-06-26T17:41 (+5)
I mean, that innate preference for oneself isn't objective in the sense of being a neutral outsider view of things. If you don't see the point of having an objective "point of view of the universe" view about stuff, then sure, there's no reason to care about this version of morality. I'm not arguing that you need to care, only that it would be objective and possibly truth tracking to do so, that there exists a formulation of morality that can be objective in nature.
Manuel Del Río Rodríguez 🔹 @ 2025-06-26T19:08 (+2)
Thanks! I think I can see your pov clearer now. One thing that often leads me astray is how words seem to latch onto different meanings, and this makes discussion and clarification difficult (as in 'realism' and 'objective'). I think my crux, given what you say, is that I indeed don't see the point of having a neutral, outsider point of view of the universe in ethics. I'd need to think more about it. I think trying to be neutral or impartial makes sense in science, where the goal is understanding a mind-independent world. But in ethics, I don't see why that outsider view would have any special authority unless we choose to give it weight. Objectivity in the sense of 'from nowhere' isn't automatically normatively relevant, I feel. I can see why, for example, when pragmatically trying to satisfy your preferences and being a human in contact with other humans with their own preferences, it makes sense to include in the social contract some specialized and limited uses of objectivity: they're useful tools for coordination, debate and decision-making, and it benefits the maximization of our personal preferences to have some figures of power (rulers, judges, etc.) who are constrained to follow them. But that wouldn't make them 'true' in any sense: they are just the result of agreements and negotiated duties for attaining certain agreed-upon ends.
SummaryBot @ 2025-06-26T13:51 (+1)
Executive summary: In this exploratory and informal post, the author argues that moral realism—the idea that moral facts exist independently of human beliefs—is implausible, primarily because evolutionary and empirical explanations better account for our moral intuitions, which lack the kind of reliability, feedback, and motivational force typically associated with objective truths.
Key points:
- Evolutionary debunking undermines moral realism: Our moral beliefs can be explained through evolutionary pressures and cultural evolution without invoking independent moral facts, suggesting such facts are unnecessary and epistemically unreliable.
- Intuitions alone aren’t enough: Moral intuitions lack consistent empirical feedback, making them a poor foundation for claims of objective truth—unlike intuitions in domains like math or logic, which are reinforced by empirical success and feedback.
- Moral facts lack motivational force: Even if moral facts existed, it’s unclear why they would motivate action—unlike instrumental knowledge (e.g., math), which can align with an agent’s goals and can be used to convince others.
- Deliberative indispensability and normative realism: The strongest argument for realism may be Enoch’s claim that deliberation presupposes normative realism, but the author resists this, interpreting preferences and deliberation as descriptively rather than normatively motivated.
- Empirical and decision-theoretic tests of realism fall short: The author is skeptical of empirical predictions (e.g., smarter agents converging on moral truths) and wagers (e.g., the Normative Realist’s Wager), noting that such arguments often smuggle in realist assumptions.
- EA can still thrive under anti-realism: Despite rejecting moral realism, the author affirms commitment to EA principles, seeing them as a strong expression of preferences and values rather than objective moral truths, and argues this framing still supports persuasion and institutional design.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.