Debate: Morality is Objective

By Bentham's Bulldog @ 2025-06-24T15:35 (+105)

 

There is dispute among EAs--and the general public more broadly--about whether morality is objective.  So I thought I'd kick off a debate about this, and try to draw more people into reading and posting on the forum!  Here is my opening volley in the debate, and I encourage others to respond.  

Unlike a lot of effective altruists and people in my segment of the internet, I am a moral realist.  I think morality is objective.  I thought I'd set out to defend this view.  

Let’s first define moral realism. It’s the idea that there are some stance independent moral truths. Something is stance independent if it doesn’t depend on what anyone thinks or feels about it. So, for instance, that I have arms is stance independently true—it doesn’t depend on what anyone thinks about it. That ice cream is tasty is stance dependently true; it might be tasty to me but not to you, and a person who thinks it’s not tasty isn’t making an error.

So, in short, moral realism is the idea that there are things that you should or shouldn’t do and that this fact doesn’t depend on what anyone thinks about them. So, for instance, suppose you take a baby and hit it with great force with a hammer. Moral realism says:

  1. You’re doing something wrong.
  2. That fact doesn’t depend on anyone’s beliefs about it. You approving of it, or the person appraising the situation approving of it, or society approving of it doesn’t determine its wrongness (of course, it might be that what makes it wrong is its effects on the baby, resulting in the baby not approving of it, but that’s different from someone’s higher-level beliefs about the act. It’s an objective fact that a particular person won a high-school debate round, even though that depended on what the judges thought).

Moral realism says that some moral statements are true and this doesn’t depend on what people think about it. Now, there are only three possible ways any particular moral statement can fail to be stance independently true:

  1. It’s neither true nor false.
  2. It’s false.
  3. It’s true but stance dependently—so it depends on what someone thinks about it.

But lots of moral statements just really don’t seem like any of these. The wrongness of slavery, the Holocaust, baby torture, stabbing people in the eye—it seems like all these things really are wrong, and this fact doesn’t depend on what people think about it. It seems very weird to think that what makes it wrong to torture people is what someone thinks about it—even weirder to think that statements like “torture is wrong” are neither true nor false.

The view that these statements are neither true nor false has unique linguistic problems. Proponents claim that moral sentences are like commands—they’re not even in the business of expressing propositions. If I say “shut the door,” or “go Dodgers,” that is neither true nor false. But because of that, it makes no sense to ask “go Dodgers?” or “is it true that shut the door?” Similarly, it makes no sense to say “if shut the door then shut the door now, shut the door, therefore, shut the door now.” But it does make sense to say things like “is abortion wrong?” or “if murder is wrong, then so is abortion.” This shows that moral statements are, at least in many cases, in the business of expressing propositions—asserting things supposed to be true or false.

Now, there are all sorts of tricky ways people modify the view that moral sentences are neither true nor false to get around these counterexamples. I can’t discuss them in detail here, so I’ll only say that they tend to be very gerrymandered and ad hoc, and while perhaps they can affirm the same sentences moral realists affirm, they don’t agree about the meanings. They’re analogous to religious liberals who say things like “God exists, but by that I mean that there’s love in the world.” Worse, they imply that statements like “torture is wrong” are neither true nor false. But they seem true!

Denying objective morality is counterintuitive in a second, very different way. If there are stance-independent reasons—reasons to care about things that don’t depend on what you actually care about—then moral realism is almost definitely true. Once the anti-realist admits there are reasons to care independent of your desires, it seems those reasons should give rise to moral reasons. If I have a reason to prevent my own suffering, it seems that suffering is bad, which gives me a moral reason to prevent it.

But this means that moral anti-realists must think that you can never have a reason to care about something independent of what you actually do care about. This is crazy as shown by the following cases:

  1. A person wants to eat a car. They know they’d get no enjoyment from it—the whole experience would be quite painful and unpleasant. On moral anti-realism, they’re not being irrational. They have no reason to take a different action.
  2. A person desires, at some time, to procrastinate. They know it’s bad for them, but they don’t want to do their tasks. On anti-realism, this is not a rational failing.
  3. A person wants to torture themselves. They have this desire—despite getting no joy from it—despite knowing the relevant facts. On anti-realism, they’re not being irrational.
  4. A four-year-old wants a cookie to be shaped like a triangle. They are willing to endure great future agony for this. On anti-realism, they’re not being irrational—so long as they’re informed about the relevant facts.
  5. A person has a very strong desire to be skinny. This motivates them to starve to death—leaving behind a life of joy and fulfillment. On anti-realism, one has no reason not to do this. It might be bad, but one can’t claim that they’re acting foolishly.
  6. A person is depressed and cuts themself. When they do it, they are fully informed about the long-term consequences. On anti-realism, they are not acting irrationally.

This is all completely nuts! We take it as a totally ordinary assumption in normal life that there are some things that aren’t worth pursuing—that one is a fool to pursue. Anti-realism can’t maintain that obvious intuition. We call people mentally ill when they have certain aims, even when informed of the relevant facts, because we recognize it’s a sign of irrationality!

Okay, so far I’ve argued that moral anti-realism implies things that are really counterintuitive. It implies things that seem false when you think about them. But is this a problem? Anti-realists often admit that their position is counterintuitive, but think this isn’t a defect. The facts, after all, do not care about your feelings.

But I think this misunderstands how we come to know things. Consider the belief that, say, the law of non-contradiction is true. How do we know that? Or the belief that if space isn’t curved, the shortest distance between two points is a straight line. Or even the belief that there’s an external world.

The way we know these things is by relying on appearances. We think about the subject and it appears that, say, a thing can’t both be a way and not be that way at the same time in the same sense. Our foundational beliefs are justified on the basis of them seeming right.

Visual experience is a good analogy here. When I see a table, I think there really is a table. Because it appears that there’s a table, I think I’m justified in believing there to be one unless given a strong reason to doubt it. Could I be hallucinating? Sure! But unless given a reason to think that I am, I shouldn’t think so.

But just as there are visual appearances, there are intellectual appearances. Just as it appears to me that there’s a table in front of me, it appears to me that it’s wrong to torture babies. Just as I should think there’s a table absent a good reason to doubt it, I should think it’s wrong to torture babies. In fact, I should be more confident in the wrongness of torturing babies, because that seems less likely to be the result of error. It seems more likely I’m hallucinating a table than that I’m wrong about the wrongness of baby torture.

People often object to relying on intuitions. But I’m curious how they get their foundational beliefs. One’s most basic beliefs always seem justified by the fact that they seem right. Such people should explain how they know that the physical world exists, the laws of non-contradiction and identity are true, the greater is greater than the lesser, something can’t have a color without a shape, and that the cumulative case for either atheism or theism is better than the other without relying at all on how things seem.

Now, people point out that our intuitions conflict and are historically contingent. But intuitionists don’t say that intuitions are infallible or that we should never revise them in light of evidence. We say that intuitions are the starting point on which you build your beliefs, but that upon learning new things, you should still obviously update your beliefs. Showing intuitions go wrong in various cases tells us nothing about their general reliability. It would be like saying you can’t trust that there’s a table in front of you because people sometimes hallucinate.

Furthermore, it’s hard to see how, absent relying on intuition, people know that our intuitions really are ever wrong. For instance, a common class of intuitions that we know are wrong concerns probability—we often judge that the odds of A and B together are higher than the odds of A alone. But absent relying on intuition, how do you know that the odds of A and B aren’t higher than the odds of A alone? The critics of intuitions rely on intuitions to discredit them.
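
(For reference, the probability fact at issue is easy to state: a conjunction can never be more probable than either of its conjuncts,

$$P(A \wedge B) \le P(A),$$

since every case in which both A and B hold is a case in which A holds. The point above is that knowing even an axiom like this seems, at bottom, to rest on how things seem to us.)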

Moral realists aren’t engaged in special pleading. We believe in moral facts for the same reason that we believe in any other basic kind of fact.

Now, anti-realists have a bunch of arguments and I can’t address them all. But let me just address three of them.

The first common one is the argument from disagreement. People argue that because we disagree about morality, it can’t be objective. But this misunderstands what it means for something to be objective. Something objective is true and its truth doesn’t depend on what you think about it. It won’t necessarily be known by everyone.

There’s an objective fact about the right theory of physics, whether God exists, and even whether morality is objective. But those things generate plenty of disagreement. So disagreement can’t be enough to necessitate subjectivity. Now, there are more complicated ways of making the argument, but I don’t really think any of them stick. Lots of other domains feature disagreement comparable to morality’s while being squarely objective.

The second common argument against moral realism is the argument from queerness. This argument says that moral facts are super weird. They’re utterly different from anything else. For this reason, you shouldn’t believe in them, as they’re just too foreign and alien.

But the world has lots of weird things. Fields, epistemic facts, planets, energy, mathematical facts, propositions, particles, God, consciousness, and much more. Sure, morality is different—it’s about what you should do—but all these things are different from the others. The world is filled with weird stuff, so I don’t know why moral facts’ weirdness would be disqualifying.

Furthermore, it’s unclear why moral realism is supposed to be weird. It doesn’t seem weird to me that some things are bad. I’ve never heard a good explanation of what about moral realism makes it so weird. It just seems to be a brute intuition—one that I don’t share.

The only decent explanation I’ve heard of what’s supposed to be weird about morality is that it’s non-natural. Moral facts aren’t made of atoms—they’re not part of the physical world. But, such objectors claim, all the things that exist are parts of the physical world. Therefore, moral facts would be a new, radically different sort of thing.

But I reject that all the stuff that exists is physical. I think there’s lots of non-physical stuff—modal facts, God, consciousness, souls, mathematical facts, logical facts, epistemic facts, and so on. Some of those are controversial, but others are pretty plausible.

Take modal facts, for instance. Those are facts about what’s possible and necessary. So, for example, the fact that a married bachelor is impossible is a modal fact. That’s not a physical fact—it’s not about the physical world. It would have been true even if there never had been a physical universe, and it was true before the universe. It’s not merely the claim that there are no married bachelors but that there can’t be any—that them existing is impossible. But that fact isn’t about the physical world.

Or take logical facts. Any argument with true premises of the form “if P then Q, P,” will have a true conclusion. That’s not a fact about the physical world. It didn’t start being true at the big bang. It’s a necessary truth, with similar status to the moral facts.

Finally, consider epistemic facts. These are facts about what it’s reasonable to believe—what you should believe. For example, “it’s irrational to believe what’s opposed by the evidence,” or “it’s irrational to believe there are square circles just because you find them cool.” That is, once again, not a fact about the physical world. But it’s true. Like moral facts, epistemic facts are about what you should do—in this case, what you should believe, what reason demands you believe. Those who reject moral realism would seem to have to reject epistemic realism as well, and thus to think that a person who believes moral realism is true just because they like the idea isn’t being irrational.

The last major objection to moral realism is called the evolutionary debunking argument. This argument says that evolution shaped our moral beliefs. The reason we believe that torture is wrong is that believing it was evolutionarily beneficial. But crucially, that belief’s being beneficial doesn’t depend on its being true—it would be just as beneficial if it were false. So if our moral beliefs are shaped by blind evolutionary processes, it would be a miracle if they turned out to be right.

But I think even in cases like this, where someone tells a just-so story about how you might come to mistakenly believe what you do about some subject, you still have to evaluate its plausibility. You could tell a similar debunking story about our belief in the law of non-contradiction. But I think in such cases, we just have to consider the plausibility of the belief and see that, even though the debunker can tell a consistent story of how you come to mistakenly intuit some fact, their account is less plausible.

Like, suppose that I give the theory that everything in the world was created by a brain worm. You point out that that’s crazy—a brain worm being fundamental is very complicated, and it can’t make the world. I say that the brain worm is fundamental and misleads you into thinking it’s complicated, into thinking that complexity counts against a theory, and into thinking that brain worms can’t create the world. I point out that people often are misled by brain worms. It’s true that I can tell an internally consistent story of how you come to be mistaken across the board, but the story is just not at all plausible. Same with the story on which all of our beliefs about morality are wrong—random side effects of blind evolution.

Or suppose that I try to debunk the existence of love. I note that it would be evolutionarily beneficial to think you’re in love because that aids in reproduction. Adding love to your ontology is an extra posit. While I could tell an internally consistent debunking story, one would need to evaluate its plausibility. And such a story wouldn’t be plausible—it would be very unintuitive, just like the debunking story of the anti-realist.

Now, is it true that our moral beliefs are the byproducts of blind chance, so that it would be a huge coincidence if they were true? No, I don’t think so. Here’s my account of how we have true moral beliefs: evolution makes us super smart, and then we figure out the moral truths. This is the same way we come to have true beliefs about modal facts, logical facts, mathematical facts, and so on. There’s no special challenge for moral facts (now, I think us having such rational capacities is surprising on atheism, but to account for how we know tons of other things, we should already grant that we have those rational capacities even if we’re atheists).

So if you think you know stuff about math—like that there are infinitely many prime numbers—then however you explain that will apply also to the moral domain.

Why should we accept this account? Well, mostly for the reason I explain above—that it’s the only way to make sense of our moral knowledge, which we have, as shown by the arguments given above. But furthermore, it’s a better explanation of our moral beliefs.

We believe lots of random things about morality that seem to have no clear evolutionary benefit. We believe that people on the other side of the world matter intrinsically as much as nearer ones (some people don’t but many do), that the better than relation is transitive (if A is better than B and B is better than C then A is better than C), that spatiotemporal location doesn’t affect one’s moral worth, that if A is wrong and B is wrong then doing A then B is wrong, and so on.

Many of these don’t plausibly enhance survival, and they are niche and formal. This makes sense if we’re really figuring out the moral facts. In contrast, on anti-realism, you’d expect most of our moral beliefs to be geared towards survival—e.g., believing that having many kids is obligatory. It would be surprising that many of the strongest intuitions—like the belief in transitivity—are formal, non-emotional, and don’t plausibly directly enhance our survival.

Of course, I’d grant that many of our moral intuitions are affected by evolution. Evolution gives us many false moral inclinations, but those can be overcome by sufficient reflection. An analogy with mathematics is appropriate—we have some unreliable mathematical intuitions because of evolution, but we can still form many true mathematical beliefs by reflecting.

Finally—and I know this won’t move atheists, but I’m just explaining my views—I reject the evolutionary debunking argument because I believe in God. If God exists and wants us to know the truth about morality, it makes sense that he would set up the world such that the evolutionary process produces creatures with true moral beliefs.

Moral anti-realism is certainly an internally consistent position. But it’s a very implausible one. It gives up many of the most obvious truths about the world—the stance-independent wrongness of torture—on the basis of super lame arguments. Absent some extremely compelling reason to accept it, we should remain convinced that it’s false. Some things really are wrong.


Richard Y Chappell🔸 @ 2025-06-24T17:57 (+49)

[Vote explanation]: The most important reason for my favoring moral realism is my sense that some goals (e.g. promoting happiness, averting misery) are intrinsically more rationally warranted than others (like promoting misery and averting happiness).

In the same way that some things are true and worth believing, some things are good and worth desiring. We should ultimately find the notion of justified goals to be no more deeply mysterious than that of justified beliefs. To deny the objective reality of either goodness or truth would seem to undermine inquiry, and there's no deeply compelling reason to do so. (For one thing: in order for there to be a suitably objective normative reason to deny it, normative realism would have to be true!)

Peter Wildeford @ 2025-06-25T14:40 (+8)

You were negative toward the idea of hypothetical imperatives elsewhere but I don't see how you get around the need for them.

You say epistemic and moral obligations work "in the same way," but they don't. Yes, we have epistemic obligations to believe true things... in order to have accurate beliefs about reality. That's a specific goal. But you can't just assert "some things are good and worth desiring" without specifying... good according to what standard? The existence of epistemic standards doesn't prove there's One True Moral Standard any more than the existence of chess rules proves there's One True Game.

For morality, there are facts about which actions would best satisfy different value systems. I consider those to be a form of objective moral facts. And if you have those value systems, I think it is thus rationally warranted to desire those outcomes and pursue those actions. But I don't know how you would get facts about which value system to have without appealing to a higher-order value system.

Far from undermining inquiry, this view improves it by forcing explicitness about our goals. When you feel "promoting happiness is obviously better than promoting misery," that doesn't strike me as a metaphysical truth but as expressive assertivism. Real inquiry means examining why we value what we value and how to get it.

I'm far from a professional philosopher and I know you have deeply studied this much more than I have, so I don't mean to accuse you of being naive. Looking forward to learning more.

Richard Y Chappell🔸 @ 2025-06-25T15:13 (+2)

It's an interesting dialectic! I don't have heaps of time to go into depth on this, but you may get a better sense of my view from reading my response to Maguire & Woods, 'Why Belief is No Game':

My biggest complaint about this sort of view is that it completely divorces reasons from rationality.  They conceive of reasons as things that support (either by the authoritative standard of value, or some practice-relative standard of correctness) rather than as things that rationalize.  As a result, they miss an important disanalogy between practice-relative "reasons" and epistemic reasons: violating the latter, but not the former, renders one (to some degree) irrational, or liable to rational criticism.

Of course, there are more important things than being rational: I'm all in favour of "rational irrationality" -- taking magic pills that will make you crazy if that's essential to save the world from an evil demon or the like. But I still think it's important to recognize rationality as the objective/"authoritative" standard of correctness for our cognitive/agential functioning. It's really importantly different from mere practice-relative reasons, which I don't think are properly conceived of as normative at all. There's really nothing genuinely erroneous (irrational) about playing chess badly in order to save the world, in striking contrast to the person who (rightly and rationally) turns themselves irrational in order to save the world.

So, whereas M&W are happy to speak of "chess reasons" as genuinely normative (just not authoritative) reasons, I would reject this on the grounds that chess reasons do not rationalize action. If the evil demon will punish us all if you play chess well, then you really have no good reason at all to play well.  (By contrast, if you're punished for believing in line with the evidence, that doesn't change what it is rational to believe, it just provides an overwhelmingly important practical reason to [act so as to] block or override your epistemic rationality somehow!)

... Surprisingly, M&W take "non-compliance" with "operative standards of correctness" to "render one liable to certain kinds of criticism", even if one has violated these non-authoritative standards precisely in order to comply with authoritative normative reasons, or what one all-things-considered ought to do. This claim strikes me as substantively nuts.  If you rightly violate your professional code in order to save the world from destruction, it simply isn't true that you're thereby "liable to professional criticism." (Especially if your profession is, say, a concentration camp guard.)  Anyone who criticized you would reveal themselves to be the world's biggest rule-fetishist. Put another way: conforming to the all-things-considered ought is an indisputable justification, and you cannot reasonably be blamed or criticized when you act in a way that is perfectly well justified.

Peter Wildeford @ 2025-06-25T15:36 (+3)

Thanks!

I think all reasons are hypothetical, but some hypotheticals (like "if you want to avoid unnecessary suffering...") are so deeply embedded in human psychology that they feel categorical. This explains our moral intuitions without mysterious metaphysical facts.

The concentration camp guard example actually supports my view - we think the guard shouldn't follow professional norms precisely because we're applying a different value system (human welfare over rule-following). There's no view from nowhere; there's just the fact that (luckily) most of us share similar core values.

Richard Y Chappell🔸 @ 2025-06-25T20:28 (+6)

Do you think there's an epistemic fact of the matter as to what beliefs about the future are most reasonable and likely to be true given the past? (E.g., whether we should expect future emeralds to be green or grue?) Is probability end-relational too? Objective norms for inductive reasoning don't seem any less metaphysically mysterious than objective norms for practical reasoning.

One could just debunk all philosophical beliefs as mere "deeply embedded... intuitions" so as to avoid "mysterious metaphysical facts". But that then leaves you committed to thinking that all open philosophical questions - many of which seem to be sensible things to wonder about - are actually total nonsense. (Some do go this way, but it's a pretty extreme view!) We project green, the grue-speaker projects grue, and that's all there is to say. I just don't find such radical skepticism remotely credible. You might as well posit that the world was created 5 minutes ago, or that solipsism is true, in order to further trim down your ontology. I'd rather say: parsimony is not the only theoretical virtue; actually accounting for the full range of real questions we can ask matters too!

(I'm more sympathetic to the view that we can't know the answers to these questions than to the view that there is no real question here to ask.)

Peter Wildeford @ 2025-06-26T01:18 (+11)

You raise a fair challenge about epistemic norms! Yes, I do think there are facts about which beliefs are most reasonable given evidence. But I'd argue this actually supports my view rather than undermining it.

The key difference: epistemic norms have a built-in goal - accurate representation of reality. When we ask "should I expect emeralds to be green or grue?" we're implicitly asking "in order to have beliefs that accurately track reality, what should I expect?" The standard is baked into the enterprise of belief formation itself.

But moral norms lack this inherent goal. When you say some goals are "intrinsically more rationally warranted," I'd ask: warranted for what purpose? The hypothetical imperative lurks even in your formulation. Yes, promoting happiness over misery feels obviously correct to us - but that's because we're humans with particular values, not because we've discovered some goal-independent truth.

I'm not embracing radical skepticism or saying moral questions are nonsense. I'm making a more modest claim: moral questions make perfect sense once we specify the evaluative standard. "Is X wrong according to utilitarianism?" has a determinate, objective, mind-independent answer. "Is X wrong simpliciter?" does not.

The fact that we share deep moral intuitions (like preferring happiness to misery) is explained by our shared humanity, not by those intuitions tracking mind-independent moral facts. After all, we could imagine beings with very different value systems who would find our intuitions as arbitrary as we might find theirs.

So yes, I think we can know things about the future and have justified beliefs. But that's because "justified" in epistemology means "likely to be true" - there's an implicit standard. In ethics, we need to make our standards explicit.

Richard Y Chappell🔸 @ 2025-06-26T03:17 (+8)

Why couldn't someone disagree with you about the purpose of belief-formation: "sure, truth-seeking feels obviously correct to you, but that's just because [some story]... not because we've discovered some goal-independent truth."

Further, part of my point with induction is that merely aiming at truth doesn't settle the hard questions of epistemology (any more than aiming at the good settles the hard questions of axiology).

To see this: suppose that, oddly enough, the grue-speakers turn out to be right that all new emeralds discovered after 2030 are observed to be (what we call) blue. Surprising! Still, I take it that as of 2025, it was reasonable for us to expect future emeralds to be green, and unreasonable of the grue-speakers to expect them to be grue. Part of the challenge I meant to raise for you was: What grounds this epistemic fact? (Isn't it metaphysically mysterious to say that green as a property is privileged over "grue" for purposes of inductive reasoning? What could make that true, on your view? Don't you need to specify your "inductive standards"?)

> moral questions make perfect sense once we specify the evaluative standard

Once you fully specify the evaluative standard, there is no open question left to ask, just concealed tautologies. You've replaced all the important moral questions with trivial logical ones. ("Does P&Q&R imply P?") Normative questions it no longer makes sense to ask on your view include:

  • I already know what Nazism implies, and what liberalism implies, but which view is better justified?
  • I already know what the different theories of well-being imply. But which view is actually correct? Would plugging into the experience machine be good or bad for me?
  • I already know what moral theory I endorse, but would it be wise to "hedge" and take moral uncertainty into account, in case I'm wrong?

And in the epistemic case (once we extend your view to cover inductive standards):

  • I already know what the green vs grue inductive standards have to say about whether I should expect future emeralds to be green or grue; but - in order to have the best shot at a true belief, given my available evidence - which should I expect?

Peter Wildeford @ 2025-06-27T11:33 (+6)

You're right that I need to bite the bullet on epistemic norms too, and I do think that's a highly effective reply. But at the end of the day, yes, I think "reasonable" in epistemology is also implicitly goal-relative in a meta-ethical sense - it means "in order to have beliefs that accurately track reality." The difference is that this goal is so universally shared across so many different value systems, and so deeply embedded in the concept of belief itself, that it feels categorical.

You say I've "replaced all the important moral questions with trivial logical ones," but that's unfair. The questions remain very substantive - they just need proper framing:

Instead of "Which view is better justified?" we ask "Which view better satisfies [specific criteria like internal consistency, explanatory power, alignment with considered judgments, etc.]?"

Instead of "Would the experience machine be good for me?" we ask "Would it satisfy my actual values / promote my flourishing / give me what I reflectively endorse / give me what an idealized version of myself might want?"

These aren't trivial questions! They're complex empirical and philosophical questions. What I'm denying is that there's some further question -- "But which view is really justified?" -- floating free of any standard of justification.

Your challenge about moral uncertainty is interesting, but I'd say: yes, you can hedge across different moral theories if you have a higher-order standard for managing that uncertainty (like maximizing expected moral value across theories you find plausible). That's still goal-relative, just at a meta-level.
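
(To make that concrete with an illustrative sketch - the notation here is schematic rather than a worked-out proposal - the meta-level rule could be: pick the action that maximizes expected moral value across the theories you find plausible,

$$a^* = \arg\max_a \sum_i P(T_i)\, V_{T_i}(a),$$

where $P(T_i)$ is your credence in moral theory $T_i$ and $V_{T_i}(a)$ is how valuable that theory rates action $a$ - assuming the theories' values can be put on a common scale, which is itself a further standard you have to choose.)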

The key insight remains: every "should" or "justified" implicitly references some standard. Making those standards explicit clarifies rather than trivializes our discussions. We're not eliminating important questions - we're revealing what we're actually asking.

Richard Y Chappell🔸 @ 2025-06-27T15:17 (+4)

I agree it's often helpful to make our implicit standards explicit. But I disagree that that's "what we're actually asking". At least in my own normative thought, I don't just wonder about what meets my standards. And I don't just disagree with others about what does or doesn't meet their standards or mine. I think the most important disagreement of all is over which standards are really warranted. 

On your view, there may not be any normative disagreement, once we all agree about the logical and empirical facts. I think it's key to philosophy that there is more we can wonder about than just that. (There may not be any tractable disagreement once we get down to bedrock clashing standards, but I think there is still a further question over which we really disagree, even if we have no way to persuade the other of our position.)

It's interesting to consider the meta question of whether one of us is really right about our present metaethical dispute, or whether all you can say is that your position follows from your epistemic standards and mine follows from mine, and there is no further objective question about which we even disagree.

LanceSBush @ 2025-06-25T03:13 (+6)

I am not sure there even are intuitions or seemings of the sort philosophers often talk about, but if I were to weigh in on the matter, I'd have the exact opposite reaction. I can think of few things more obvious than that it doesn't make any sense to think some goals are more rational or correct than others. Goals are just descriptive facts about agents. They don't even seem like an appropriate target of evaluation for such judgments. To me, this sounds like saying that someone's birthday is more rationally warranted.

I also don't see why denying the objective reality of goodness would undermine inquiry. Why would it? I act in pursuit of my goals. Inquiry is a means of pursuing my goals. I don't even think it makes sense to talk of things being objectively good, but even if there were objective goods, I would not care about them.

Regarding the last remark: that there's no "deeply compelling reason to do so," you go on to say "For one thing: in order for there to be a suitably objective normative reason, normative realism would have to be true!"

But "deeply compelling" is not, to my mind, identical to "objective." I don't believe I or anyone else needs or benefits in any way from having objective reasons to do anything. We can do things because we want to. We don't need any more "reason" (if desires could be construed as reasons) than that.

GideonF @ 2025-06-25T04:54 (+4)

So one way of thinking about this is as follows. Imagine your goal is to eat every apple you see. I show you an apple. You acknowledge that it is in fact an apple, and you have seen the apple. I say you should then eat the apple. You refuse to eat the apple. My view is that you (epistemically) ought to have eaten the apple. There is a normativity about reasons (and logic) that suggests I am justified in saying this. If you reject normativity about epistemic reasons, it seems to me that you don't have to accept that you ought to have eaten the apple. Maybe there is something different about epistemic normativity than ethical normativity, or maybe there is something unique about epistemic normativity in the logical domain, but I'm not really sure what that special thing is.

Manuel Del Río Rodríguez 🔹 @ 2025-06-25T07:10 (+3)

I fail to follow the apple example. Why should I epistemically have eaten the apple? Either I have a genuine goal (and desire) to eat it or not. If I do, I will not refuse to eat it. If you stipulate it as a goal, I am assuming it is genuine, although people don't generally have those sorts of goals, I think. They look more like... lists of preferences and degrees for each preference. Some are core preferences that are difficult to change, while others are very mutable.

If by epistemic normativity you mean something like there being x, y, z reasons we should trust when we want to have proper beliefs about things, what I'd say is that this doesn't seem normative to me. I personally value truth very highly as an end in itself, but even if I didn't, truthful information would still be useful for acting to satisfy your desires; I just don't see why one has some obligation to pursue it. If someone doesn't follow the effective means to their ends, they're being ineffective or foolish, but not violating any norm. If you want a bridge to stand, build it this way; otherwise, it falls. But there's no moral or rational requirement to build it that way - you just won't get what you want.

LanceSBush @ 2025-06-25T13:09 (+1)

I don’t accept that I “ought to have eaten the apple.” At the very least, I wouldn’t accept this without knowing what you take that to mean. I don’t think there are any irreducibly normative facts at all, nor do I think there are any such thing as “reasons” independent of descriptive facts about the relation between means and ends. So I don’t know what you have in mind when you say that “you ought to have eaten the apple.” I also don’t know why you epistemically ought to have; why not prudential, or some other normative domain?

Could you perhaps explain what you have in mind by epistemic and moral normativity? There’s a good chance I don’t accept the account you have in mind.

Neel Nanda @ 2025-06-26T01:53 (+2)

What do you say to someone who doesn't share your goals? E.g. someone who thinks that happiness is only justified if it's earned, and that most people do not deserve it because they do "bad thing X", and who is therefore against promoting happiness for them?

Richard Y Chappell🔸 @ 2025-06-26T02:42 (+2)

Generally parallel things to what I'd say to someone with different fundamental epistemic standards, like:

  • I could be wrong about what's justified. (Certainly my endorsing a standard doesn't suffice to make it justified - and likewise for them. We're not infallible!)
  • Check whether their answer seems objectionably ad hoc in some way, fails to treat like cases alike, is in tension with other claims they accept, or rests on dubious presuppositions ("why think X is so bad?"), etc.
  • If we get to bedrock, neither of us will be able to persuade the other to change their mind. Still, we may each think that (at least) one of us must be mistaken about what's genuinely justified.
  • + we may at least identify some areas of overlap (e.g. it sure would suck if a clearly innocent individual were to suffer...)

LanceSBush @ 2025-06-25T03:39 (+31)

Morality is Objective

This is mostly a repost from Bentham's blog. I wrote an extensive rebuttal to that post here:

https://www.lanceindependent.com/p/moral-realist-quackery-another-response

Bentham also wrote an earlier, extensive post about moral realism. I offered a comprehensive critique of that here:

https://www.lanceindependent.com/p/benthams-blunder-full-post

I do not find Bentham's case for realism even a little persuasive. I think he relies extensively on questionable methods and presuppositions and does little to advance any compelling argument for moral realism.

I've become increasingly confident that there are no good arguments for moral realism, that it is deliberatively and explanatorily redundant, that we can account for all observations without positing moral facts, that naturalist accounts of moral realism are trivial and fail to achieve the aspirations of any interesting and robust account of anything and largely "succeed" by terminological gerrymandering, and that non-naturalism relies almost entirely on questionable appeals to "intuitions." 

Furthermore, non-naturalist realism routinely appeals to notions such as external reasons and irreducible normativity that may not even be intelligible, and, at any rate, their proponents are unable to provide a satisfactory account of what these concepts even mean.

Like Tyler John, I chose the strongest level of disagreement. Tyler says that he has a high (>.99) credence in antirealism. I do, too. There are few positions I am more confident about than antirealism. I'd be happy to discuss the matter with anyone who disagrees here or elsewhere.

MichaelDickens @ 2025-06-25T19:02 (+12)

The tone of the beginning of this article—putting "quackery" in the title, the insulting opening line "Bentham's Newsletter is back at it with bad arguments for moral realism"—makes me think it's not going to give a fair assessment of the arguments. I didn't read it for that reason. If you want to persuade people like me, you should skip the insults.

LanceSBush @ 2025-06-25T19:14 (+5)

You'd be mistaken. If anyone thinks I said anything unfair in the article, they're welcome to point it out. I engage comprehensively, explicitly, and directly with Bentham's arguments for moral realism, and I do the same for many others. I don't think anyone would be able to build a compelling case that I am unfair to Bentham or to moral realists in general. Bentham, on the other hand, rarely engages comprehensively and tends to be dismissive of his better critics on this topic, ignoring them outright or only engaging in a superficial or perfunctory manner.

See, for instance, my discussion just a few hours ago with Alex Malpass on my channel. I don't think you'd come away with the impression that I am unfair to moral realists. I do think, however, that if you looked over the kinds of remarks I routinely engage with from moral realists, you'll find many instances of moral realists being unfair towards moral antirealists. In fact, one of the primary things I do is document and respond to the constant misrepresentations of antirealism and the bad arguments directed against it. We antirealists are on the receiving end of the bulk of the unfair treatment, not moral realists.

David Mathers🔸 @ 2025-06-25T09:57 (+2)

Is being trivial and of low interest evidence that naturalist forms of realism are *false*? "Red things are red" is boring and trivial, but my credence in it is way above 0.99. 

LanceSBush @ 2025-06-25T13:12 (+3)

No, triviality doesn't necessarily entail falsehood. But you can trivially establish anything as true if you gerrymander language. For example, I can define "God" as "this chair over here," point to the chair, and then say "God exists." It would be true that "God exists," in the sense that I am using the terms, but this would be trivial. I'd be a "theist" in this respect.

David Mathers🔸 @ 2025-06-25T14:49 (+3)

So your claim is that  naturalists are just stipulating a particular meaning of their own for moral terms? Can you say why you think this? Don't some naturalists just defend the idea that moral properties could be identical with complex sociological properties without even saying *which* properties? How could those naturalists be engaging in stipulative definition, even accidentally? 

I'd also say that this only bears on the truth/falsity of naturalism fairly indirectly. There's no particular connection between whether naturalism is actually true and whether some group of naturalist thinkers happen to have stipulatively defined a moral term, although I guess if most defenses of naturalism did this, that would be evidence that naturalism couldn't be defended in other ways, which is evidence against its truth.

LanceSBush @ 2025-06-25T15:12 (+3)

No, sorry, I don't think they're just stipulating or at least I don't think they take themselves to be doing so. That's just an example of how terms can be trivial. Apologies for not being clear about that. 

I don't think naturalists are literally just stipulating terms in this way. But if all they are doing amounts to terminological relabeling, then their account would be trivial in the same respect. If they're doing more than this, then they're welcome to offer an account of what that is; it's going to vary from one account to another but the triviality of naturalist accounts runs deep and takes multiple forms. For instance, if the realist's account only furnishes us with descriptive facts, then it lacks the sort of normative authority non-naturalist realists try to retain. So we end up with "moral facts" that have no practical relevance. For example, suppose someone says "Moral facts are facts about what increases or decreases wellbeing."

Even if I granted that this is true, what practical relevance does this have? As far as I can tell: none whatsoever. If naturalists disagree they're welcome to explain how there is any practical relevance to making such a discovery.

> Don't some naturalists just defend the idea that moral properties could be identical with complex sociological properties without even saying *which* properties?

Yes. And you have synthetic naturalist accounts that purport to identify the moral facts with various types of natural facts that aren't reducible to some kind of analytic claim. This is how I take Sterelny and Fraser's account. I reject those accounts not for the same reasons as analytic naturalist accounts but for other reasons, e.g., empirical inadequacy and other forms of triviality.

> I'd also say that this only bears on the truth/falsity of naturalism fairly indirectly.

I agree. I'm not necessarily going to insist naturalist accounts are false. I tend to argue for a trilemma: that all versions of moral realism are trivial, false, or unintelligible. I don't think the worst thing a theory can be is necessarily false; triviality can be a problem for an account as well. I should note, too, that I'm a pragmatist, so there's a sense in which the triviality of an account can collapse into or play a role in its falsehood as well, on my view. But I try not to impose pragmatist preconceptions on others.

Personally I think naturalist accounts of realism "miss the point." My concern is with rejecting irreducible normativity, external reasons, normative authority, and so on. Naturalist accounts don't even seem to be in the business of trying to do this.

David Mathers🔸 @ 2025-06-25T20:32 (+2)

If you think there might well be forms of naturalism that are true but trivial, is your credence in anti-realism really well over >99%?

This forum probably isn't the place for really getting into the weeds of this, but I'm also a bit worried about accounts of triviality that conflate apriority or even analyticity with triviality: Maths is not trivial in any sense of "trivial" on which "trivial" means "not worth bothering with". Maybe you can get out of this by saying maths isn't analytic and it's only being analytic that trivializes things, but I don't think it is particularly obvious that there is a sense-making concept of analyticity that doesn't apply to maths. Apparently Neo-Fregeans think that lots of maths is analytic, and as far as I know that is a respected option in the philosophy of math: https://plato.stanford.edu/entries/logicism/#NeoFre

I also wonder about exactly what is being claimed to be trivial: individual identifications of moral properties with naturalistic properties, if they are explicitly claimed to be analytic? Or the claim that moral naturalism is true and there are some analytic truths of this sort? Or both?

Also, do you think semantic claims in general are trivial? 

Finally, do you think the naturalists whose claims you consider "trivial" mostly agree with you that their views have the features that you think make for triviality but disagree that having those features means their views are of no interest? Or do most of them think their claims lack the features you think make for triviality? Or do you think most of them just haven't thought about it/don't have a good-faith substantive response?

LanceSBush @ 2025-06-26T01:38 (+3)

These are good questions and reasonable concerns, and I share the sense that this forum may not be ideal for addressing these questions. So I’d be happy to move the discussion away from here. For now, I’ll say a few things. Regarding accounts of triviality conflating a priority or analyticity with triviality: I don’t think I am conflating anything; I do think they’re trivial in the relevant respects. Take a possible naturalist account: moral facts are facts about what increases or decreases wellbeing. Okay, well I already think there are facts about what increases or decreases wellbeing. 

What difference does it make if those are moral facts? I have concerns about what that might even mean, but even if I set those aside, what practical difference would this make? As far as the synthetic accounts, I might just reject them on other grounds aside from triviality, but insofar as they also reduce moral claims to descriptive facts, there still looks to me like a kind of triviality there: no set of descriptive facts, in and of themselves, necessarily has any practical relevance to me. In other words, from a practical and motivational perspective, discovering that something is a “moral fact” simply doesn’t make any difference to me, and it’s not clear to me why it would make any difference to anyone else. Suppose, for instance, that someone like Oliver Scott Curry is correct, and that moral facts are facts about what promotes cooperation within groups. Okay. Now what? What do I do with this information?

> Maths is not trivial in any sense of "trivial" on which "trivial" means "not worth bothering with".

I'm not sure the comparison would be apt, given that I think of math as a kind of useful social construction. Math is extremely useful, but it’s useful in a way contingent on our goals and purposes. If we devised moral systems specifically to serve some purpose or goal in the way we did math, I might very well consider them nontrivial...but I'd also be an antirealist about them.

> Maybe you can get out of this by saying maths isn't analytic and it's only being analytic that trivializes things, but I don't think it is particularly obvious that there is a sense-making concept of analyticity that doesn't apply to maths

I’m an empiricist and a pragmatist and I think the mathematical systems we develop and use earn their keep through their application to our ends. I don’t think the same is true of naturalist accounts of moral realism. It’d be hard to do many things without math. Conversely, if there were no stance-independent moral facts of the sort naturalists believe in, I struggle to see what difference it would make in principle.

> Also, do you think semantic claims in general are trivial?

I’m not sure quite what you have in mind, so I’m not sure.

> Finally, do you think the naturalists whose claims you consider "trivial" mostly agree with you that their views have the features that you think make for triviality but disagree that having those features means their views are of no interest?

It depends on which objection I’m raising. That the moral facts are descriptive and lack some of the features non-naturalists claim moral facts have? Yea, probably. That they’re just engaged in a pointless activity of figuring out how English speakers use moral language? Probably not. Since we’re operating on different accounts of truth and probably have other differences in our views, there’s likely to be more fundamental differences, too. I’m not really sure. I should probably just talk to more naturalists and maybe read more contemporary work. FWIW, some of the more traditional antirealists I’ve spoken to don’t share my objections and don’t think they’re on the right track. 

> Or do you think most of them just haven't thought about it/don't have a good-faith substantive response?

I doubt that. Most naturalists have thought about their positions much more than I have, and would probably have a number of corrections to make of my characterization of their views.

ThomasEliot @ 2025-06-25T19:58 (+1)

> So your claim is that naturalists are just stipulating a particular meaning of their own for moral terms? Can you say why you think this?

In this instance, Bentham seems to be stipulating that "morally correct" means "agrees with Bentham's intuitions". I think this because he consistently says that things "seem" that way to him, without taking into account how they seem to other people. 

LanceSBush @ 2025-06-25T20:09 (+4)

I don't think Bentham is stipulating that. I think he's treating intuitions as providing epistemic access to the moral facts. For comparison, it'd be a bit like arguing that trees exist because you can see them, and so can most other people. This wouldn't be the same as saying that, as a matter of stipulation, for a tree to "exist" means that "it appears to exist to you personally."

I'll note: someone suggested elsewhere in this thread that some of the terms and ways I frame my objections to Bentham suggest I wouldn't be fair to him. I'll note that here I am defending Bentham against what I take to be an inaccurate characterization.

ThomasEliot @ 2025-06-25T20:16 (+1)

> I think he's treating intuitions as providing epistemic access to the moral facts

I think he's treating his intuitions that way. He does not seem to be treating intuitions in general that way, since he doesn't address things like how, throughout most of history, people have had the intuition that treating the outgroup poorly is morally good, nor how I have the intuition that it is immoral to claim access to moral facts.

LanceSBush @ 2025-06-25T20:21 (+2)

Presumably he has an answer to that; I still don't think he's stipulating things as you suggest, but I am sympathetic to the concern you raise, which to me appears to be the systematic and longstanding variation in normative moral intuitions.

Another, related problem is variation in metaethical positions/"seemings." Bentham makes all sorts of remarks about how things seem to him that only make sense if you're a moral realist, but things don't seem that way to me. If they seem any way at all, it's the exact opposite.

ThomasEliot @ 2025-06-25T21:01 (+2)

> I still don't think he's stipulating things as you suggest

That's fair. I suppose that I was attempting to translate his statements into something that I could understand rather than taking them literally, as I should have. 

Richard Y Chappell🔸 @ 2025-06-24T18:19 (+27)

Something I wish more internet anti-realists were aware of is that most anti-realist metaethicists these days (like Derek!) endorse a form of expressivism or "quasi-realism" on which they can sincerely affirm claims like:

"It's wrong to hit a baby with a hammer no matter what anyone thinks about it"

because (on their view) what this claim expresses is not a belief (that is in any way beholden to what does or doesn't exist in the world) but rather a non-cognitive attitude of opposing hitting babies with hammers no matter what anyone thinks about it.

The tricky thing about this view is that we're naturally inclined to read the "no matter what anyone thinks about it" phrase as external to the moral judgment, asserting its metaethical objectivity, whereas they want to reinterpret it as internal to the judgment, and hence simply part and parcel of the attitude that they are happy to express without any metaphysical commitments whatsoever.

One reason this is worth clarifying, especially in this Forum, is that I think the metaethical question matters much less, practically speaking, than the first order question of whether you're willing to engage in normative moral discourse. Quasi-realists and expressivists are 100% on board with this. Internet anti-realists often aren't: they'll react to a moral argument by saying things like, "That's just your opinion man, you're assuming moral realism, we can't assert anything that other people don't agree with."

I'd like to encourage more internet anti-realists to read their Gibbard, Blackburn, etc., and adopt a more sophisticated anti-realism that's compatible with engaging in normative moral discourse.

LanceSBush @ 2025-06-25T13:17 (+6)

Online antirealists should have more developed views, but I'd discourage them from endorsing quasi-realism. That position is already far too concessive to moral realists, granting incorrectly that much of ordinary moral discourse has realist qualities we want to conserve and vindicate. Analytic metaethics more generally is a problem, and better and more sophisticated antirealist approaches would eschew much of the contemporary analytic dialectic.

Conversely, I'd like to encourage internet moral realists to present better arguments than they do. Bentham's arguments in this post are not good, nor are they sophisticated. I have already offered an extensive rebuttal to them that Bentham never addressed. Most of Bentham's arguments rely on unqualified claims about how things "seem." Things don't seem to me the way they do to Bentham, and there is little empirical evidence that they seem that way to others. Moral realists, especially non-naturalists, often appeal to moral intuition and phenomenology in ways that fail to engage with contemporary empirical research on the psychology of metaethics, too. There is little evidence that most people share Bentham's intuitions or stance towards morality, and yet Bentham has not adequately engaged with this research.

Richard Y Chappell🔸 @ 2025-06-25T20:37 (+4)

Why would it matter what "most people" think? Arguments are invitations; if the premises don't speak to you, you're free to decline the invitation. But that no more makes it a "bad argument" than does failing to appeal to you (or even a majority of people) mean that a party is a "bad party". The better test of argumentative quality is whether it is surprising, illuminating, or helpful to (some or enough of) the target audience -- those who antecedently agree with the premises, and must thus grapple with the choice of whether to revise those beliefs or come to accept the conclusion.

LanceSBush @ 2025-06-25T23:15 (+3)

I am tempted to write an even lengthier explanation for why I think it matters, but I’ll try to keep it shorter. The sense in which I am talking about what most people think is with respect to what they mean by what they say when they make moral claims. And I take what people mean by what they say to be relevant to the assessment of the meaning of ordinary moral language. So, if, for instance, most people speak, think, or act like moral antirealists, or at least not like realists, this would be relevant to the respective plausibility of realism and antirealism in various ways.

At least one way this would be relevant is that it would undercut claims from moral realists that moral realism is a “common sense” view. Many moral realists appeal to the presumptively realist features of moral discourse. If they are mistaken about this, then this undercuts at least some appeals to a presumption in favor of moral realism. Others claim that people generally experience morality in ways more in line with realism. If this isn’t true, this would undercut such claims as well.

Generally speaking, then, realists often appeal to a presumption in favor of realism predicated on the allegedly realist-features of ordinary moral discourse, realist features of ordinary moral experience, and so on. This is sometimes leveraged to shift the burden of proof onto antirealists. Antirealists are described as having “radically skeptical” views, for instance. 

If it turns out that realist inclinations and realist construals of moral claims are idiosyncratic, not representative of ordinary thought and language, and largely parochial features of the way academic philosophers are inclined to speak and think, this would undercut these sorts of appeals.

I don’t think moral realism relies on presumptive arguments, but they are fairly common. A couple quick comments related to this:

First, I think the most plausible interpretation of Bentham’s remarks about how things seem, what’s counterintuitive, and especially the claim that views contrary to his are “crazy” hints at least a bit in the direction that Bentham doesn’t merely mean to report how things seem to him, or that he’s simply reporting his own proprietary use of moral language. The notion that those who disagree are “crazy” carries connotations that anyone who doesn’t react the way he does is making an error of some kind. But if he is just reporting how things seem, I do wonder why he so often opts to describe contrary views or intuitions as "crazy."

Second, realists may dispute whether what people “think” is relevant to what those people mean, or relevant to the armchair analysis of the meaning of moral sentences. In that case, I’d likely disagree with whatever account of language and/or whatever methodological approach they’re taking to addressing these questions, in which case any positions they take that are downstream of this disagreement rely on contested views about language, meaning, and methods. In that case, what people think is indirectly relevant insofar as it’s relevant to my views, at least, and if they don’t agree that what people think is relevant then we disagree on a more fundamental matter.

There’s a lot more I could say, e.g., about the descriptive metaethics project of the 20th century and the seemingly close historical connection between ordinary moral language and judgment and the typical construal of metaethical positions, but I’ll leave it there for now. I’d be happy to discuss the matter more with you any time!

> Arguments are invitations; if the premises don't speak to you, you're free to decline the invitation. But that no more makes it a "bad argument" than does failing to appeal to you (or even a majority of people) mean that a party is a "bad party". 

My reasons for thinking Bentham’s arguments are bad are not exclusively based on the fact that appeals to how things seem may or may not reflect how they seem to me or to people in general. I present a variety of other objections in the post I referenced, and it is this set of objections taken together that is the basis for my claim that Bentham’s arguments are not good.

As far as them not being sophisticated: I stand by that. Bentham does not present well-developed arguments as far as cases for moral realism go. His arguments aren’t very clear or well-organized, he doesn’t unpack what he means by much of what he says, he doesn’t bring up many of the standard or some of the more obscure arguments one might bring up in favor of moral realism (so we don’t get much of a cumulative case that appeals to multiple independent arguments), and he doesn’t do much to anticipate or respond to the kinds of objections one might receive from antirealists. In short, his arguments are narrow and underdeveloped. I say this as someone who regularly reads his blog and has seen him present far more compelling arguments on other topics (even though I frequently disagree with him). So I know Bentham is more than capable of making a stronger case than he has here.

Lukas_Gloor @ 2025-06-24T23:09 (+26)

I voted 65% but I think anti-realism is obviously true or we're using words differently.

To see whether we might be using words differently, see this post and this one

To see why I still voted 65% on "objective" and not 0%, see this post. (Though, on the most strict meanings of "objective," I would put 0%.) 

If we agree on what moral realism means, here's the introduction to the rest of my sequence on why moral realism is almost certainly false.

Aaron Bergman @ 2025-06-24T22:37 (+25)

I'm continually unsure how best to label or characterize my beliefs. I recently switched from calling myself a moral realist (usually with some "but it's complicated" pasted on) to an "axiological realist."

I think some states of the world are objectively better than others, that pleasure is inherently good and suffering is inherently bad, and that we can say things like "objectively it would be better to promote happiness over suffering".

But I'm not sure I see the basis for making some additional leap to genuine normativity; I don't think things like objective ordering imply some additional property which is strongly associated with phrases like "one must" or "one should". 

Of course the label doesn't matter a ton, but I'm curious both what people think of as the appropriate label for such a set of beliefs and what people think of it on the merits.

(For those interested, I recorded a podcast on this with @sarahhw and @AbsurdlyMax a while back)

Eli Rose @ 2025-06-25T00:56 (+16)

I'm not an axiological realist, but it seems really helpful to have a term for that position, upvoted.

Broadly, and off-topic-ally, I'm confused why moral philosophers don't always distinguish between axiology (valuations of states of the world) and morality (how one ought to behave). People seem to frequently talk past each other for lack of this distinction. For example, they object to valuing a really large number of moral patients (an axiological claim) on the grounds that doing so would be too demanding (a moral claim). I first learned these terms from https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/ which I recommend.

Richard Y Chappell🔸 @ 2025-06-25T20:43 (+8)

Alastair Norcross is a famous philosopher with similar views. Here's the argument I once gave him that seemed to convert him (at least on that day) to realism about normative reasons:

First, we can ask whether you'd like to give up your Value Realism in favour of a relativistic view on which there's "hedonistic value", "desire-fulfilment value", and "Nazi value", all metaphysically on a par.  If not -- if there's really just one correct view of value, regardless of what subjective standards anyone might arbitrarily endorse -- then we can raise the question of why normative reasons don't move in parallel.  Surely an account of reasons for action that is grounded in facts about what's genuinely valuable is superior to an alternative account that bears no connection to the true value facts?

JackM @ 2025-06-27T23:30 (+2)

This just seems to be question-begging. It just seems to me you're saying "axiological realism gives rise to normative realism because surely axiological realism gives rise to normative realism".

Pablo @ 2025-06-25T13:12 (+4)

This is basically my view, and I think ‘axiological realism’ is a great name for it.

LanceSBush @ 2025-06-25T03:14 (+3)

Why do you think some states of the world are objectively better than others, or that pleasure is inherently good? I suppose I can go check out the podcast, but I'd be happy to have a discussion with you here.

Aaron Bergman @ 2025-06-25T03:56 (+16)

Assuming we're not radically mistaken about our own subjective experience, it really seems like pleasure is good for the being experiencing it (aside from any function or causal effects it may have).

In fact, pleasure without goodness in some sense seems like an incoherent concept. If a person were to insist that they felt pleasure but in no sense was this a good thing, I would say that they are mistaken about something, whether it be the nature of their own experience or the usual meaning of words.

Some people, I think, concede the above but want to object that lower-case goodness in the sense described is distinct from some capital-G objective Goodness out there in the world.

But sentient beings are a perfectly valid element of the world/universe, and so goodness for a given being simply implies goodness at large (all else equal of course). There's no spooky metaphysical sense in which it's written into the stars; it is simply directly implied by the facts about what some things are like to some sentient beings.

I'd add that the above logic holds fine, and with even more rhetorical and ethical force, in the case of suffering.

Now if you accept the above, here's a simple thought experiment: consider two states of the world, identical in every way except in world A you're experiencing a terrible stomach ache and in world B you're not.

The previous argument implies that there is simply more badness in world A, full stop.

Much more to be said ofc but I'll leave it there :)

LanceSBush @ 2025-06-25T13:25 (+2)

When you say “Assuming we’re not radically mistaken”...you’re using the term “we” as though you’re assuming I and others agree with you. But I don’t know if I agree with you, and there’s a good chance I don’t. What do you mean when you say that pleasure is good for the being experiencing it? For that matter, what do you mean by “pleasure”? If “pleasure” refers to any experience that an agent prefers, and for something to be good for something is for them to prefer it, then you’d be saying something I’d agree with: that any experiences an agent prefers are experiences that agent prefers. But if you’re not saying that, then I am not sure what you are saying.

I think there are facts about what is good according to different people’s stances. So my pleasure can be good according to my stance. But I do not think pleasure is stance-independently good.

In fact, pleasure without goodness in some sense seems like an incoherent concept.

What do you mean by “goodness”?

But sentient beings are a perfectly valid element of the world/universe, and so goodness for a given being simply implies goodness at large (all else equal of course).

I’m perfectly fine with saying that there are facts about what individuals prefer and consider good, but the fact that something is good relative to someone’s preferences does not entail that it is good simpliciter, good relative to my preferences, intrinsically good, or anything like that. The fact that this person is a “valid element of the world/universe” doesn’t change that fact. 

There's no spooky metaphysical sense in which it's written into the stars; it is simply directly implied by the  facts about what some things are like to some sentient beings.

What you’re saying doesn’t strike me so much as metaphysically spooky but as conceptually underdeveloped. I don’t think it’s clear (at least, not to me) what you mean when you refer to goodness. For instance, I cannot tell if you are arguing for some kind of moral realism or normative realism. 

Now if you accept the above, here's a simple thought experiment: consider two states of the world, identical in every way except in world A you're experiencing a terrible stomach ache and in world B you're not.

The previous argument implies that there is simply more badness in world A, full stop.

What would it mean for there to be “more badness” in world A? Again, it’s just not clear to me what you mean by the terms you are using.

Manuel Del Río Rodríguez 🔹 @ 2025-06-25T07:18 (+1)

I think I concede that 'pleasure is good for the being experiencing it'. I don't think this leads to where you take it, though. It is good for me to eat meat, but it probably isn't good for the animal. But in the thought experiment you make, I prefer world A, where I'm eating bacon and the pig is dead, to world B, where the pig is feeling fine and I'm eating broccoli. You can't jump from what's good for one to what's good for many. But besides, granting that something is good for the one who experiences it feels a bit broad: the good for him doesn't turn into some law that must be obeyed, even for him/her. There are trade-offs with other desires, you might also want to consider (or not) long-term effects, etc. It also has no ontological status as 'the good', just as there is no Platonic form of 'the good' floating in Platonic heaven.

Daniel_Friedrich @ 2025-06-25T21:14 (+1)

I think objective ordering does imply "one should" so I subscribe to moral realism. However, recently I've been highly appreciating the importance of your insistence that the "should" part is kind of fake - i.e. it means something like "action X is objectively the best way to create most value from the point of view of all moral patients" but it doesn't imply that an ASI that figures out what is morally valuable will be motivated to act on it.

(Naively, it seems like if morality is objective, there's basically a physical law formulated as "you should do actions with characteristics X". Then, it seems like a superintelligence that figures out all the physical laws internalizes "I should do X". I think this is wrong mainly because in human brains, that sentence deceptively seems to imply "I want to do x" (or perhaps "I want to want x") whereas it actually means "Provided I want to create maximum value from an impartial perspective, I want to do x". In my own case, the kind of argument for optimism around AI doom in the style that @Bentham's Bulldog advocated in Doom Debates seemed a bit more attractive before I truly spelled this out in my head.)

Derek Shiller @ 2025-06-24T16:44 (+25)

I consider myself a pretty strong anti-realist, but I find myself accepting a lot of the things you take to be problems for anti-realism. For instance:

But lots of moral statements just really don’t seem like any of these. The wrongness of slavery, the holocaust, baby torture, stabbing people in the eye—it seems like all these things really are wrong and this fact doesn’t depend on what people think about it.

I think that these things really are wrong and don't depend on what people think about it. But I also think that that statement is part of a language game dictated by complex norms and expectations. The significance of thought experiments. The need to avoid inconsistency. The acceptance of implications. The reliance on gut evaluations. Endorsement of standardly accepted implications. Etc. I live my life according to those norms and expectations, and they lead me to condemn slavery and think quite poorly of slavers and say things like 'slavery was a terrible stain on our nation'. I don't feel inclined to let people off the hook by virtue of having different desires. I'm quite happy with a lot of thought and talk that looks pretty objective.

I'm an anti-realist because I have no idea what sort of thing morality could be about that would justify the norms and expectations that govern our thoughts about morality. Maybe this is a version of the queerness argument. There aren't any sorts of entities or relations that seem like appropriate truth-makers for moral claims. I have a hard time understanding what they might be such that I would have any inclination to shift what I care about were I to learn that the normative truths themselves were different (holding fixed all of the things that currently guide my deployment of moral concepts). If my intuitions about cases were the same, if all of the theoretical virtues were the same, if the facts in the world were the same, but an oracle were to tell me that moral reality were different in some way -- turns out, baby torture is good! -- I wouldn't be inclined to change my moral views at all. If I'm not inclined to change my views except when guided by things like gut feelings, consistency judgments, etc. then I don't see how anything about the world can be authoritative in the way that realism should require.

Bentham's Bulldog @ 2025-06-24T17:58 (+2)

//I think that these things really are wrong and don't depend on what people think about it. But I also think that that statement is part of a language game dictated by complex norms and expectations.// 

To me this sounds a bit like moral naturalism.  You don't think morality is something non-physical and spooky but you think there are real moral facts and these don't depend on our attitudes.  

I guess I don't quite see what your puzzlement is with morality.  There are moral norms which govern what people should do.  Now, you might deny there in fact are such things, but I don't see what's so mysterious.  

Richard Chappell had a nice post about the last kind of objection https://www.philosophyetc.net/2021/10/ruling-out-helium-maximizing.html

I also wrote something about this a while ago https://benthams.substack.com/p/contra-bush-on-moral-fetishism?utm_source=publication-search

Derek Shiller @ 2025-06-24T18:50 (+14)

I think of moral naturalism as a position where moral language is supposed to represent things, and it represents certain natural things. The view I favor is a lot closer to inferentialism: the meaning of moral language is constituted by the way it is used, not what it is about. (But I also don't think inferentialism is quite right, since I'm not into realism about meaning either.)

I guess I don't quite see what your puzzlement is with morality. There are moral norms which govern what people should do. Now, you might deny there in fact are such things, but I don't see what's so mysterious.

Another angle on the mystery: it is possible that there are epistemic norms, moral norms, prudential norms, and that's it. But if you're a realist, it seems like it should also be possible that there are hundreds of other kinds of norms that we're completely unaware of, such that we act in all sorts of wrong ways all the time. Maybe there are special norms governing how you should brush your teeth (that have nothing to do with hygiene or our interests), or how to daydream. Maybe these norms hold more weight than moral norms, in something like the way moral norms may hold more weight than prudential norms. If you're a non-naturalist, then apart from trust in a loving God, I'm not sure how you address this possibility. But it also seems absurd that I should have to worry about such things.

Noah Birnbaum @ 2025-06-24T17:42 (+20)

Morality is Objective

Evolutionary debunking arguments - we can explain the vast majority of moral beliefs without positing the existence of extra substances -- therefore, we shouldn't posit them! 

Richard Y Chappell🔸 @ 2025-06-24T17:59 (+15)

We can also explain this epistemic normative belief of yours without positing that it's true, therefore...?

Noah Birnbaum @ 2025-06-24T18:07 (+15)

I don't think there are any normative facts, so you can finish that sentence, if you'd like. In other words, I don't think there's any objective feature in the world that tells you that you need to have x beliefs instead of y beliefs. If one did actually believe this, I'm curious about how this would play out (i.e. should someone do a bunch of very simple math equations all the time because they could gain many true beliefs very quickly? Seems weird). 

On just having true beliefs, I would say that when you give some ontology of how the world works, you'd expect evolution to give us truth-tracking beliefs and/or processes in many instances because they are actually useful for survival/reproduction (though it would also give us some wrong beliefs, and we do see this -- e.g. we believe in concepts like chairs that don't REALLY carve reality, because they're useful). 

David_Moss @ 2025-06-25T10:45 (+3)

It's at least possible that one can 'contain' debunking arguments, such that they don't extend across domains and self-undermine. We discuss this strategy in our chapter here.

Bentham's Bulldog @ 2025-06-24T18:00 (+4)

See my reply :)

Noah Birnbaum @ 2025-06-24T18:09 (+5)

Just gonna have to write a reply post, probably 

Neel Nanda @ 2025-06-26T01:47 (+12)

Morality is Objective

What would this even mean? If I assert that X is wrong, and someone else asserts that it's fine, how do we resolve this? We can appeal to common values that derive this conclusion, but that's pretty arbitrary and largely just feels like my opinion. Claiming that morality is objective just feels groundless. 

Owen Cotton-Barratt @ 2025-06-26T09:46 (+4)

Locally, I think that often there will be some cluster of less controversial common values like "caring about the flourishing of society" which can be used to derive something like locally-objective conclusions about moral questions (like whether X is wrong).

Globally, an operationalization of morality being objective might be something like "among civilizations of evolved beings in the multiverse, there's a decently big attractor state of moral norms that a lot of the civilizations eventually converge on".

Neel Nanda @ 2025-06-27T03:22 (+5)

Less controversial is a very long way from objective - why do you think that "caring about the flourishing of society" is objectively ethical?

Re the idea of an attractor, idk, history has sure had a lot of popular beliefs I find abhorrent. How do we know there even is convergence at all rather than cycles? And why does being convergent imply objective? If you told me that the supermajority of civilizations concluded that torturing criminals was morally good, that would not make me think it was ethical.

My overall take is that objective is just an incredibly strong word for which you need incredibly strong justifications, and your justifications don't seem close, they seem more about "this is a Schelling point" or "this is a reasonable default that we can build a coalition around"

Robi Rahman @ 2025-06-26T13:24 (+3)

No, that wouldn't prove moral realism at all. That would just show you and a bunch of aliens happen to have the same opinions.

Owen Cotton-Barratt @ 2025-06-26T13:47 (+4)

See my response to Manuel -- I don't think this is "proving moral realism", but I do think it would be pointing at something deeper and closer-to-objective than "happen to have the same opinions".

Manuel Del Río Rodríguez 🔹 @ 2025-06-26T13:15 (+1)

I don't think I have much to object to that, but I do think it doesn't look at all like 'stance independent' if we're using that as the criterion for ethical realism. What you're saying seems to boil down, if I understand it correctly, to: 'given a bunch of intelligent creatures with some shared psychological perceptions of the world and some tendency towards collaboration, it is pretty likely they'll end up arriving at a certain set of shared norms that optimize towards their well-being as a group -and in most cases, as individuals-'. That makes the 'state of moral norms that a lot of the civilizations eventually converge on' something useful for ends x, y, z, but not 'true' and 'independent of human or alien minds'.

Owen Cotton-Barratt @ 2025-06-26T13:45 (+4)

I'm not sure what exactly "true" means here.

Here are some senses in which it would make morality feel "more objective" rather than "more subjective":

  • I can have the experience of having a view, and then hearing an argument, and updating. My stance towards my previous view then feels more like "oh, I was mistaken" (like if I'd made a mathematical error) rather than "oh, my view changed" (like getting myself to like the taste of avocado when I didn't use to).
  • There can exist "moral experts", whom we would want to consult on matters of morality. Broadly, we should expect our future views to update towards those of smart careful thinkers who've engaged with the questions a lot.
  • It's possible that the norms various civilizations converge on represent something like "the optimal(/efficient?/robust?) way for society to self-organize"
    • I don't think this is exactly "independent of human or alien minds", but it also very much doesn't feel "purely subjective"

I don't really believe there's anything more deeply metaphysical than that going on with morality[1], but I do think that there's a lot that's important in the above bullets, and that moral realist positions often feel vibewise "more correct" than antirealist positions (in terms of what they imply for real-world actions), even though the antirealist positions feel technically "more correct".

  1. ^

    I guess: there's also some possibility of getting more convergence for acausal reasons rather than just evolution towards efficiency. I do think this is real, but it mostly feels like a distraction here so I'll ignore it.

Manuel Del Río Rodríguez 🔹 @ 2025-06-26T15:30 (+1)

Terminology can be a bugger in these discussions. I think we are accepting, as per BB's own definition at the start of the thread, that Moral Realism would basically reduce to accepting a stance-independent view that moral truths exist. As for truth, I would mean it in the way it gets used when studying other, stance-independent objects, i.e., electrons exist and their existence is independent of human minds and/or of humans having ever existed, and saying 'electrons exist' is true because of its correspondence to objects of an external, human-independent reality.

What I take from your examples (correct me if I am wrong or if I misrepresent you) is that you feel that moral statements are not as evidently subjective as, say, 'Vanilla ice-cream is the best flavor' but not as objective as, say, 'An electron has a negative charge', as living in some space of in-betweenness with respect to those two extremes. I'd still call this anti-realism, as you're just switching from a maximally subjective stance (an individual's particular culinary tastes) to a more general, but still stance-dependent one (what a group of experts and/or human and alien minds might possibly agree upon). I'd say again, an electron doesn't care for what a human or any other creature thinks about its electric charge.

As for each of the bullet points, what I'd say is:

  1. I can see why you'd feel the change from a previous view can be seen as a mistake rather than a preference change -when I first started thinking about morality I felt very strongly inclined to the strongest moral realism, and I now feel that pov was wrong- but this doesn't imply moral realism as much as that it feels as if moral principles and beliefs have objective truth status, even if they were actually a reorganization of stance-dependent beliefs.
  2. I, on the contrary, don't feel like there could be 'moral experts' - at most, people who seem to live up to their moral beliefs, whatever the knowledge and reasons for having them. Most surveys I've seen -there's a Rationally Speaking episode on this- show that Philosophers and Moral Philosophers specifically don't seem to behave more morally than their colleagues and similar social and intellectual peers.
  3. Convergence can be explained through evolutionary game theory, coordination pressures, and social learning, not objective moral truths. That many societies converge on certain norms just shows what tends to work given human psychology and conditions, not that these norms are true in any stance-independent sense. It's functional success, not moral facthood.

Owen Cotton-Barratt @ 2025-06-26T17:08 (+2)

is that you feel that moral statements are not as evidently subjective as, say, 'Vanilla ice-cream is the best flavor' but not as objective as, say, 'An electron has a negative charge', as living in some space of in-betweenness with respect to those two extremes

I think that's roughly right. I think that they are unlikely to be more objective than "blue is a more natural concept than grue", but that there's a good chance that they're about the same as that (and my gut take is that that's pretty far towards the electron end of the spectrum; but perhaps I'm confused).

I'd say again, an electron doesn't care for what a human or any other creature thinks about its electric charge.

Yeah, but I think that e.g. facts about economics are in some sense contingent on the thinking of people, but are not contingent on what particular people think, and I think that something similar could be true of morality.

I, on the contrary, don't feel like there could be 'moral experts'

The cleanest example I might give is that if I had a message from my near-future self saying "hey I've thought really hard about this issue and I really think X is right, sorry I don't have time to unpack all of that", I'd be pretty inclined to defer. I wonder if you feel differently?

I don't think that moral philosophers in our society are necessarily hitting the bar I would like for "moral expert". I also don't think that people who are genuinely experts in morality would necessarily act according to moral values. (I'm not sure that these points are very important.)

tylermjohn @ 2025-06-24T19:48 (+12)

Morality is Objective

 

It's epistemically inaccessible, explanatorily redundant, unnecessary for any pragmatic aim, just a relic of the way our language and cooperative schemes work. I'm not sure the idea can even really be made clear. Empirically, convergence through cooperative, argumentative means looks incredibly unlikely in any normal future. I voted for the strongest position because relative to my peers I have the most relativistic view I know and because of my high (>.99) credence in antirealism. But obviously morality is sort of kind of objective in certain contexts and among certain groups of interlocutors, given socially ingrained shared values. 

alexcoleridge @ 2025-06-24T18:02 (+12)

Morality is Objective

The best defence of this I have seen is Michael Huemer's Ethical Intuitionism. 

LanceSBush @ 2025-06-25T03:15 (+5)

What arguments for moral realism that Huemer presents do you think are the best? I don't think I've encountered any strong arguments for moral realism from Huemer or anyone else.

Manuel Del Río Rodríguez 🔹 @ 2025-06-25T07:21 (+4)

I'd like to hear more about this too. From a very simplified overview, what I seemed to get was that the core of the argument was just 'everything is reducible to intuitions, so moral intuitions are as good as any other, including those behind accepting logic or realist views of the world'.

Joseph_Chu @ 2025-06-24T17:59 (+10)

Morality is Objective

I've been a moral realist for a very long time and generally agree with this post.

I will caveat though that there is a difference between moral realism (there are moral truths), and motivational internalism (people will always act according to those truths when they know them). I think the latter is much less clearly true, and one of the primary confusions that occur when people argue about moral realism and AI safety.

I also think that moral truths are knowledge, and we can never know things with 100% certainty. This means that even if there are moral truths in the world (out there), it is very possible to still be wrong about what they are, and even a superintelligence may not figure them out necessarily. Like most things, we can develop models, but they will generally not be complete.

LanceSBush @ 2025-06-25T19:38 (+4)

I understand endorsing moral realism, but do you think Bentham presents any good arguments here?

Joseph_Chu @ 2025-06-25T19:56 (+3)

I'll admit I kinda skimmed some of Bentham's arguments and some of them do sound a bit like rhetoric that rely on intuition or emotional appeal rather than deep philosophical arguments.

If I wanted to give a succinct explanation for my reasons for endorsing moral realism, it would be that morality has to do with what subjects/sentients/experiencers value, and these things they value, while subjective in the sense that they come from the perceptions and judgments of the subjects, are objective in the sense that these perceptions and in particular the emotions or feelings experienced because of them, are true facts about their internal state (i.e. happiness and suffering, desires and aversions, etc.). These can be objectively aggregated together as the sum of all value in the universe from the perspective of an impartial observer of said universe.

LanceSBush @ 2025-06-25T20:07 (+2)

Thanks for the response. What you describe doesn't sound very objectionable to me, but I don't think it's what Bentham is arguing for. As far as I know, Bentham endorses non-naturalist moral realism, so he would not think that moral facts would be facts about natural phenomena such as our internal psychological states.

Joseph_Chu @ 2025-06-26T01:19 (+2)

Ah, good catch! Yeah, my flavour of moral realism is definitely naturalist, so that's a clear distinction between myself and Bentham, assuming you are correct about what he thinks.

Will Aldred @ 2025-06-25T18:00 (+9)

[resolved]

Meta: I see that this poll has closed after one day. I think it would make sense for polls like this to stay open for seven days, by default, rather than just one?[1] I imagine this poll would have received another ~hundred votes, and generated further good discussion, had it stayed open for longer (especially since it was highlighted in the Forum Digest just two hours prior).

@Sarah Cheng

  1. ^

    I’m unsure if OP meant for this poll to close so soon. Last month, when I ran some polls, I found that a bunch of them ended up closing after the default one day even after I thought I’d set them to stay open for longer.

Sarah Cheng @ 2025-06-25T18:19 (+6)

Ugh I agree yeah, thanks for flagging this! I re-opened the poll by manually updating it in the db, and we should increase the default duration of polls.

Micah Hauger 🔹 @ 2025-06-25T04:32 (+7)

Morality is Objective


I really appreciated this post. Like many here, I find it hard to make sense of our strongest moral convictions, especially about things like torture or slavery, without concluding that some moral facts are objective. As C.S. Lewis put it, we don’t call a line crooked unless we have some idea of a straight one.

I understand the concern that moral facts might seem metaphysically strange, but I don't think they are any stranger than logical or modal truths. Denying them also seems to undermine the kind of moral discourse we often want to have.

Personally, I ground morality in the character of God. Even putting that aside, I find moral anti-realism hard to accept. If nothing is really wrong, then what reason do we have to care about injustice beyond our own preferences?

I'm curious how anti-realists would approach serious moral disagreements, such as those involving human rights abuses, without appealing to something deeper than social consensus or personal feeling. Can we say "this is wrong" in any meaningful way if morality is only expressive or constructed?

ThomasEliot @ 2025-06-25T15:13 (+2)

Like many here, I find it hard to make sense of our strongest moral convictions, especially about things like torture or slavery, without concluding that some moral facts are objective.

To whom does "our" refer? Most people throughout history do not seem to have shared these intuitions. If "people intuit things as being good/right or bad/wrong" is evidence for their moral truth/falsity, then it seems clear that the positions with the most evidence supporting them are "it is wrong to torture or enslave the ingroup and right to torture and enslave the outgroup"

Can we say "this is wrong" in any meaningful way if morality is only expressive or constructed?

Yes, in the same way that we can make meaningful statements about the quality of art: either by expressing subjective opinions or by defining terms and discussing it in terms of those. 

Manuel Del Río Rodríguez 🔹 @ 2025-06-25T07:42 (+2)

I understand the concern that moral facts might seem metaphysically strange, but I don't think they are any stranger than logical or modal truths.

Not a Philosophy major, so you'll have to put up with my lack of knowledge, but I think I'd say that logical truths are contingent on the axioms being true, which is determined by how well they seem to match the world and our perceptions of it in the first place. And there are alternatives to classical logic that are 'as true' and generate logical truths as valid as those of classical logic. Not sure about modal truths -it is not something I've read about yet-. To the extent I grasp them, they appear constructed or definitional, not absolute, i.e.:

“A square cannot be round.” → because of how you define a square

“It is possible that life exists on other planets.” → the question is about probabilities

“Necessarily, 2 + 2 = 4.” → Only if the Peano Axioms and ZFC are assumed

I'm curious how anti-realists would approach serious moral disagreements, such as those involving human rights abuses, without appealing to something deeper than social consensus or personal feeling. Can we say "this is wrong" in any meaningful way if morality is only expressive or constructed?

Can't speak for others, but can for myself. I'd say that first, some preferences are widely agreed upon to begin with (at least in liberal, Western societies). When there's a conflict, we have the framework of societal rules and norms to solve it, which we accept as the best scenario for maximizing our individual well-being, even if it comes with some trade-offs at times. If there's a serious disagreement between my preferences and those encoded in the rules, norms and contracts, I try to change those through the appropriate channels. If I fail and it is something non-negotiable to me, I would have to leave my society and go to another that is better attuned to me.

Manuel Del Río Rodríguez 🔹 @ 2025-06-24T20:32 (+7)

Strong disagree. I am not closed to being persuaded on this, but I haven't found your arguments convincing yet.

Even before going into details, though, I'd like to start with the end. I see that you find it intuitively very hard to reject the stance-independent wrongness of torture. If it boils down to intuitions, I find it as hard to accept that morality could be anything other than a human invention that is useful for some instrumental needs, and nothing more.

I am still starting to explore the philosophical grounds for my intuitions, but at the moment, I think a valid summary is something like this:

Moral Anti-Realism: moral statements do not express stance-independent truths. There is no objective moral reality analogous to mathematical or physical facts.

Contractarian Ethics: moral obligations are agreements between rational agents. Ethics emerges from social contracts (negotiated, context-sensitive rules for mutual benefit) not from metaphysical truths.

Subjective Preference: Moral norms are built from individual preferences, desires, and aversions filtered through the pragmatic need to live together peacefully and negotiate conflicts. Some preferences (e.g. for not being tortured) are near-universal, but still not “objective.”

Rationality is procedural and instrumental: it is about coherently pursuing one’s preferences and goals, given the available information, constraints and beliefs.

Skepticism about all intuitions: Moral intuitions are evolved (biologically and culturally) emotional heuristics which we've internalized and been policed and indoctrinated into since childhood.

Nitpicking some of the stuff you talk about:

But lots of moral statements just really don’t seem like any of these. The wrongness of slavery, the holocaust, baby torture, stabbing people in the eye—it seems like all these things really are wrong and this fact doesn’t depend on what people think about it.

'Seems' is a verb you use a lot all through this section. Lots of things seem, but we've learned not to trust intuitions. The sun seems to move and rise in the East. With empirical stuff, we can at least make some observations and measurements, develop some theories and put them to the test. We can't seem to have anything similar with ethics. A plausible explanation for why the things you list seem morally true to us is the same as for why, from the top of a skyscraper in a city, the streets below all seem to radiate outward from your position. We are Westerners, we are part of a culture with specific values that has, because of accidents, been tremendously materially, economically and politically successful. It is easy to imagine we are, if not at a pinnacle, 'on the right road of history' and that everyone in the present will have to converge on and deepen our project. I find it much more likely that 500 years from now, our successors -probably non-WEIRD people, perhaps AI- will look with the same contempt on our moral fantasies as we have for the cults of the Roman and Babylonian gods. You're assuming as obvious a narrative of linear moral progress which I think is really open to disputation.

If I have a reason to prevent my own suffering, it seems that suffering is bad, which gives me a moral reason to prevent it.

Suffering is bad for me. It seems plausible to assume it will also be so for others, which means I should use this piece of information as part of the bargaining toolset for game-theoretically negotiating with others the satisfaction of my preferences, with the minimal sacrifice one can get away with while maximizing the overall results (but only because the latter ultimately gives me a bigger satisfaction than I would get in the absence of agreements and contracts).

But this means that moral anti-realists must think that you can never have a reason to care about something independent of what you actually do care about. This is crazy as shown by the following cases:

I fail to see where you're going with these contrived examples of yours. Like, what people desire is (I'd say always, but let's caveat it a bit) what gives them pleasure. It is not plausible to consider cases where this is not the case. But even if that wasn't the case, I don't see the irrationality even in these examples. You’re assuming a very specific, value-laden view of rationality -one that says people are “irrational” if they pursue ends you see as harmful, malformed, or futile. But I imagine anti-realists view rationality as I stated above: as consistency between means and ends. If someone has strange or harmful goals, that may be sad or tragic to you, but it’s not irrational on their terms. You’re just begging the question by smuggling in your own evaluative framework as if it were universal.

But just as there are visual appearances, there are intellectual appearances. Just as it appears to me that there’s a table in front of me, it appears to me that it’s wrong to torture babies. Just as I should think there’s a table absent a good reason to doubt it, I should think it’s wrong to torture babies. In fact, I should be more confident in the wrongness of torturing babies, because that seems less likely to be the result of error. It seems more likely I’m hallucinating a table than that I’m wrong about the wrongness of baby torture.

This analogy fails because it treats moral intuition like sensory perception, but without acknowledging the critical difference: empirical perceptions are testable, correctable, and embedded in a shared external reality. I might trust that I see a table but I can measure it, predict how it behaves, let others confirm it. Moral intuitions don’t offer that. They’re not observable facts but untestable gut reactions. Saying “I just see that baby torture is wrong” is not evidence, it’s a psychological datum, not a method of discovery. You’re proposing a methodology where feeling intensely about something counts as knowing it, even in the absence of any testing, mechanism, or independent verification. That’s not realism; it’s intuitionism dressed as epistemology.

We all begin inquiry from things that “seem right”, but in empirical and mathematical domains, we don’t stop there. We test, predict, measure, or prove. That’s the key difference: perception and intuition may guide us initially, but scientific realism and mathematical Platonism justify beliefs by their explanatory power, coherence, and predictive success. In contrast, moral realism lacks any comparable mechanism. You can’t test a moral intuition the way you test a physical hypothesis or formalize a logical inference. There’s no experiment, model, or predictive structure that tells us whether “baby torture is wrong” is a metaphysical fact or just a deeply shared psychological aversion. You’re claiming parity where there’s a methodological gap.

As for the claim that critics of intuition rely on intuitions too: there’s a difference between relying on formal coherence (e.g., basic logical tautologies) and on moral gut feelings. The probability example confuses things, as Bayes' theorem and the conjunction rule aren't known by intuition but by mathematical derivation, and our confidence in them comes from their internal consistency and predictive accuracy, not how they “feel.”
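To make the "mathematical derivation" point concrete, here is a minimal sketch (with A and B standing for arbitrary events) of how the conjunction rule falls out of the probability axioms alone, with no appeal to how anything feels:

$$
P(A \wedge B) \le P(A), \qquad \text{since } P(A) = P(A \wedge B) + P(A \wedge \neg B) \ \text{and} \ P(A \wedge \neg B) \ge 0.
$$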

I'd also like to go into the last two big topics you propose, i.e., evolutionary debunking arguments and physicalism, but this post is already too long, and probably not conducive to a conversation.

Evander H. 🔸 @ 2025-06-26T10:40 (+6)

90% agree

The consciousness argument:

The personal identity argument: 

Why this supports objective morality:

Philosophical honesty:

Rafael Ruiz @ 2025-06-25T10:35 (+6)

Morality is Objective

(Vote Explanation) Morality is objective in the sense that, under strong conditions of ideal deliberation (where everyone affected is exposed to all relevant non-moral facts and can freely exchange reasons and arguments) we would often converge on the same basic moral conclusions. This kind of agreement under ideal conditions gives morality its objectivity, without needing to appeal to abstract and mind-independent moral facts. This constructivist position avoids the metaphysical and epistemological problems of robust moral realism, while still grounding moral claims in terms of justification.

(Although their views are not exactly the same, I take this view to be aligned with the metaethical views of philosophers Christine Korsgaard, Sharon Street, Philip Kitcher, and Jürgen Habermas. https://plato.stanford.edu/entries/constructivism-metaethics/ )

mercury @ 2025-06-25T01:52 (+6)

Morality is Objective

Two claims, one: moral facts do not exist, two: if they did, you could ignore them without making truth broadly your enemy.

One:
Some seemings can be cross-verified extensively, such that you either end up believing those seemings or end up incapable of functioning in the world. For instance, if it seems to my visual perception that a table exists, this is extremely cross-verifiable. If I believe murder is wrong, this is not cross-verifiable at all.

If I say "there is not a table here", then to maintain this position I have to hold that all of my senses deceive me constantly, and deceive me when they purport to show other measuring instruments detecting the table, and so on until I am doubting all methods of measuring anything in the world. So either I live in a world where there is a table, or I live in a world that is beyond my ability to meaningfully observe, and I opt to ignore the possibility I am in a world I cannot meaningfully observe.

If I say "there is a fact that jaywalking is wrong", what new predictions can I make over saying "people believe that jaywalking is wrong"? "People will try to stop jaywalkers", "people will be upset about jaywalking", and so on are all predicted under both theories. If there is a fact that jaywalking is wrong, I might imagine that someone could prove that fact, or measure it, but nobody has. Absence of evidence is weak evidence of absence, and I did not start out believing in moral facts to begin with.

Two:
If moral facts were established, it is unclear to me why I would have a reason to care. If you produced before me an infallible machine that told me "it is in all cases immoral for anyone or anything to ever be happy", I don't see why I would have any cause to care about this fact. It seems completely unrelated to facts about what things make me happy, facts about my preferences, and facts about tables and chairs and protons, so it seems like I can ignore this fact and go about my life as normal. I could even decide to maximize immorality (happiness) in response to this claim, without running into logical problems.

Normally when you ignore true things, things you disprefer might happen to you. You might stub your toe on a table that you were ignoring. And if you try to defend your false belief, you may have to contort more and more of your world model, until you believe all sorts of false things, which makes it hard to function effectively in the world. Like that there is an enormous conspiracy by all major furniture companies and consumers and banks to pretend that tables are real.

With moral facts, this does not seem to be the case. I see no route where I start by denying that "jaywalking has an innate property of evil", and end up having to deny that protons exist.

Style Note:
I chose to use jaywalking as my example of something immoral. I apply the exact same reasoning to arbitrarily emotionally loaded concepts. I prefer not to invoke those concepts when they are not necessary.

Other Notes:
Companions in guilt is unsuccessful because there aren't normative facts in epistemology either. Evolutionary debunking is successful because yes, it really is extremely plausible that people would end up packaging a sense of disgust/dislike/dispreference at antisocial acts as a property of the act instead of a property of their psychology -- more plausible than them being wrong that there is a qualitative experience of love, and more plausible than there being mind-independent reasons for action.

ThomasEliot @ 2025-06-25T15:01 (+5)

This is a deeply unconvincing post.

It just seems to be a brute intuition—one that I don’t share.

Indeed. 

The central focus on "torturing babies" being objectively wrong (and the not particularly subtle hidden basis that this is all due to the existence of God) is a particularly odd choice in this forum, which is disproportionately Jewish; by halakhic law, people are required by God to circumcise male babies (and circumcision is also just common among Americans in general). 
 

The view that these statements are neither true nor false has unique linguistic problems. Proponents claim that moral sentences are like commands—they’re not even in the business of expressing propositions. If I say “shut the door,” or “go Dodgers,” that isn’t either true nor false. But because of that, it makes no sense to ask “go Dodgers?” or “is it true that shut the door?” Similarly, it makes no sense to say “if shut the door then shut the door now, shut the door, therefore, shut the door now.” But it does make sense to say things like “is abortion wrong?” or “if murder is wrong, then so is abortion.” This shows that moral statements are, at least in many cases, in the business of expressing propositions—asserting things supposed to be true or false.


I'm genuinely surprised that anyone continues to present the argument from grammar. The obvious implied part of those sentences is "I would prefer it if you" before "shut the door". "Is it true that I would prefer it if you shut the door" makes obvious sense, as does "if I would prefer it if you shut the door then I would prefer it if you shut the door now, I would prefer it if you shut the door, therefore, I would prefer it if you shut the door now". "Is abortion wrong?" becomes "do you prefer if people not abort?", "if murder is wrong, then so is abortion" becomes "if you would prefer that people not murder, then I would prefer it if you also prefer people not abort", et cetera.

These objections are both obvious and well known.  Neglecting to address them speaks to the seriousness of this level of engagement with the counterarguments - a pattern I see consistently from proponents of moral realism. 

 

ThomasEliot @ 2025-06-25T14:58 (+5)

Morality is Objective

The arguments presented in this essay are neither novel nor convincing, and they rely on intuitions that the author holds, that I do not share, and that the author does not justify.

 

edit: I did not expect this aspect of my vote to become a comment

McBrian16 @ 2025-06-25T00:33 (+5)

Before reading the article: The argument I often hear in support of moral realism appeals to moral experience, but moral experience seems totally consistent with moral anti-realism being true. I don't know what evidence for moral realism would look like even if it were to exist, but that would just mean that there's no reason to prefer either view (anti-realism vs. realism).  

After reading: This Cuneo-style argument that Matthew used in this writing is interesting. I forgot about the bad company argument that I learned about several years ago. Don't I lose epistemic facts if I lose moral facts? Maybe so. It seems self-refuting to argue that people should be indifferent to purported epistemic facts; if you want people to clearly assess the merits of your argument against taking epistemic facts seriously, you seem to need them to have rationality. I don't think you'd want them to misrepresent your argument against epistemic facts, use an ad hominem as justification for rejecting it, etc.

LanceSBush @ 2025-06-25T18:33 (+2)

(I don't take myself to necessarily be disagreeing with you, just addressing the same topic). The problem with moral realism has never been the morality part; it's the normativity part. This is why I endorse global normative antirealism, i.e., the position that there are no stance-independent normative facts at all, including moral, epistemic, and so on.

Companions in Guilt arguments don't strike me as compelling at all because I don't see any more reason to think epistemic realism is true than that moral realism is true. So someone saying that if I want to reject one, I have to reject both doesn't cause me to pay any cost at all: I already reject both on independent grounds, anyway.

As far as losing epistemic facts: Epistemic antirealists don't deny there are "epistemic facts," they only deny that there are stance-independent epistemic facts. Nothing about epistemic antirealism prevents you from thinking there are epistemic facts, or better and worse ways of acquiring true or justified beliefs.

McBrian16 @ 2025-06-25T20:15 (+2)

It’s objectively good to see you here, Lance!

McBrian16 @ 2025-06-26T02:48 (+1)

I guess my concern is that if I said, "This depends on your stance on what counts as an epistemic fact, but you should accept the conclusions of a sound argument," what prevents someone from saying, "Well, if it's stance-dependent, then I'm totally justified in accepting unsound arguments"? It seems a person would be equally justified in accepting unsound arguments as sound ones.

LanceSBush @ 2025-06-26T20:56 (+2)

Nothing prevents someone from saying that. But nothing would prevent someone from saying that even if epistemic realism were true. 

Let's say for a moment epistemic realism was false. What would you do? I'd do exactly what I currently do. It's already important to care about what's true, and there will be consequences for you if you ignore what's true. The same is true for everyone else. Nothing would change. I don't think the truth of epistemic realism would have practical consequences at all.

McBrian16 @ 2025-06-26T21:37 (+2)

Interesting. I'll have to think on this. Thanks for your comments!

Arepo @ 2025-06-28T03:41 (+4)

(Deleted my lazy comment to give more colour)

Neither agree nor disagree - I think the question is malformed, and both 'sides' have extremely undesirable properties. Moral realism's failings are well documented in the discussion here, and well parodied as being 'spooky' or just wishful thinking. But moral antirealism is ultimately a doctrine of conflict - if reason has no place in motivational discussion, then all that's left for me to get my way from you is threats, emotional manipulation, misinformation and, if need be, actual violence. Any antirealist who denies this as the implication of their position is kidding themselves (or deliberately supplying misinformation). 

So I advocate for a third position.

I think the central problem with this debate is that the word 'objective' here has no coherent referent (except when people use it for silly examples, like referring to instructions etched into the universe somewhere). And an incoherent referent can neither be coherently asserted nor denied. 

To paraphrase Douglas Adams, if we don't know what the question is, we can't hope to find an understandable answer.

I think it's useful to compare moral philosophy to applied maths or physics, in that while there are still open debates about whether mathematical Platonism (approximately, objectivity in maths) is correct, most people think it isn't (or, rather, that it's incoherently defined) - and yet most people still think well-reasoned maths is essential to our interactions with the world. Perhaps the same could be true of morality.

One counterpoint might be that unlike maths, morality is dispensable - you can seemingly do pretty well in life by acting as though it doesn't exist (arguably better). But I think this is true only if you focus exclusively on the limited domain of morality that deals with 'spooky' properties and incoherent referents.

A much more fruitful approach to the discussion, IMO, is to start by looking at the much broader question of motivation, aka the cause of Agent A taking some action A1. Motivation has various salient properties:

For example, many of us might choose to modify our motivations so that we e.g.:

I would argue that some - but not all - of these modifications would be close to or actually universal. I would also argue that some of those that weren't universal for early self-modifications might still be states that iterated self-modifiers would gravitate towards. 

For example, becoming more 'intelligent' through patient thought might cause us to focus a) more on happiness itself than instrumental pathways to happiness like interior design, and b) to recognise the lack of a fundamental distinction between our 'future self' and 'other people', and so tend more towards willingness to help out the latter.

At this point I'm in danger of aligning this process with hedonistic/valence utilitarianism, but you don't have to agree with the previous paragraph to accept that some motivations would be more universal, or at least greater 'attractors', than others, even while disagreeing on the particulars. 

However it's not a coincidence that thinking about 'morality' like this leads us towards some views more than others. Part of the appeal of this way of thinking is that it offers the prospect of 'correct' answers to moral philosophy, or at least shows that some are incorrect - in a comparable sense to the (in)correctness we find in maths.

So we can think of this process as revealing something analogous to 'consistency' in maths. It's not (or not obviously) the same concept, since it's hard to say there's something formally 'inconsistent' in e.g. wanting to procrastinate more, or to be unhappier. Yet wanting such things is contrary in nature to something that for most or all of us resembles an 'axiom' - the drive to e.g. avoid extreme pain and generally to make our lives go better.

If we can identify this or these 'motivational axiom(s)', or even just find a reasonable working definition of them, we are in a similar position to the one we occupy in applied maths: without ever showing that something is 'objectively wrong' - whatever that could mean - we can show that some conclusions are so contrary to our nature - our 'nature' being 'the axioms we cannot avoid accepting as we function as conscious, decision-making, motivated beings' - that we can exclude them from serious consideration. 

This raises the question of which and how many moral conclusions are left when we've excluded all those ruled out by our axioms. I suspect and hope that the answer is 'one' (you might guess approximately which from the rest of this message), but that's a much more ambitious argument than I want to make here. Here I just want to claim that this is a better way of thinking about metaethical questions than the alternatives. 

I've had to rush through this comment, but I'm making 2.5 core claims here:

  1. One can in principle imagine a way of 'doing moral philosophy' that excludes some set of conceivable moralities
  2. That a promising way of doing so is to imagine what we might gravitate towards if we were to iteratively self-modify our motivations
  3. That a distinct but related promising way of doing so is to recognise quasi- or actually-universal motivational 'axioms', and what they necessitate or rule out about our behaviour if consistently accounted for

I don't know if these positions already exist in moral philosophy - I'd be very surprised if I'm the first to advocate them, but fwiw I didn't find anything matching them when I looked a few years ago (though my search was hardly exhaustive). To distinguish the view from the undesirable properties of both traditional sets of positions, and with reference to the previous paragraph, I refer to it as 'moral exclusivism'. 

Obviously you could define exclusivism into being either antirealism or realism, but IMO that misses its ability to capture the intuition behind both without necessitating the baggage of either.

Toby Tremlett🔹 @ 2025-06-26T09:09 (+4)

Morality is Objective

 

I don't see how something like morality could be objective. I can't imagine what it would look like for someone to convince someone else that an action was wrong, for a reason that isn't stance-dependent to them (i.e. a reason they don't already find at least partially compelling). 

When I was reading much more about this, it made me sympathetic to something like Sharon Street's Humean constructivism. In short (and from memory), it's the view that we can't avoid feeling reasons for actions (if you see a truck driving towards a child, for example, you feel a reason to help them), and also that we want consistency. So morality is just kind of there in our responses to the world, and the work of figuring out what is right is the work of reasoning about the reasons we feel, to make them more consistent. 

This does lead to the idea that you can't say "you're wrong" to an 'ideally coherent Caligula', i.e. someone who takes themselves to have reasons to torture for fun, and is in fact correct about that - in other words, on reflection they would indeed still have reasons to torture. I think this would appear pretty gross to Bentham's Bulldog, but I bite the bullet. I don't think you can honestly say the ideally coherent Caligula is 'wrong', but you can obviously say "we're locking you up". 

I'm a little reluctant even to accept this account though, because I'm not sure whether I fully accept that I take myself as having "reasons" to act when I respond to specific circumstances. This is clearly the part of the argument where the more cognitivist elements are smuggled in, and I don't know whether I agree with that smuggling. 

Vasco Grilo🔸 @ 2025-06-25T18:37 (+4)

I see morality as objective because positive conscious experiences are objectively good, and negative conscious experiences are objectively bad. Then there is subjectivity in figuring out what increases expected total hedonistic welfare.

Pablo @ 2025-06-25T13:23 (+4)

Morality is Objective

I do not like the expression ‘Morality is objective’, because it comprises both claims I'm very confident are not objective (“You ought not to kill”) and claims I'm very confident are objective (“Suffering is bad”). More generally, I am a moral anti-realist by default, but am forced to recognize that some moral claims are real—specifically, certain axiological claims—because their objective reality is revealed to me via introspection when I have the corresponding phenomenal experiences (such as the experience of being in agony).

Bentham's Bulldog @ 2025-06-25T14:45 (+2)

Moral realism is just the idea that some moral propositions are objectively true, not that all of them are true. 

Pablo @ 2025-06-25T17:36 (+2)

Sure, who could possibly believe that all moral propositions are objectively true? My point was that moral realists typically believe that some axiological and some deontic claims are objectively true, and that if you are an anti-realist about the former and a realist about the latter, calling yourself a “moral realist” may fail to communicate your views accurately.

Peter Wildeford @ 2025-06-24T19:50 (+4)

Morality is Objective


People keep forgetting that meta-ethics was solved back in 2013.

Richard Y Chappell🔸 @ 2025-06-25T01:37 (+11)

fwiw, I think the view you discuss there is really just a terminological variant on nihilism:

The key thing to understand about hypothetical imperatives, thus understood, is that they describe relations of normative inheritance. “If you want X, you should do Y,” conveys that given that X is worth pursuing, Y will be too. But is X worth pursuing? That crucial question is left unanswered. A view on which there are only hypothetical imperatives is thus a form of normative nihilism—no more productive than an irrigation system without any liquid to flow through it.

To get any real normativity out of hypothetical imperatives, you need to add some substantive claims about desirability, or what ends are categorically worth pursuing.

Until then, we’ve just got a huge array of possible normative systems channeled out. We can “hypothetically” predict and compare racist normative advice, anti-racist normative advice, and so on for infinitely many other possible ends. But we need something more to break the symmetry between them and yield actual reasons to do one thing rather than another. To get any concrete advice, we need to fill out just one of the possible channels as the one to follow.

Peter Wildeford @ 2025-06-25T14:25 (+3)

"Nihilism" sounds bad but I think it's smuggling in connotations I don't endorse.

I'm far from a professional philosopher but I don't see how you could possibly make substantive claims about desirability from a pure meta-ethical perspective. But you definitely can make substantive claims about desirability from a social perspective and personal perspective. The reason we don't debate racist normative advice is because we're not racists. I don't see any other way to determine this.

Richard Y Chappell🔸 @ 2025-06-25T14:37 (+11)

Distinguish how we determine something from what we are determining.

There's a trivial sense in which all thought is "subjective". Even science ultimately comes down to personal perspectives on what you perceive as the result of an experiment, and how you think the data should be interpreted (as supporting some or another more general theory). But it would be odd to conclude from this that our scientific verdicts are just claims about how the world appears to us, or what's reasonable to conclude relative to certain stipulated ancillary assumptions. Commonsense scientific realists instead take our best judgments to reflect fallible verdicts about a mind-independent truth of the matter. The same goes for commonsense moral realists.

Bentham's Bulldog @ 2025-06-24T17:56 (+4)

Morality is Objective

Um, see above :)

NickLaing @ 2025-06-26T06:55 (+3)

I'm a Christian, so, well, bit of a slam dunk in the same direction as the OP...

I somewhat struggle to understand how objective morality works if you are not a theist or religious in some sense, but I'm not a moral philosopher so I'll be missing a lot.

Even though I think morality is objective, most likely I'm a long, long way from that true North Star. I shudder to think what percentage of my own current ideas about morality are right, especially when I look back and see how much my ideas of morality have changed during my life. 

Toby Tremlett🔹 @ 2025-06-26T13:18 (+4)

Not sure that being Christian/theist is a slam dunk, for reasons that start with Euthyphro... i.e. is God following an already objective morality (in which case being theist doesn't help - morality would be the same whatever God did), or does God somehow make a particular morality objective (in which case you can still ask - why that one rather than another?). 

NickLaing @ 2025-06-26T18:47 (+5)

Yeah that's a good question. I would have thought both of those options would mean that morality was objective still? Even if the objectiveness seemed pretty arbitrary and we could ask why.

From my perspective the morality stems from who God is, and is baked into the fabric of the universe. From a biblical perspective on one level God "is" love, and truth, and wisdom, and the universe came out of that objective reality. Then there is no other morality that exists. 

Understanding and living by that morality, though, is a whole 'nother matter.

I feel a bit like a kindergarten student here, so I should probably just be quoting some boss theologian...

Toby Tremlett🔹 @ 2025-06-27T07:35 (+4)

Yeah that's a good question. I would have thought both of those options would mean that morality was objective still? Even if the objectiveness seemed pretty arbitrary and we could ask why.

Oh yeah for sure, I think my argument (if right) means more that being a Christian/theist doesn't let you skip any steps, you still have to argue why morality is objective. 


But I think the God = love, truth, wisdom = objective reality thing might sneak you out of Euthyphro. Not sure, and also not a boss theologian lol. 

Concentrator @ 2025-06-25T14:06 (+3)

Morality is Objective

 

You're selling me a horse and you tell me "this horse is fast". I look at the horse and say "yeah, it looks like it would be a pretty fast runner compared to other horses". To which you reply "no, no, it's not about what you think makes it fast. This horse is fast. Fact. It would be fast even if it were the only horse ever. Even if nobody thought it was fast... it would be."

And you go on to tell me that because the horse looks fast, I should believe it is "objectively" fast.

Now I do presume that the horse is fast, in that it appears likely to be what I and most everybody would consider to be fast for a horse. But that's not the meaning that your words convey. Which is instead that I should regard the horse as "fast" based on what the word "fast" signifies and, I can only assume, by reference to some unspecified basis, something which is of a kind that doesn't rely upon our personal views or values, and which universally governs what counts as "fast" in this context. But if there is such a basis... it is not making any appearances.


BB's intuitions

What's with the horse thing? Well, if I understand BB's underlying pro-realism argument correctly, he's suggesting that we should presume that there are stance-independent moral facts, that we should maintain that belief unless we have cause to update our beliefs, and that there isn't sufficient cause for an update.

I may presume that the horse is fast based on my conception of what "fast" should mean for a horse, but there's no reason to presume that the horse is fast in the stance-independent sense without first seeing even a hint of something that might lead to an objective standard for horse fastness (which doesn't exist).

It appears that torture is wrong? Ok, I can accept that as the basis for an initial presumption that torture is morally bad. That's not the same thing as a presumption that it is stance-independently morally bad. Does torture appear stance-independently wrong? No! The opposite. We might even say that it seems extremely intuitive that torture appears wrong precisely because we already have some stance about the infliction of pain without good cause being wrong (or the like).

 

Particular responses

 But this means that moral anti-realists must think that you can never have a reason to care about something independent of what you actually do care about. 

Kinda. I only care about reducing suffering (etc.) because I already actually do care about whether people suffer (etc.) and other factors relevant to people's opportunities for fulfillment. If I didn't have those stances... then I wouldn't place a high value on reducing suffering unless I had some *other* stances that led me to value reducing suffering.

1. A person wants to eat a car. ... On moral anti-realism, they’re not being irrational. They have no reason to take a different action.

Irrationality doesn't come into it unless the person is proceeding in a way that is at odds with their objectives. You don't have to enjoy something to prefer it over something else. Maybe eating the car would save a billion lives.

Wanting to eat a car would be irrational if I valued not going through that ordeal more than I valued my motivation(s) to eat the car.

 

General responses

All it would take to conclusively establish moral realism would be to identify a single moral statement that can be demonstrated to be true without reliance upon any person's stance. That would be setting the bar unfairly high though... let's lower our expectations to merely showing that at least one such statement appears likely to be true, to a standard similar to the one we apply to other kinds of facts that we regard as established.

For example, if you say there’s a table then we might presume that it’s true but we’re not going to treat it as an established fact just because you say you saw it. But we don't need very much before we will regard it as a fact.

That should be doable, if the following is correct:

Moral realists aren’t special pleading. We believe in moral facts for the same reason that we believe in any other basic kind of fact.

You can say “torturing babies for fun is wrong” and we can easily establish that in the stance-dependent sense so long as we have adopted a suitable stance, e.g., that causing suffering without critical need is morally bad. One thing follows from the other (more or less).

But how do you establish that to a suitable standard without employing any basic stance about what moral conduct should or shouldn’t entail?

That’s the case that I think moral realists need to make. 

 

Analogies

Say that everybody agrees that there is a table in a room and it is blue. The colour of the table could be regarded as an established fact. There is a colour-categorisation-stance involved as everybody must first have at least some idea about what shades/wavelengths “blue” can correspond to.

But... there are some shades that are so clearly within the bounds of what we mean by "blue" that if you didn't consider them "blue" then you'd have to be referring to a different concept than everybody else. For those shades, the stance is inherent to the concept we have in mind. We wouldn't have to import any of our own stances. We can have "real" facts about "blue" propositions.

Is the table reflecting light within the range that humans regard as "blue"? Yes. It really is doing that. Is there really a concept called "blue"? Yes. Does it correlate directly with some phenomenon that occurs separately to human perception? No, it's not real in that sense. It's a human construct. There is no "blue" unless we conceive of it; there are only things with characteristics that, for some values, universally fit what we refer to as blue.

In the same way, if "morally bad" had a universal connotation that necessarily meant that, say, the act of dashing infants against rocks must always fit the definition - otherwise you couldn't possibly be referring to the same thing that everybody else means when they say "morally bad" - then that would be an example of a stance-independent moral fact... Alas, that's not the case. 

If a religion says that dashing infants is not always morally bad, they're still talking about the same idea. Our conception of "morally bad" is more abstract than "blue". 

Which rules out any examples based solely on definitions, like we can have for "blue" and for numbers.

Is there some other way that the moral realist can get from the abstract concept of "bad" to a specific "this is bad" without presupposing that some things are bad or presupposing anything about what makes things bad?

Falk Lieder @ 2025-06-25T01:22 (+3)

Morality is Objective

I believe that the purpose of morality is to promote everyone's well-being. We can use the scientific method to determine how each action, rule, trait, and motive affects overall well-being. Science is objective. Therefore, it is possible to make objective statements about the morality of actions, motives, and traits.

KonradKozaczek @ 2025-06-24T23:33 (+3)

Morality is Objective

Suffering never fails to be undesirable, even if only individual subjects experience it as such. The redness of red is real, even if not everyone perceives red at the same time.

LanceSBush @ 2025-06-25T18:36 (+2)

If "suffering" never fails to be undesirable, perhaps this is because we are simply stipulating that suffering must be undesirable, so any state one doesn't find undesirable isn't an instance of suffering. If we're not doing that, then I'd take it we need some kind of non-tautological account of suffering, and then we'd need to show it's always undesirable... but that looks like an empirical question, and I don't think there's any data that supports the notion that suffering is always undesirable.

As far as the redness of red: I endorse qualia quietism; I don't think there is any redness to red and I don't even think such remarks mean anything in particular.

Conlan @ 2025-06-24T19:01 (+3)

Morality is Objective

Subjective in the sense that there is no inherent 'goodness' quality to things, but objective in the only useful sense that humanity and maybe sentient life can have better or worse experiences according to a fundamental baseline.

Osty @ 2025-06-24T18:56 (+3)

You've written elsewhere that you think even ostensibly good or bad actions only have like a ~50.01% chance of actually having good or bad outcomes in the long run due to the butterfly effect. So let's say that a particular instance of torturing someone, due to a long, unexpected chain of cause and effect, ultimately leads to a flourishing future for everyone. So in your view, was this action then objectively morally right? Or do you say no, since it was bad in expectation, it was still objectively morally wrong despite its ultimate consequences?

If you go with the latter, which I suspect you will, then you must admit that actions themselves cannot be evaluated as right or wrong in a vacuum - you must also consider how knowledgeable the actor is about the consequences of their actions. Whether something is good or bad in expectation depends on your credence in various possible outcomes of the action. If the act of torture was committed by Laplace's demon himself because he really did know it would ultimately lead to a flourishing future, then it would be morally right.

Now obviously nobody can reasonably be expected to have Laplace's demon level of accuracy in their predictions of the consequences of all of their micromovements millions of years into the future. But short of that, what level of knowledge should people be expected to have? And can we ever blame them for making the wrong judgment because they lacked sufficient knowledge? I can imagine someone who is careless and just goes with their gut instinct all the time (in which case it feels appropriate to blame them even if their action was good in their expectation), or I can imagine someone who is thoughtful and reflective about their actions and does thorough research to understand the consequences as fully as possible.

You can say, of course, you should be thoughtful and reflective and do research, but how far are you willing to go with that? There are a bunch of morally gray choices where it is unclear what will lead to the best consequences, but for which you could theoretically eke out a few extra percentage points of confidence in the right answer by doing extensive research. Should everyone be expected to perform exhaustive research across many domains of knowledge before ever taking an action? Eventually, the cost of doing this research itself must be taken into account - at what point are you allowed to pull the trigger and take the action? Are the answers to these questions really objective, fundamental features of reality? That just seems bizarre to me.

Bentham's Bulldog @ 2025-06-24T21:14 (+2)

There's a distinction between subjective rightness and objective rightness (these are poor terms given that they're both compatible with moral realism).  I'd say that if you torture someone thinking it will be bad but it turns out good, that was subjectively bad but objectively good.  Given what you knew at the time you shouldn't have done it, but it was ultimately for the best.

Osty @ 2025-06-24T22:00 (+1)

Ok, but this still leaves unanswered the question of whether and to what degree you have a moral obligation to become better informed about the consequences of your actions. Many people are blissfully unaware of what happens in factory farms. Are they doing nothing (subjectively) wrong, or is there a sense in which we can say they "should have known better"? Can I absolve myself of subjective wrongness just by being an ignoramus?

Bentham's Bulldog @ 2025-06-24T22:10 (+2)

They're doing nothing subjectively wrong if they really don't know.  But if they knowingly don't look into it then they're a bit blameworthy. 

tobyj @ 2025-06-24T16:41 (+3)

I just wanted to comment to say I'm very confused about this question/framing - I tried to figure out why and I think it's something to do with uncertainty about what "objective" even means. 
Wondering if anyone has a good exploration of what it means for a thing to be objective?

(My intuition is that I have a "sense of the objective" and that it is pointing at "things I anticipate other people will agree with", or "things I'd feel annoyed/crazy if people contradicted" - this would point to there being a thing that is objective morality, but it's argued from a subjective frame, so I'm not sure that's right - so is there an objective definition of objectivity?)

Bentham's Bulldog @ 2025-06-24T17:58 (+5)

Objective just means that its truth doesn't depend on what people think about it.  The Earth being round is objective--even if everyone thought it was flat, it still wouldn't be flat. 

JackM @ 2025-06-27T23:22 (+2)

But this means that moral anti-realists must think that you can never have a reason to care about something independent of what you actually do care about. This is crazy as shown by the following cases:

  1. A person wants to eat a car. They know they’d get no enjoyment from it—the whole experience would be quite painful and unpleasant. On moral anti-realism, they’re not being irrational. They have no reason to take a different action.

I think the person wanting to eat a car is irrational because they will not be promoting their wellbeing by doing so and their wellbeing is what they actually care about.

So the reason not to eat the car isn't stance-independent—it's based on their own underlying values.

What is the reason not to eat the car that isn’t grounded in concern for wellbeing or some other value the person already holds?

Robi Rahman @ 2025-06-26T13:10 (+2)

Morality is Objective

There's no evidence of this, and the burden of proof is on people who think it's true. I've never even heard a coherent argument in favor of this proposition without assuming god exists.

SamSklair @ 2025-06-25T22:32 (+2)

Morality is Objective

The sense of moral objectivity that I'm currently sympathetic to is some kind of constructivist account like the one introduced by John Rawls.

I think morality needs to be objective in some sense so that we can use it to resolve conflicts between different sets of values that different people hold. We need to construct moral principles so that we can resolve disputes between different subjectively held sets of values and live together in ways that are as mutually advantageous as possible. I would define "advantageous" in terms of values and preferences, or perhaps some idealised form of them, so that would make morality stance dependent in a sense, but I still think it would warrant being called objective. 

I would not think of morality as objective in the sense that there are moral facts and properties that exist independently of the judgments and attitudes of subjects. Rather I would think of it as objective in the sense that we can't simply make our own subjective values the standard by which to make moral judgments when engaging in moral discourse and inquiry. We need to justify our moral views to others, explaining why our views are correct/better and why opposing views are incorrect/worse. This is necessary to ensure that there is a common subject matter that we can think and talk about together to solve practical problems associated with living together. We need a commonly agreed upon set of norms and values that we can use to criticise and adjudicate between different subjectively held norms and values.   

I would need to flesh this out but that's roughly where I am at the moment in this debate.

Regarding realist accounts: One reason I am sceptical of non-naturalist moral realism is that I don't think you can infer metaphysical truths from intuitions. And I think intuition is the main reason people are non-naturalists.

Even if you do have intuitions that entail the truth of non-naturalist moral realism (which I'm unsure most people even do) this would just tell you how you are disposed to think about morality. Why think that your intuitions give you access to mind-independent irreducibly normative facts? Why think your psychological states are in any way responsive to these facts or to anything that could inform you of them? I think this would apply to other Platonist-type views about things like mathematics as well.

One reason I'm sceptical of naturalist moral realism is that I don't think moral/normative properties are the sorts of things we do/could observe or need to posit to explain what we observe

You could give some naturalistic reduction of moral/normative properties, but then I think you're just changing the subject.

Devin Kalish @ 2025-06-25T14:05 (+2)

I think "morality" as we discuss it and as I use it has many realish properties - I think things would be good or bad whether or not moral agents had ever come to exist (so long as moral patients did), I think we can be uncertain about which theory of ethics is "right" to begin with, and I don't think the debate to resolve this uncertainty is ultimately semantic. I think ethics has most of the stuff real things have except for the "being real" part.

 

I'm not super confident on this, but I note that most sorts of explanations of what ethics is either fall into the category of dubious empirical predictions ("ethics is the one theory all rational beings would converge on given enough time and thought"), or a muddled version of just restating a normative ethics theory ("ethics is some hypothetical ideal contract between distinct agents, or what is good for all beings taken together").

 

Maybe more personally, I think that any explanation of what we mean by "objective ethics" would have to be something that, if we programmed a perfect superintelligence to determine what the correct answer to it was, I would be satisfied deferring to whatever answer it gave without further explanation. To borrow/restate a thought experiment of Brian Tomasik's, if a perfect "ethicsometer" told me that the correct ethical theory was torturing as many squirrels as possible, I would have just learned that I don't care about ethics. I would go further than this though and say that the ethicsometer had failed to even satisfy what I mean by "ethics". I've been recommended Simon Blackburn's work on this; it seems possible I have a view most like what he calls "quasi-realism".

Tobias Häberli @ 2025-06-25T10:26 (+2)

Morality is Objective

I'm pretty confused about this, but currently I look at it something like this: 

Moral sentences state beliefs whose validity doesn't depend on whether or not anyone approves of them. These beliefs are about facts in the world, the same world that physics describes, just different aspects of it. 

Epistemically, I am a coherentist: a belief is more justified the better it fits within the most explanatorily coherent system of beliefs. I see reflective equilibrium as a useful method for approaching that coherence. 

In physics, we have a dense and well-corroborated network of beliefs, supported by prediction and intervention. Ethics, by contrast, has a thinner and more contested network, and lacks comparably strong validation tools. So, my confidence in particular moral propositions, and in moral realism itself, is correspondingly weaker. 

Still, I treat improvements in moral theory (better explanations, resolution of paradoxes, maybe convergence across cultures) as evidence that we're tracking real features of the world, just as progress in physics suggests we're tracking reality. The quality and coherence of our moral theories should inform how confident we are in moral realism. 

I'd put my credence around 60%. Coherent moral theories face enough external constraints and show some explanatory success that I slightly lean toward thinking they track real features of conscious beings and their interactions.

Bella @ 2025-06-25T09:53 (+2)

Morality is Objective

 

I was unable to come up with a non-assertive, non-arbitrary-feeling grounding for moral realism when I tried very hard to do this in 2021-22. 

 

My vote isn't further towards anti-realism, because of:

Bentham's Bulldog @ 2025-06-25T14:48 (+2)

How is this different from, say, the external world?  Like, in both cases you'll ultimately ground out at intuitions, but nonetheless, the beliefs seem justified. 

ThomasEliot @ 2025-06-25T19:55 (+1)

No? We can test for things like object permanence by having person A secretly put an object in a box without telling person B what it is, then having person B check the box later, while person A is not there, and seeing whether their reports match.

Rabbitball @ 2025-06-24T20:17 (+2)

Morality is Objective

If morality is subjective, there is nothing that promotes love over hate, peace over war, etc. apart from what we think. And so someone who thinks war is moral IS CORRECT under subjective morality, while another person who thinks the same war is immoral IS ALSO CORRECT.

LanceSBush @ 2025-06-25T14:18 (+2)

What we think is good enough. We don't need the approval of the universe to oppose hate or favor peace over war. We can act on our values. Why would I care at all if hate or war were objectively bad? Suppose moral realism was not true. Would you care any less about opposing war or hate? I'm an antirealist, and I doubt I care any less about anything of practical significance as a result.

Lloy2 @ 2025-06-28T00:56 (+1)

The most objective thing about morality (especially utilitarianism) is that some experiential states are objectively 'better' than others by virtue of their valence and that therefore moral projects, however valid they themselves are, at least take root in something real.

Arslan @ 2025-06-28T00:43 (+1)

yes

Adam Brady @ 2025-06-26T21:04 (+1)

A person desires, at some time, to procrastinate. They know it’s bad for them, but they don’t want to do their tasks. On anti-realism, this is not a rational failing.

I am just picking one of these examples, but an anti-realist could call this a rational failing. People can have many different desires, and thus many different reasons for action, e.g. you could have a desire - and thus a reason - to procrastinate, while at the same time having a stronger desire - and thus a stronger reason - to work. An anti-realist could say that one is irrational for not doing what one has most reason to do, and in this case, as they have more reason to work than to procrastinate, they are being irrational here. You may say that this is impossible, but a "stronger" desire does not have to be defined as being "more" motivational. "Stronger" desires could be understood as, e.g., more persistent desires. 

Manuel Del Río Rodríguez 🔹 @ 2025-06-27T16:53 (+1)

They could, but they could also not. Desires and preferences are malleable, although not infinitely so. The critique is presupposing, I feel, that the subject is someone who knows in complete detail not only their preferences but also their exact weights, and that this configuration is stable. I think that is a first-approximation model, but it fails to reflect the messier and more complex reality underneath. Still, even accepting the premises, I don't think an anti-realist would say procrastinating in that scenario is 'irrational', but rather that it is 'inefficient' or 'counterproductive' to attaining a stronger goal/desire, and that the subject should take this into account, whatever decision he or she ends up making - which might include changing the weights and importance of the originally 'stronger' desire.

Beyond Singularity @ 2025-06-26T20:39 (+1)

Thanks for opening this important debate! I'd like to offer a different perspective that might bridge some gaps between realism and anti-realism.

I tend to view morality as something that evolved because it has objectively useful functions for social systems—primarily maintaining group cohesion, trust, and long-term stability. In this view, moral judgments aren't just arbitrary or subjective preferences, but neither are they metaphysical truths that exist independently of human experience. Rather, they're deeply tied to the objective requirements for sustainable group existence.

Some moral norms seem universally valid, like prohibitions against harming children. Why? Because any society that systematically harms its offspring simply can't sustain itself. Other norms, like fairness and autonomy, emerge because they're objectively beneficial in complex groups: fairness keeps group interactions stable and predictable, and autonomy ensures individual concerns are incorporated, helping the system adapt and flourish.

So perhaps morality can be seen as having both subjective and objective dimensions:

In my recent post "Beyond Short-Termism: How δ and w Can Realign AI with Our Values", I explored this idea through the lenses of two key moral parameters—Time horizon (δ) and Moral scope (w)—showing how expanding these parameters corresponds to what we typically view as higher morality.
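As a rough, simplified illustration (just one concrete way to cash out δ and w; the exact formalisation can differ), an agent's evaluation of an outcome can be sketched as

$$V = \sum_{i} w_i \sum_{t \ge 0} \delta^{t}\, u_i(t),$$

where $u_i(t)$ is the wellbeing of being $i$ at time $t$. "Higher morality" in this functional sense then corresponds to pushing δ towards 1 (a longer time horizon) and giving non-negligible weight $w_i$ to a wider circle of beings (a broader moral scope).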

I'd be curious to hear your thoughts on this functional view of morality: do you think it might bridge the realism–anti-realism divide, or does it fail to capture some key aspect of the debate?

Both Sides Brigade @ 2025-06-26T16:47 (+1)

Morality is Objective

While I don't necessarily think these are all the best arguments out there for moral realism - and I would even reject a few points that Bentham makes, since I'm a naturalist myself - I still think it's a great introduction and I find many of the antirealist responses to these sorts of concerns completely unconvincing. 

Stans To Reason @ 2025-06-26T13:32 (+1)

When you say “objective” here, do you mean epistemic or ontological objectivity? Because it seems like you mean both at different parts of the post. But that’s not fair. You need to stick to one usage. It would be like telling someone to meet you at the bank and, when they ask “do you mean the money place or the edge of the river?”, saying “both”. 

Ville V. Kokko @ 2025-06-26T13:11 (+1)

Morality is Objective

Moral truths don't semantically work the same way as prototypical truths, and they are not objective in the same way, but they are objective in a way that is just as important. I think I will translate an essay I have explaining this and publish it here on the forums.

Chakravarthy Chunduri @ 2025-06-26T03:15 (+1)

Morality is Objective

There are many "moralities". Also, every moral framing is something we have to choose for ourselves and "breathe life into", as it were. Morality is therefore, by definition, subjective.

QuestionableDataOne @ 2025-06-25T23:36 (+1)

Morality is Objective

I used to think morality is objective. However, I think people's perspectives on what exactly is and isn't moral (the range or spectrum) are much more subjective. How much weight to give to each element? So it depends on whether we're talking about the concept or theme of morality as a "whole" or a scale of morality. Based on the scale weight, I voted somewhat disagree.

Jackson D @ 2025-06-25T21:53 (+1)

Life has meaning only because we, as extremely biased living creatures, decide to give it meaning.

Josh @ 2025-06-25T18:59 (+1)

Morality is Objective

As an atheist-leaning agnostic, I find Sam Harris’s The Moral Landscape to be the closest approximation of objective morality I have encountered, and it best captures the form of subjective morality I follow. However, I still view morality as a human construct: the flourishing of life is not objectively good, and the suffering of life is not objectively bad.

Jordan Arel @ 2025-06-25T18:32 (+1)

While there are different value functions, I believe there is a best possible value function. 

This may exist at the level of physics, something to do with qualia that we don’t understand perhaps, and I think it would be useful to have an information theory of consciousness which I have been thinking about. 

But ultimately, I believe that in theory, even if it's not at the level of physics, you can postulate a meta-social choice theory which evaluates every possible social choice theory under all possible circumstances for every possible mind or value function, and find some sort of game-theoretic equilibrium which all value functions, social choice theories for evaluating those functions, and meta-social choice theories for deciding between choice theories converge on as the universal best possible set of moral principles. I think this is fundamentally about axiology: what moral choice in any given situation creates the most value across the greatest number of minds/entities/value functions/moral theories? I believe this question has an objective answer: there is actually a best thing to do, good things to do, and bad things to do, even if we don't know what these are. Moral progress is possible, real, not a meaningless concept. 

Gumphus @ 2025-06-25T16:57 (+1)

I lean towards moral realism, but I think the reliance on intuition in a lot of arguments for moral realism is a deep methodological misstep. If a fact seems true to someone and false to someone else, the truth or falsity of that fact is not going to be enough, standing alone, to explain its intuitiveness. 

If I say “X seems true, and the truth of X is a salient explanation for why it seems that way,” this isn’t actually a good or salient explanation until we can also offer some account of why it seems false to others. Perhaps their intuitive faculties are deficient - but really all we could say for sure is that at least one person, you or them, has deficient intuitive faculties. What I find is that, generally, this leads to weird, gerrymandered epistemologies and rhetorically inert positions which depend, for their success, on appeals to intuition which simply may not be there. By my lights, this is the wrong way to argue for a claim - even a true one!

To rely on intuitions regarding contested matters, we must prescribe some methodology of checking the quality of your intuitive faculties against theirs, and even if we can do this reliably, we then need some other methodology that rules out the possibility that both intuitive faculties are somehow broken. I don’t know how to do this, I’ve never met anyone who can, and in any case, I lack the instruments to pull it off.


But casting intuition aside entirely, within this context, I think moral realism can still be salvaged. I think people have normative experiences which can ground moral claims - even if we can’t trust our intuitions that “X is good” seems true, there are experiences we have which seem good directly - namely, enjoyment. And there are experiences we have which seem bad directly - suffering. This is more than just intuition. If I put my hand on a hot stove, regardless of what I might be contemplating at any given moment, I am going to have experiences which lend themselves to the conclusion “I should move my hand from the stove.” The apparent badness of a thing which causes us suffering shifts our normative beliefs and inclines us to dislike it - it shapes our wants, and can create new likes and dislikes where there were none before. If our experiences provided purely descriptive information, this would be wholly unexplainable.
 

But all we really know for sure is that enjoyment seems good, and suffering seems bad - and it then falls to our varying metaethical approaches to explain why these seem the way they do. It doesn’t follow, deductively, that suffering is bad because it seems to be. Lots of things we experience aren’t as they seem. But suffering seems bad to everyone (even if different things cause suffering in different people), and enjoyment seems good to everyone (with the same caveat) - so badness and goodness being universal properties of suffering and enjoyment, respectively, is a plausible, simple, and salient explanation of why they seem to be. 

“Enjoyment is good,” then, is justified in a manner more akin to a scientific theory than a mathematical theorem. We need never attempt to cross the is-ought gap - we sidestep it entirely, the same way we do for claims about gravity or chairs. The goodness of enjoyment is a highly salient explanation for why it seems good. It is an excellent fit to the available evidence. The theory that enjoyment is good is simple and has predictive value - I can predict that future enjoyable experiences will incline me to believe that experiences of that sort should, all else being equal, happen more. And I can predict that future instances of enjoyment will seem good to others too.


At first glance, the main challenge to this approach is evolutionary debunking, which can be held out as an alternate hypothesis that better explains why things seem good and bad to us. If evolution fully explains the apparent badness of suffering, positing that suffering also by coincidence happens to be bad is pointless and redundant. If I show that the image of a chair is explained by a hologram, there’s no point in also positing that there just by coincidence happens to be a chair where the hologram is.


But I don’t think that evolutionary debunking actually attacks this theory at all. Evolution has tremendous explanatory power and no inherent normative conclusions, but what it explains is why we enjoy and suffer from the things we do, rather than why enjoyment and suffering seem good and bad to us. Evolution explains why, for instance, our bodies are tuned to deliver enjoyment when we eat nutritious food and why we suffer from having our limbs destroyed. Evolution explains the wiring that governs what we enjoy and suffer from. It explains why there are slight variations in how different people are wired. And further, evolution debunks any claim that the set of wiring we presently have is in any sense “correct,” since how we are wired isn’t the result of any sort of virtue-tracking process.

If a conscious person were created from scratch, as a result of some process other than evolution, evolution would make no predictions about what this person would enjoy or suffer from, nor would it make any predictions about how good or bad enjoyment/suffering would seem to them. I predict that enjoyment would still seem good to this person (though I have no clue how we might build such a person) and that suffering would seem bad to them. I would also say that, regardless of what they are wired to enjoy or suffer from, it is good to give them the things they’re wired to enjoy, bad to give them the things they’re wired to suffer from, and good to shape their wiring in a manner which promotes their long-term wellbeing. And I would say the same is true of everything that can experience - you, me, shrimp, aliens - without exception.

Kwvind @ 2025-06-25T14:58 (+1)

Well, it's different for each one, and each culture has a different way of expressing it.

idea21 @ 2025-06-25T14:54 (+1)

Morality is Objective

Human nature is objective, so human morality must be so too

Samrin Saleem @ 2025-06-25T14:44 (+1)

Morality is objective, but I think some of it changes over time based on new information we uncover and/or changing sentiments, and that change may not always be objective at first

Roman Kozhevnikov @ 2025-06-25T14:34 (+1)

Morality is partly objective. It is hardwired into our brains in terms of aversion to death and the suffering of others, all else being equal. Anyone will slow down if a dog crosses the road, unless it is an emergency.

LanceSBush @ 2025-06-25T18:38 (+1)

In metaethics, "objective" is often another way of saying that moral claims are made true by something other than stances. Even if an aversion to death were hardwired into our brains, this would not entail that morality is objective in the relevant sense of the term as it is used in metaethics.

Roman Kozhevnikov @ 2025-06-25T19:16 (+2)

I suppose we have interpreted the question in slightly different dimensions. I don't think it is a question of position. The existence of morality is an objective fact about people.

LanceSBush @ 2025-06-25T19:21 (+2)

The existence of morality certainly is an objective fact about people: it's an objective fact that people have moral values, make moral judgments, and so on. But that's not what the dispute between moral realists and antirealists is about.

Roman Kozhevnikov @ 2025-06-25T20:34 (+1)

GPT: "The debate between moral realists and antirealists concerns whether moral facts exist objectively (independently of opinions and feelings). Realists think they do, antirealists think they do not."

No part of reality is inherently good or bad. But our consciousness is the vanguard of matter. We are the only possible way for our part of the universe to perceive anything at all. The human mechanism of categorization is the only one available in our corner of the universe. Humans are moral, and so in the observable cosmos, what is bad is what humans are wired to think is bad.

JohnSMill @ 2025-06-25T14:06 (+1)

Morality is Objective

Like this slider - objectivity is a spectrum. The most subjective thing possible is a pure taste statement: 'I like ice-cream'. A purely objective statement is '1+1=2'. 

In the world of inter-subjectivity there are statements like 'Democracy is superior to dictatorship'. This has elements of both objectivity and subjectivity. 

I think morality is an intersubjective agreement (hence the influence of culture) but supported by biological roots (we possess a biological distaste for suffering and injustice, and a biological capacity for abstract reasoning). These intersubjective agreements combined with objective biological dispositions result in something which is not as objective as mathematics or natural sciences, but possesses a degree of objectivity.

Dave Banerjee 🔸 @ 2025-06-25T14:02 (+1)

Morality is Objective

Roughly speaking, I find emotivism to be the most convincing metaethical theory

tonglu @ 2025-06-25T13:56 (+1)

Morality is Objective

Infinite

Tiger Lava Lamp @ 2025-06-25T13:06 (+1)

Morality is Objective

The universe just is. Putting right and wrong on various states is fundamentally subjective. I think that given certain goals or conditions, you could get to objective morality, but those conditions are themselves subjective

ideatician @ 2025-06-25T12:40 (+1)

I'm a multi-level utilitarian with a classical notion of wellbeing: that of a time integral of the value-component of momentary consciousness. "Objectivity" I here interpret as "there being an absolute truth about it", and for this absolute truth I rely on my strong belief that the concept of omniscience is ultimately coherent, although unimplementable.
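As a rough sketch of that classical notion (writing $v(t)$ for the value-component of momentary consciousness at time $t$), wellbeing over an interval $[t_0, t_1]$ is just

$$W = \int_{t_0}^{t_1} v(t)\, dt,$$

and the claimed objectivity amounts to there being an absolute truth about $v(t)$, and hence about $W$, even though no implementable observer could ever compute it.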

Telos Defier @ 2025-06-25T11:33 (+1)

Morality is Objective

My impression (admittedly based on limited exposure and minimal formal study of Philosophy) is that moral anti-realists make the mistake of selectively applying scepticism a la Agrippa’s trilemma to moral claims that they don’t apply to non-moral claims. 

A super brief summary of my reasoning is as follows:

-The only empirical information we have is qualia

-Qualia is often valenced as ‘good’ experience and ‘bad’ experience

-Therefore ‘goodness’ and ‘badness’ exist and moral realism is true to the extent that we trust any form of empirical information whatsoever.

Important Clarification:

People asking to define terms like ‘good’ and ‘bad’ lexically (i.e. using a word-based definition rather than an experience) fall prey to Agrippa’s trilemma in the context of lexicography. 

Every word can only be lexically defined by other words and so on until we get infinite regress or circular definitions unless we tilt at some point toward non-lexical definitions e.g. semantics inferred by context. 

Learning semantics via context is how a baby first learns language. This empirical method of learning the meaning of words is how a definition can be obtained without making reference to other words. 

Thus, it simply isn’t necessary to define ‘good’ and ‘bad’ solely in terms of words as long as we can use words to direct our interlocutors to experiences they’ve had of positively and negatively valenced qualia (i.e. pleasure and pain respectively). 

We all have good and bad experiences, and thus we have as much certainty that moral realism is true as we can have about any empirical claim. Of course, Philosophical Zombies wouldn't have access to that information, so I'd forgive them for being moral sceptics… and philosophical zombies that mechanically act as if they know qualia are real are curious automata indeed!

Christian Gonzalez-Capizzi @ 2025-06-25T11:13 (+1)

Morality is Objective

Universal subjectivism (which gets most of what you want out of realism) or non-naturalism are the best options in metaethics 

Rhyss @ 2025-06-25T10:48 (+1)

"So, for instance, suppose you take a baby and hit it with great force with a hammer. Moral realism says: 1. You’re doing something wrong."

Moral realism doesn't say that hitting a baby with a hammer is wrong. Moral realism entails that there is some fact about the morality of hitting a baby with a hammer. Probably, that moral fact is that it is wrong to do this, but moral realism is not a theory about specific moral facts. It's a theory that moral facts are possible. 

This is a pedantic point, but the more commitments you unnecessarily build into moral realism, the more likely someone is to reject it. Someone might be open to there being moral facts, and yet believe that the wrongness of torturing babies isn't one of these facts. If someone like that accepted your claim that moral realists necessarily believe it is wrong to torture babies, they might think, "Oh, I guess I'm not a moral realist then." The belief that it's wrong to torture babies is a promising contender for one of the world's most popular moral beliefs. Still, the less you commit the moral realist to, the more plausible moral realism is going to seem. 

Xylix @ 2025-06-25T09:14 (+1)

Morality is Objective

I find it intuitive that there could be a small set of objective moral facts, but one much smaller than moral realist positions generally suppose, and I do not think this can be justified rationally to a large degree. I think there can be contextual moral facts (as in "rational agents in a society would agree to cooperate on problem X" or "rational agents would agree to behave in a certain way on a moral problem, given the following constraints"), but I do not think these are enough to justify an objective moral realism position.

I think the set of sensible moral views and positions is large, and thus think that morality is mostly not objective.

VeryJerry @ 2025-06-25T03:30 (+1)

Morality is Objective

I think morality is objective in the sense that there is some stable state of the universe with the maximum pleasure over time, which is the morally ideal state, but I don't think we'll ever know exactly how close we are to that ideal state. But it is still an objective fact about the territory; we just don't have an accurate map of it.

Ben b @ 2025-06-25T00:02 (+1)

Morality is Objective. I am psychologically unable to believe otherwise.

SummaryBot @ 2025-06-24T17:19 (+1)

Executive summary: This exploratory argument defends moral realism—the view that some moral truths are objective and stance-independent—by asserting that denying such truths leads to implausible and counterintuitive implications, and that our intuitive moral judgments are as epistemically justified as basic logical or perceptual beliefs.

Key points:

  1. Definition and Defense of Moral Realism: The author defines moral realism as the belief in stance-independent moral truths and argues that some moral facts (e.g., the wrongness of torture) are too intuitively compelling to be explained away as subjective or false.
  2. Critique of Anti-Realism's Consequences: Moral anti-realism, the author argues, implies that even clearly irrational behaviors (e.g., self-harm, extreme sacrifice for trivial desires) are not mistaken as long as they are desired—an implication that runs counter to ordinary moral and rational intuitions.
  3. Epistemology of Moral Beliefs: The author analogizes moral intuition to visual and logical perception, claiming that moral beliefs are justified in the same way as foundational beliefs in other domains—by intellectual appearances that seem self-evident unless strongly refuted.
  4. Rebuttal of Common Objections: The post addresses key arguments against moral realism—such as disagreement, the supposed “queerness” of moral facts, and evolutionary debunking—and contends that these objections either misunderstand objectivity or rely on assumptions inconsistent with other accepted non-physical truths (e.g., logical or epistemic norms).
  5. Moral Knowledge and Evolution: The author argues that our ability to access moral truths is best explained by evolution endowing us with rational faculties that can discover such truths, paralleling our capacity to grasp mathematical and logical facts.
  6. Theistic Perspective (Optional): As a supplementary note, the author, a theist, adds that belief in God further supports the idea that humans are equipped to discern moral truths—though this point is acknowledged to carry less weight for non-theists.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.