Discussing ethical altruism and consequentialism vs. deontology

By Richard Y Chappell🔸 @ 2025-07-14T16:44

This is a linkpost to https://philosophyandfiction.substack.com/cp/168005453

Crosspost from Philosophy and Fiction

This brief interview nicely captures some of my most distinctive ideas. - Richard Y Chappell

 

“Why should I want to be moral?” Dr. Fischelson asked. “What’s in it for me? Social scorn? Death by hemlock?”

Uri scoffed. “It’s not a matter of what’s in it for you.”

“Yes, it is,” Joshua objected. “You can’t just say, be moral. You have to say why.”

 

This excerpt from my novel is about as much as I’ve written on consequentialism vs. deontology…let’s just say I’ll take the hard problem of consciousness over such ethical conundrums any day of the week. I may not be brave enough to tackle the big questions in contemporary moral philosophy, but I’m pleased to introduce you to a philosopher who is.

“What is fundamentally worth caring about? What should we do about it?”

These are the questions Richard Yetter Chappell faces head-on.


What question initially seems trivial but reveals unexpected depths when seriously examined?

RICHARD YETTER CHAPPELL: Should we want others to act rightly?

You might initially assume that of course we should. But this is only obvious if rightness is determined by what philosophers call "agent-neutral" reasons—reasons that serve impartial goals, like promoting the common good, that are the same for everyone.

Many deontologists instead believe that rightness is "agent-relative", giving the agent in the situation special reasons or goals (e.g. to keep their own hands clean) that others needn't share. (Maybe we should each care about our own clean hands, but not about anyone else's.) If ethics is agent-relative, then impartial bystanders should want others to act wrongly whenever that would better serve agent-neutral goals, such as the common good.

I call this "the curse of deontology", and I think it provides us with strong reasons to doubt that ethics is agent-relative. If instead, as consequentialists believe, it's right to do what best promotes the common good, then we can all reasonably hope that others successfully act rightly.

Do you hold views that contradict the mainstream in your field? What are they, and why is the mainstream wrong?

RYC: Most philosophers think that utilitarianism is a deeply "counterintuitive" view, and that common sense better supports some form of deontology. I believe the opposite. My explanation of where others go wrong is that (i) they focus on assessing superficial deontic verdicts about what agents ought to do in hypothetical situations, rather than deeper telic verdicts about what we ought to prefer or hope to see happen; and (ii) they don't understand that deontic verdicts are decomposable into telic and decision-theoretic components.

Utilitarianism's distinctive claims are telic in nature, and those claims are all entirely commonsensical: of course we should prefer better outcomes! (And as the curse of deontology brings out, other views sound bizarre when they deny this.) Counterintuitive deontic verdicts emerge when you combine utilitarianism with naive instrumentalism: the view that agents should make decisions by doing whatever seems superficially likely to achieve their goals (no matter how Machiavellian). But we should reject that view of instrumental rationality as clearly incompatible with what we know about human fallibility, bias, etc. As a result, I think the "mainstream" objections to utilitarianism aren't just unconvincing; they're fundamentally confused and targeting the wrong theory.

If you had to compress your worldview into a single provocative statement, what would it be?

RYC: Vibes are no substitute for systematic thought.

If I'm allowed a more substantive follow-up, I'd add: Moral reflection should proceed via two steps. First, think about what outcomes we morally ought to prefer. Then consider what norms and ways of thinking will best help bring about a future that's more rather than less morally preferable. Afterwards, put these reflections into practice. (I like effective altruism as a serious explicit effort to implement that post-reflection step. But readers should, of course, use their own judgment.)

Are there any philosophical questions that will never be resolved? Why?

RYC: It depends what you mean by "resolved". I don't expect any of the really "big" questions to ever secure universal consensus, because there are multiple internally coherent philosophical "worldviews", and rational argument proceeds by way of identifying internal inconsistencies (premises you expect your interlocutor to accept that entail a conclusion that they currently deny). This means that once they've ironed out all their internal inconsistencies, there's no way to rationally persuade them to change worldviews.

That said, I expect any philosophical question is "resolvable" in the sense that some people can come to know the truth of the matter. They just won't be able to persuade everyone else. (I imagine most philosophers often feel themselves to be in this position! Alas, there's no externally valid test to determine whether the feeling is accurate or not in any given case…)

What do you think?