Moral Obligation and Moral Opportunity

By Alice Blair @ 2025-05-14T21:05

This is a linkpost to https://www.lesswrong.com/posts/AmwgNS3ybwF8Eovho/moral-obligation-and-moral-opportunity

This concept and terminology were spawned out of a conversation a few years ago with my friend Skyler. I finally decided to write it up. Any mistakes here are my own.


Every once in a while, I find myself at yet another rationalist/EA/whatever-adjacent social event. Invariably, someone walks up to me:

Hi, I'm Bob. I'm pretty new here. What do you work on?

Hi Bob, I'm Alice. I work on preventing human extinction from advanced AI. If you'd like, I'm happy to talk a bit more about why I think this is important.

Bob is visibly nervous.

Yeah, I've already gotten the basic pitch on AI safety, and it seems like it makes sense. People here all seem to work on such important things compared to me. I feel a sort of moral obligation to help out, but I'm stressed out about it and don't know where to start.

Alice is visibly puzzled, then Bob is puzzled that Alice is puzzled.

I'm not sure I understand this "moral obligation" thing? Nobody's forcing me to work on AI safety, it's just what I decided to do. I could've chosen to work on music or programming or a million other things instead. Can you explain what you mean without using the word "obligation"?

Well, things like "I'm going to save the world from extinction from AI" or "I'm going to solve the suffering of billions of farmed animals" are really big and seem pretty clearly morally right to do. I'm not doing any of those things, but I feel an oblig- hmm... I feel like something's wrong with me if I don't work on these issues.

I do not think anything is necessarily wrong with you if you don't work on AI safety or any other of these EA causes. Don't get me wrong, I think they're really important and I love when there are more people helping out. I just use a very different frame to look at this whole thing.

Before running into any of these ideas, I was going about my life, picking up all sorts of opportunities as I went: there's $5 on the ground? I pick it up. There's a cool conference next month? I apply. So when I heard that I plausibly lived in a world where humans go extinct from AI, I figured that owning up to it doesn't make it worse, and I looked at my opportunities. I get the chance to learn from a bunch of smart people to try and save the world? Of course I take the chance, that sounds so cool.

My point here is that you're socially and emotionally allowed to not take that opportunity, just like you're allowed to not pick up $5 or not apply to conferences. I think it's probably good for people to pick up $5 when they see it or help out with AI safety if they can, but it's their opportunity to accept or decline.

This feels like approximately the same thing as before? Under the moral obligation frame, people look at me negatively if I don't do the Super Highly Moral thing, and under the moral opportunity frame you tell me I have a choice but only look at me positively if I do the Super Highly Moral thing? Isn't this just the same sort of social pressure, but you say something about respecting personal agency?

Well, I'm not actually that judgmental; I'll look at you pretty positively unless you do something Definitely Wrong. But that's not the point. The point is that these two framings make a huge emotional difference when used as norms for a personal or group culture. Positive reinforcement is more effective than positive punishment because it tells someone exactly what to do instead of just what not to do. Reinforcement is also just a more emotionally pleasant stimulus, which goes a long way.

Let's look at this a different way: say that my friend Carol likes to watch TV and play video games and not much else. The moral obligation frame looks at Carol and finds her clearly in the moral wrong, lounging around while there are important things to be doing. The moral opportunity frame looks at her and sees a person doing her own thing in a way that doesn't hurt other people, and that's morally okay.

These two frames still seem weirdly similar, like in the "moral opportunity" frame you just shifted all options to be a bit more morally good so that everything becomes okay. But ultimately both frames still think working on saving the world is better than watching TV. I see what you're saying about emotions, but this still feels like some trick is being played on my sense of morality.

That's a reasonable suspicion. I think the math of this sort of shifting works out; I really don't think there's any trick here. Ultimately it's your choice how you want to interface with your emotions. I find that people are much more likely to throw their mind away when faced with something big and scary that feels like an obligation, compared with when they feel like an explorer with so many awesome opportunities around.

It's sad to live in a world that could use so much saving, and dealing with that is hard. There's no getting around that emotional difficulty except by ignoring the world you live in. Conditional on the world we live in, though, I'd much rather live in a culture that frames things as moral opportunities than moral obligations.


I frame this as a conversation with a newcomer, but I also see the moral obligation frame implicit in a lot of experienced EAs, especially those who are going through some EA burnout. The cultural pieces making up this post mostly already existed across the EA-sphere (and I've tried to link to them where possible), but I haven't seen them collected in this way before, nor have I seen this particular concept boundary drawn.