Will Welfareans Get to Experience the Future?

By MichaelDickens @ 2025-11-02T01:21 (+65)

Cross-posted from my website.

Epistemic status: This entire essay rests on two controversial premises (linear aggregation and antispeciesism) that I believe are quite robust, but I will not be able to convince anyone that they're true, so I'm not even going to try.

If welfare is important, and if the value of welfare scales something-like-linearly, and if there is nothing morally special about the human species[1], then these two things are probably also true:

  1. The best possible universe isn't filled with humans or human-like beings. It's filled with some other type of being that's much happier than humans, or has much richer experiences than humans, or otherwise experiences much more positive welfare than humans, for whatever "welfare" means. Let's call these beings Welfareans.
  2. A universe filled with Welfareans is much better than a universe filled with humanoids.

(Historically, people referred to these beings as "hedonium". I dislike that term because hedonium sounds like a thing. It doesn't sound like something that matters. It's supposed to be the opposite of that—it's supposed to be the most profoundly innately valuable sentient being. So I think it's better to describe the beings as Welfareans. I suppose we could also call them Hedoneans, but I don't want to constrain myself to hedonistic utilitarianism.)

Even in the "Good Ending" where we solve AI alignment and governance and coordination problems and we end up with a superintelligent AI that builds a flourishing post-scarcity civilization, will there be Welfareans? In that world, humans will be able to create a flourishing future for themselves; but beings who don't exist yet won't be able to give themselves good lives, because they don't exist.

My guess is that a tiny subset of crazy people (like me) will spend their resources making Welfareans, who will end up occupying only a tiny percentage of the accessible universe, and as a result, the future will be less than 1% as good as it could have been.

(And maybe my conception of Welfareans will be wrong, and some other weirdo will be the one who makes the real Welfareans.)

I want the future to be nice for humans, too. (I'm a human.) But all we need to do is solve AI alignment (and various other extremely difficult, seemingly-insurmountable problems), and humans will turn out fine. Welfareans can't advocate for themselves, and I'm afraid they won't get the advocates they need.

There is one reason why Welfareans might inherit most of the universe. Generally speaking, people don't care about filling all available space with Dyson spheres to maximize population. They just want to live in their little corner of space, and they'd be happy to let the Welfareans have the rest.

It's probably true that most people aren't maximizers. But some people are maximizers, and most of them won't want to maximize Welfareans; they'll want to maximize some other thing. A lot of people will want to maximize how much of the universe is captured by humans or post-humans (or even just their personal genetic lineage). Mormons will want to maximize the number of Mormons or something. There are enough maximizing ideologies that I expect Welfareans to get squeezed out.

So what can we do for the Welfareans?

There are two problems:

  1. Who even are the Welfareans?
  2. How do we ensure that the Welfareans get their share of the future's resources?

Solving problem #1 approximately requires solving ethics (or, I guess, axiology). I'm not going to say more about that problem; I hope we can agree that it's hard.

For problem #2, the first answer that comes to mind is "make a power grab for as many resources as possible so I can give them to Welfareans later on". But I'm guessing that if we solve ethics (as per problem #1), The Solution To Ethics will include a bit that says something along the lines of "don't take other people's stuff". And there are only like three of us who would even care about Welfareans, so I don't think we'd get very far anyway.

So how do we increase Welfareans' share of resources, but in an ethical manner? I don't know. I'm going to start with "write this essay about Welfarean welfare".


  1. In my first draft, the opening sentence said "If something like utilitarianism is true, ...". But this is an unnecessarily strong premise. You don't need utilitarianism, you just need linear aggregation + antispeciesism. A non-consequentialist can still believe that more welfare is better (all else equal). Such a person would still want to maximize the aggregate welfare of the universe, subject to staying within the bounds of whatever moral rules they believe in. ↩︎
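To make the footnote's premise concrete, here is a minimal sketch (my illustration, not the author's formalism): "linear aggregation" means total value is the plain sum of individual welfares, so each additional being's welfare counts fully no matter how large the population already is, and "antispeciesism" means the sum applies the same weight to every being regardless of species. The welfare numbers are made up for illustration.

```python
def linear_total(welfares):
    """Total value under linear aggregation: a simple, unweighted sum.

    Antispeciesism is reflected in the fact that the sum never asks
    what species a welfare value came from.
    """
    return sum(welfares)

# Hypothetical welfare levels (arbitrary units, chosen for illustration):
humans = [1.0] * 100       # 100 humans at welfare 1.0 each
welfareans = [50.0] * 100  # 100 much-happier beings at welfare 50.0 each

# Under these two premises, the Welfarean population is worth 50x
# the equally sized human population:
assert linear_total(welfareans) == 50 * linear_total(humans)
```

The essay's conclusion falls out of the sum's linearity: if each Welfarean contributes far more welfare than each human, a universe filled with Welfareans dominates a universe filled with humanoids by the same factor.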


MHR🔸 @ 2025-11-02T16:48 (+5)

My perspective on this (or more generally on the question of whether the future is likely to involve realizing a large fraction of the possible value it could have, whatever form it turns out "value" takes) is perhaps a bit more hopeful. In my view, the question only makes sense if we are moral realists. If there are no objective facts about morality, then I don't see why we should care whether our own preferences or someone else's win out. Furthermore, I think worrying about these questions is probably pointless unless two other things are true: that we have some way of discovering moral facts and that those discoveries have some way of influencing our actions. Unless those two are somehow true, we have no reason to think our efforts can in expectation increase the amount of value realized in the world. 

So far this is a somewhat pessimistic take, but I'm optimistic in a world where all three of these conditions are true, which in some sense is (IMO) the only world where this conversation makes any sense to have. In that world, we should expect that increasing the amount of things like intelligence, time to devote to research/reflection, and focus on studying moral questions in expectation leads toward getting closer to the true morality. Welfareans (or more broadly whatever target will produce the most true value) may indeed get enough advocates just by virtue of society making more progress on these moral questions. As an example, society today includes lots of advocates for groups like women, LGBT people, people of color etc., when historically the only advocates were a "tiny subset of crazy people." But of course moral progress is at best an extremely messy and incremental process - factory-farmed animals are the victims of lots of people being either indifferent or wanting to maximize something other than welfare (profit, tasty food for humans etc.), and the impact of animal advocates has not been sufficient to prevent a massive explosion of suffering.

Still, on net I lean towards thinking that given the opportunity for study and reflection (and given the three conditions described above), we can be optimistic that we will drive toward the things that matter. Therefore, focusing on efforts to prevent existential catastrophe or value lock-in may be among the best things we could do to ensure that we're not leaving a huge fraction of the possible value of the future on the table. That may be easier said than done, since preventing value lock-in in practice means preventing people with maximizing ideologies from successfully carrying out that maximization, at least for some period of time. But that makes me hopeful that existing EA efforts may not be too far off the mark.

Jordan Arel @ 2025-11-06T17:28 (+4)

I really enjoyed this! Very important crux for how well the future goes. You may be interested to know that Nick Bostrom talks about this; he calls them super-beneficiaries.

Davidmanheim @ 2025-11-03T11:31 (+2)

if the value of welfare scales something-like-linearly


I think this is a critically underappreciated crux! Even accepting the other parts, it's far from obvious that the intuitive approach of scaling value linearly in the near term and locally remains correct far out-of-distribution; simulating the same wonderful experience a billion times certainly isn't a billion times greater than simulating it once.

MichaelDickens @ 2025-11-04T00:23 (+3)

simulating the same wonderful experience a billion times certainly isn't a billion times greater than simulating it once.

I disagree but I don't think this is really a crux. The ideal future could involve filling the universe with beings who have extremely good experiences compared to humans (and do not resemble humans at all) but their experiences are still very diverse.

This is sort of an unanswered question about how qualia work, but my guess is that, for combinatoric reasons, you could fill the accessible universe with (say) 10^40 beings who all have different experiences, where the worst experience out of all of them is only a bit worse than the best.
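The combinatoric point above can be checked with a toy calculation (my illustration, and the "binary feature" framing is an assumption, not something the comment specifies): if an experience is described by n independent binary features, there are 2**n distinct possible experiences, so only a modest number of features is needed to give 10^40 beings pairwise-different experiences.

```python
import math

# How many independent binary experience-features are needed so that
# 10**40 beings can each have a distinct experience?
target = 10**40

# Smallest n with 2**n >= 10**40:
n_features = math.ceil(math.log2(target))

print(n_features)                        # 133
assert 2**n_features >= target           # 133 features suffice
assert 2**(n_features - 1) < target      # 132 do not
```

If flipping any single feature changes welfare only slightly, then even the worst of these combinations can sit close to the best, which is the spirit of the claim: diversity is combinatorially cheap, so near-optimal experiences need not be identical.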

Davidmanheim @ 2025-11-10T07:46 (+2)

That's a fair point, and I agree that it leads to a very different universe.

At that point, however (assuming we embrace moral realism and an absolute moral value of some non-subjective definition of qualia, which seems incoherent), it also seems to lead to a functionally unsolvable coordination problem for maximization across galaxies.

AnonymousTurtle @ 2025-11-03T22:57 (+2)

simulating the same wonderful experience a billion times certainly isn't a billion times greater than simulating it once

My sense is that most people in EA working on these topics disagree.

Davidmanheim @ 2025-11-10T07:47 (+2)

I don't think that's at all obvious, though it could be true.

AnonymousTurtle @ 2025-11-11T09:24 (+2)

I agree with you, as do most people outside of EA, but I believe almost everyone in EA working on these topics disagrees.

Davidmanheim @ 2025-11-11T11:48 (+2)

I meant that I don't think it's obvious that most people in EA working on this would agree. 

I do think it's obvious that most people overall would agree, though most would not agree or be unsure that a simulation matters at all. It's even very unclear how to count person-experiences overall, as Johnston's Personite paper argues: https://www.jstor.org/stable/26631215 and I'll also point to the general double-counting problem: https://link.springer.com/article/10.1007/s11098-020-01428-9 and suggest that it could apply.

AnonymousTurtle @ 2025-11-11T16:01 (+2)

Interesting. Could you point to anyone in EA who does not agree with the additive view and works in this field?

tobycrisford 🔸 @ 2025-11-05T07:12 (+1)

It sounds like MichaelDickens' reply is probably right, that we don't need to consider identical experiences in order for this argument to go through.

But the question of whether identical copies of the same experience have any additional value is a really interesting one. I used to feel very confident that they have no value at all. I'm now a lot more uncertain, after realising that this view seems to be in tension with the many worlds interpretation of quantum mechanics: https://www.lesswrong.com/posts/bzSfwMmuexfyrGR6o/the-ethics-of-copying-conscious-states-and-the-many-worlds 

Davidmanheim @ 2025-11-10T08:00 (+2)

I recently discussed this on twitter with @Jessica_Taylor, and think that there's a weird claim involved that collapses into believing either that distance changes moral importance, or that thicker wires in a computer increase its moral weight. (Similar to the cutting dominos in half example in that post, or the thicker pencil, but less contrived.) Alternatively, it confuses the question by claiming that identical beings at time t_0 are morally different because they differ at time t_n - which is a completely different claim!

I think the many worlds interpretation confuses this by making it about causally separated beings which are, in my view, either only a single being, or different because they will diverge. And yes, different beings are obviously counted more than once, but that's explicitly ignoring the question. (As a reductio, if we asked "Is 1 the same as 1?" the answer is yes, they are identical platonic numbers, but if we instead ask "is 1 the same as 1 plus 1?" the answer is no, they are different because the second is... different, by assumption!)

Matrice Jacobine @ 2025-11-03T13:26 (+1)

I think there are pretty good reasons to expect any reasonable axiology to be additive.

Davidmanheim @ 2025-11-10T08:07 (+2)

I need to write a far longer response to that paper, but I'll briefly respond (and flag to @Christian Tarsney) that I think my biggest crux is that I think they picked weak objections to causal domain restriction, and that far better objections apply. Secondarily, for axiological weights, the response about egalitarian views leading to rejection of different axiological weights seems to be begging the question, and the next part ignores the fact that any acceptable response to causal domain restriction also addresses the issue of large background populations.

Brody McManus @ 2025-11-07T00:30 (+1)

I haven't thought too deeply about it, and it would be a convenient outcome (which makes me skeptical), but it seems plausible to me that as technology improves (e.g. gene editing, brain emulation), humans would try to apply it to become more like Welfareans. At least personally, if there were some relatively safe way to change my neurochemistry to derive x% more pleasure from pleasurable experiences, all else equal, I'm confident I'd seriously consider the option.

I would assume the default case is that the tech would improve total hedonistic or preference satisfaction value for any given individual. Therefore my statement only holds in the cases where credence is given to these value types. Even if moral realism is true, and hedonism/PF were the only contributing values, I'd still guess some deliberate steering would be necessary to reach Welfarean status and not fall short and only be considered a... Hufarean maybe?

Jeffrey Kursonis @ 2025-11-05T15:09 (+1)

I hope you have a billion years to evolve your Welfareans. Since humans already have that behind us, and our evolution has been going pretty well (and faster in the last 200 years), maybe in a hundred or so we’ll be getting pretty close to your imagination of Welfareans.

My thesis is that the core driving bit of humans is love, but that most of what love is has not yet fully unfolded, like a flower not yet fully blossomed. So we just keep pushing forward, especially on benefitting others (like EA), and the unfolding will continue.

teatonglu @ 2025-11-05T13:41 (+1)

We should become our own version of Welfareans at our own pace, living peacefully with Welfareans - working toward a win-win theory.

Dylan Richardson @ 2025-11-04T00:33 (+1)

I take your point about "Welfareans" vs hedonium as beings rather than things; perhaps that would improve consensus-building on this.

That being said, I don't really expect whatever these entities are to be anything like what we are accustomed to calling persons. A big part of this is that I don't see any reason for their experiences to change over time; they wouldn't need to be aging or learning or growing satiated or accustomed.

Perhaps this is just my hedonist bias coming through -  certainly there's room for compromise. But unfortunately my experience is that lots of people are strongly compelled by experience machine arguments and are unwilling to make the slightest concession to the hedonist position. 

Changed my mind, I like this. I'm going to call them Welfareans from now on.

Jesper 🔸 @ 2025-11-03T12:16 (+1)

Your Welfareans sound a lot like Nozick's utility monster to me. Do you agree with that comparison?

AnonymousTurtle @ 2025-11-03T22:53 (+2)

See also https://www.alignmentforum.org/w/super-beneficiaries which seems really similar to this post