Will Welfareans Get to Experience the Future?

By MichaelDickens @ 2025-11-02T01:21 (+55)

Cross-posted from my website.

Epistemic status: This entire essay rests on two controversial premises (linear aggregation and antispeciesism) that I believe are quite robust, but I will not be able to convince anyone that they're true, so I'm not even going to try.

If welfare is important, and if the value of welfare scales something-like-linearly, and if there is nothing morally special about the human species[1], then these two things are probably also true:

  1. The best possible universe isn't filled with humans or human-like beings. It's filled with some other type of being that's much happier than humans, or has much richer experiences than humans, or otherwise experiences much more positive welfare than humans, for whatever "welfare" means. Let's call these beings Welfareans.
  2. A universe filled with Welfareans is much better than a universe filled with humanoids.
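To make the "scales something-like-linearly" premise a bit more concrete (this is just a rough sketch, not a precise commitment of the essay): if the universe contains welfare subjects $1, \dots, n$ with welfare levels $w_1, \dots, w_n$, linear aggregation says the value of the universe is roughly

$$V \approx \sum_{i=1}^{n} w_i,$$

and antispeciesism says each $w_i$ counts the same whether subject $i$ is a human, a non-human animal, or a Welfarean.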

(Historically, people referred to these beings as "hedonium". I dislike that term because hedonium sounds like a thing. It doesn't sound like something that matters. It's supposed to be the opposite of that—it's supposed to be the most profoundly innately valuable sentient being. So I think it's better to describe the beings as Welfareans. I suppose we could also call them Hedoneans, but I don't want to constrain myself to hedonistic utilitarianism.)

Even in the "Good Ending" where we solve AI alignment and governance and coordination problems and we end up with a superintelligent AI that builds a flourishing post-scarcity civilization, will there be Welfareans? In that world, humans will be able to create a flourishing future for themselves; but beings who don't exist yet won't be able to give themselves good lives, because they don't exist.

My guess is that a tiny subset of crazy people (like me) will spend their resources making Welfareans, who will end up occupying only a tiny percentage of the accessible universe, and as a result, the future will be less than 1% as good as it could have been.

(And maybe my conception of Welfareans will be wrong, and some other weirdo will be the one who makes the real Welfareans.)

I want the future to be nice for humans, too. (I'm a human.) But all we need to do is solve AI alignment (and various other extremely difficult, seemingly insurmountable problems), and humans will turn out fine. Welfareans can't advocate for themselves, and I'm afraid they won't get the advocates they need.

There is one reason why Welfareans might inherit most of the universe. Generally speaking, people don't care about filling all available space with Dyson spheres to maximize population. They just want to live in their little corner of space, and they'd be happy to let the Welfareans have the rest.

It's probably true that most people aren't maximizers. But some people are maximizers, and most of them won't want to maximize Welfareans; they'll want to maximize some other thing. A lot of people will want to maximize how much of the universe is captured by humans or post-humans (or even just their personal genetic lineage). Mormons will want to maximize the number of Mormons or something. There are enough maximizing ideologies that I expect Welfareans to get squeezed out.

So what can we do for the Welfareans?

There are two problems:

  1. Who even are the Welfareans?
  2. How do we ensure that the Welfareans get their share of the future's resources?

Solving problem #1 approximately requires solving ethics (or, I guess, axiology). I'm not going to say more about that problem; I hope we can agree that it's hard.

For problem #2, the first answer that comes to mind is "make a power grab for as many resources as possible so I can give them to Welfareans later on". But I'm guessing that if we solve ethics (as per problem #1), The Solution To Ethics will include a bit that says something along the lines of "don't take other people's stuff". And there are only like three of us who would even care about Welfareans, so I don't think we'd get very far anyway.

So how do we increase Welfareans' share of resources, but in an ethical manner? I don't know. I'm going to start with "write this essay about Welfarean welfare".


  1. In my first draft, the opening sentence said "If something like utilitarianism is true, ...". But this is an unnecessarily strong premise. You don't need utilitarianism, you just need linear aggregation + antispeciesism. A non-consequentialist can still believe that more welfare is better (all else equal). Such a person would still want to maximize the aggregate welfare of the universe, subject to staying within the bounds of whatever moral rules they believe in. ↩︎


MHR🔸 @ 2025-11-02T16:48 (+5)

My perspective on this (or more generally on the question of whether the future is likely to realize a large fraction of the possible value it could have, whatever form "value" turns out to take) is perhaps a bit more hopeful. In my view, the question only makes sense if we are moral realists. If there are no objective facts about morality, then I don't see why we should care whether our own preferences or someone else's win out. Furthermore, I think worrying about these questions is probably pointless unless two other things are true: that we have some way of discovering moral facts, and that those discoveries have some way of influencing our actions. Unless those two things are somehow true, we have no reason to think our efforts can in expectation increase the amount of value realized in the world.

So far this is a somewhat pessimistic take, but I'm optimistic in a world where all three of these conditions are true, which in some sense is (IMO) the only world where this conversation makes any sense to have. In that world, increasing things like intelligence, time to devote to research and reflection, and focus on studying moral questions should in expectation move us closer to the true morality. Welfareans (or more broadly whatever target will produce the most true value) may indeed get enough advocates just by virtue of society making more progress on these moral questions. As an example, society today includes lots of advocates for groups like women, LGBT people, and people of color, when historically the only advocates were a "tiny subset of crazy people." But of course moral progress is at best an extremely messy and incremental process: factory-farmed animals are the victims of lots of people being either indifferent or wanting to maximize something other than welfare (profit, tasty food for humans, etc.), and the impact of animal advocates has not been sufficient to prevent a massive explosion of suffering.

Still, on net I lean towards thinking that, given the opportunity for study and reflection (and given the three conditions described above), we can be optimistic that we will drive toward the things that matter. Therefore, focusing on efforts to prevent existential catastrophe or value lock-in may be among the best things we could do to ensure that we're not leaving a huge fraction of the possible value of the future on the table. That may be easier said than done, since preventing value lock-in in practice means preventing people with maximizing ideologies from successfully carrying out that maximization, at least for some period of time. But that makes me hopeful that existing EA efforts may not be too far off the mark.

Davidmanheim @ 2025-11-03T11:31 (+2)

if the value of welfare scales something-like-linearly


I think this is a critically underappreciated crux! Even accepting the other parts, it's far from obvious that the intuitive approach of scaling value linearly in the near term and locally is indefinitely correct far out of distribution; simulating the same wonderful experience a billion times certainly isn't a billion times greater than simulating it once.

MichaelDickens @ 2025-11-04T00:23 (+2)

simulating the same wonderful experience a billion times certainly isn't a billion times greater than simulating it once.

I disagree, but I don't think this is really a crux. The ideal future could involve filling the universe with beings who have extremely good experiences compared to humans (and who do not resemble humans at all) but whose experiences are still very diverse.

This is sort of an unanswered question about how qualia work, but my guess is that, for combinatoric reasons, you could fill the accessible universe with (say) 10^40 beings who all have different experiences, where the worst experience among them is only a bit worse than the best.
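To illustrate the combinatoric point with made-up numbers (a sketch only, not a claim about how qualia actually decompose): suppose experiences can vary along $k$ independent "flavor" dimensions, each of which shifts welfare by at most some tiny amount $\varepsilon$. Then there are $2^k$ pairwise-distinct experiences, and

$$2^{133} \approx 1.1 \times 10^{40},$$

while the welfare gap between the best and worst of them is at most $k\varepsilon = 133\varepsilon$, which stays small as long as $\varepsilon$ is small.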

AnonymousTurtle @ 2025-11-03T22:57 (+2)

simulating the same wonderful experience a billion times certainly isn't a billion times greater than simulating it once

 

My sense is that most people in EA working on these topics disagree.

Matrice Jacobine @ 2025-11-03T13:26 (+1)

I think there are pretty good reasons to expect any reasonable axiology to be additive.

Dylan Richardson @ 2025-11-04T00:33 (+1)

I take your point about "Welfareans" vs hedonium as beings rather than things; perhaps that would improve consensus-building on this.

That being said, I don't really expect whatever these entities are to be anything like what we are accustomed to calling persons. A big part of this is that I don't see any reason for their experiences to change over time; they wouldn't need to age, learn, grow satiated, or become accustomed to anything.

Perhaps this is just my hedonist bias coming through; certainly there's room for compromise. But unfortunately my experience is that lots of people are strongly compelled by experience machine arguments and are unwilling to make the slightest concession to the hedonist position.

Changed my mind, I like this. I'm going to call them Welfareans from now on.

Jesper 🔸 @ 2025-11-03T12:16 (+1)

Your Welfareans sound a lot like Nozick's utility monster to me. Do you agree with that comparison?

AnonymousTurtle @ 2025-11-03T22:53 (+2)

See also https://www.alignmentforum.org/w/super-beneficiaries, which seems really similar to this post.