An even deeper atheism

By Joe_Carlsmith @ 2024-01-11T17:28 (+25)


Matthew_Barnett @ 2024-01-11T19:36 (+12)

> We can haggle about some of the details of Yudkowsky's pessimism here... but I'm sympathetic to the broad vibe: if roughly all the power is held by agents entirely indifferent to your welfare/preferences, it seems unsurprising if you end up getting treated poorly. Indeed, a lot of the alignment problem comes down to this.

I agree with the weak claim that if literally every powerful entity in the world is entirely indifferent to my welfare, it is unsurprising if I am treated poorly. But I suspect there's a stronger claim underneath this thesis that seems more relevant to the debate, and also substantially false.

The stronger claim is: adding powerful entities to the world who don't share our values is selfishly bad, and the more such entities we add to the world, the worse our situation becomes (according to our selfish values). We know this stronger claim is likely false because, assuming we accept the deeper atheism claim that humans have non-overlapping utility functions, it would imply that ordinary population growth is selfishly bad. Think about it: by permitting ordinary population growth, we are filling the universe with entities who don't share our values. Population growth, in other words, causes our relative power in the world to decline.

Yet, I think a sensible interpretation is that ordinary population growth is not bad on these grounds. I doubt it is better, selfishly, for the Earth to have 800 million people compared to 8 billion people, even though I would have greater relative power in the first world compared to the second. [ETA: see this comment for why I think population growth seems selfishly good on current margins.]

Similarly, I doubt it is better, selfishly, for the Earth to have 8 billion humans compared to 80 billion human-level agents, 90% of which are AIs. Likewise, I'm skeptical that it is worse for my values if there are 8 billion slightly-smarter-than human AIs who are individually, on average, 9 times more powerful than humans, living alongside 8 billion humans.

(This is all with the caveat that the details here matter a lot. If, for example, these AIs have a strong propensity to be warlike, or aren't integrated into our culture, or otherwise form a natural coalition against humans, it could very well end poorly for me.)

If our argument for the inherent danger of AI applies equally to ordinary population growth, I think something has gone wrong in our argument, and we should probably reject it, or at least revise it.

Vasco Grilo @ 2024-01-15T11:38 (+2)

Nice post, Joe!

> I reject Yudkowsky's story that some particular AI will foom and become dictator-of-the-future; rather, I think there will be a multi-polar ecosystem of different AIs with different values. Thus: problem solved? Well, hmm: what values in particular? Is it all still ultimately an office-supplies thing? If so, it depends how much you like a complex ecosystem of staple-maximizers, thumb-tack-maximizers, and so on – fighting, trading, etc. "Better than a monoculture." Maybe, but how much?[9] Also, are all the humans still dead?

In my mind, there is a sense in which this last question is analogous to Neanderthals[1] asking, a few hundred thousand years ago, whether they would still be around now. They are not, but is this any significant evidence that the world has gone through a much less valuable trajectory? I do not think so. What arguably matters is whether there are still beings around with the desire and ability to increase welfare. So I would instead ask, "are all intelligent welfarists dead?", where intelligent could be interpreted as sufficiently intelligent to eventually leverage (via successors or not) the cosmic endowment to increase welfare. My question is equivalent to yours nearterm, since humans are the only intelligent welfarists now, but the answers may come apart in the next few decades thanks to (even more) intelligent sentient AI. To the extent the answers to the two questions differ, it seems important to focus on the right one.

  1. ^

    Or individuals of another species of the genus Homo. There are 12 of them besides Homo sapiens!