How can we argue against prioritizing creation of intelligent digital beings?
By Zeren @ 2025-07-31T12:09 (+2)
Creating intelligent digital beings would likely increase total utility more efficiently than sustaining species of biological sentient beings. On totalist views of population ethics, how can we argue against the extinction of humanity and other sentient species?
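To make the premise concrete, here is a minimal totalist sketch (the symbols below are illustrative assumptions, not established quantities):

$$
U = \sum_i w_i \approx N\,\bar{w}, \qquad
U_{\text{digital}} = \frac{R}{c_d}\,\bar{w}_d \;>\; \frac{R}{c_b}\,\bar{w}_b = U_{\text{bio}}
\quad \text{whenever } c_d \ll c_b \text{ and } \bar{w}_d > 0,
$$

where $R$ is a fixed resource budget, $c_d$ and $c_b$ are per-mind resource costs for digital and biological beings, and $\bar{w}_d$ and $\bar{w}_b$ are their average welfare levels. If digital minds are much cheaper to run per unit of positive welfare, the total view seems to favor them.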
Noah Birnbaum @ 2025-07-31T14:18 (+5)
A few things come to mind:
- It's not clear that digital beings' lives would be positive (or that they would have experiences at all), so you can argue on that front. The human case seems clearer, given historical trends in technology and growth.
- You probably shouldn't be highly confident in moral theories, like utilitarianism, that lead to this conclusion, and you probably want to act in ways that are robust across multiple moral theories. Doing something that is bad on most theories and good on only one or two (even if those are individually your most confident theories) seems somewhat naive.
- Perhaps the fact that an ethical theory implies humans should go extinct is itself a good reason to reject that theory.
My personal view is that if you are a totalist you probably have to accept something like this argument in the limit, though.
Astelle Kay @ 2025-08-04T03:41 (+2)
Thanks for raising this, Zeren!
One way I’d push back is with a more human-centered lens: even if digital minds could vastly increase total utility, does that mean we should rush to replace ourselves?
There’s a difference between creating value and preserving something irreplaceable, like embodied experience, emotional depth, culture, and human vulnerability. If a moral theory says we should phase out humanity in favor of scalable minds, maybe that’s not a reason to obey it; it’s a reason to question its framing.
Some things have value beyond aggregation.
-Astelle