Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development
By Jan_Kulveit @ 2025-01-30T17:07 (+36)
This is a linkpost to https://gradual-disempowerment.ai/
Matthew_Barnett @ 2025-01-30T21:48 (+10)
Do you have any thoughts on the argument I recently gave that gradual and peaceful human disempowerment could be a good thing from an impartial ethical perspective?
Historically, it is common for groups to decline in relative power as a downstream consequence of economic growth and technological progress. As a chief example, the aristocracy declined in influence as a consequence of the industrial revolution. Yet this transformation is generally not considered a bad thing, for two reasons. First, since the world is not zero-sum, individual aristocrats did not necessarily experience declining well-being despite the relative disempowerment of their class as a whole. Second, the world does not merely consist of aristocrats, but rather contains a multitude of moral patients whose agency deserves respect from the perspective of an impartial utilitarian. Indeed, non-aristocrats were largely made better off by industrial development.
Applying this analogy to the present situation with AI, my argument is that even if AIs pursue separate goals from humans and increase in relative power over time, they will not necessarily make individual humans worse off, since the world is not zero sum. In other words, there is ample opportunity for peaceful and mutually beneficial trade with AIs that do not share our utility functions, which would make both humans and AIs better off. Moreover, AIs themselves may be moral patients whose agency should be given consideration. Just as most of us think it is good that human children are allowed to grow, develop into independent people, and pursue their own goals—as long as this is done peacefully and lawfully—agentic AIs should be allowed to do the same. There seems to be a credible possibility of a flourishing AI civilization in the future, even if humans are relatively disempowered, and this outcome could be worth pushing for.
From a preference utilitarian perspective, it is quite unclear that we should prioritize human welfare at all costs. The boundary between biological minds and silicon-based minds seems quite arbitrary from an impartial point of view, making it a fragile foundation for developing policy. There are much more plausible moral boundaries—such as the distinction between sentient minds and non-sentient minds—which do not cut cleanly between humans and AIs. Therefore, framing the discussion solely in terms of human disempowerment seems like a mistake to me.
Ian Turner @ 2025-01-30T22:23 (+4)
> there is ample opportunity for peaceful and mutually beneficial trade with AIs that do not share our utility functions
What would humans have to offer AIs for trade in this scenario, where there are "more competitive machine alternatives to humans in almost all societal functions"?
> as long as this is done peacefully and lawfully
What do these words even mean in an ASI context? If humans are relatively disempowered, this would also presumably extend to the use of force and legal contexts.
Matthew_Barnett @ 2025-01-31T07:56 (+3)
> What would humans have to offer AIs for trade in this scenario, where there are "more competitive machine alternatives to humans in almost all societal functions"?
In a lawful regime, humans would have the legal right to own property beyond just their own labor. This means they could possess assets—such as land, businesses, or financial investments—that they could trade with AIs in exchange for goods or services. This principle is similar to how retirees today can sustain themselves comfortably without working. Instead of relying on wages from labor, they live off savings, government welfare, or investments. Likewise, in a future where AIs play a dominant economic role, humans could maintain their well-being by leveraging their legally protected ownership of valuable assets.
> What do these words even mean in an ASI context? If humans are relatively disempowered, this would also presumably extend to the use of force and legal contexts.
In the scenario I described, humanity's protection would be ensured through legal mechanisms designed to safeguard individual human autonomy and well-being, even in a world where AIs collectively surpass human capabilities. These legal structures could establish clear protections for humans, ensuring that their rights, freedoms, and control over their own property remain intact despite the overwhelming combined power of AI systems.
This arrangement is neither unusual nor unprecedented. Consider your current situation as an individual in society. Compared to the collective power of all other humans combined, you are extremely weak. If the rest of the world suddenly decided to harm you, they could easily overpower you—killing you or taking your possessions with little effort.
Yet, in practice, you likely do not live in constant fear of this possibility. The primary reason is that, despite being vastly outmatched in raw power, you are integrated into a legal and social framework that protects your rights. Society as a whole coordinates to maintain legal structures that safeguard individuals like you from harm. For instance, if you live in the United States, you are entitled to due process under the law, and you are protected from crimes like murder and theft by legal statutes that are actively enforced.
Similarly, even if AI systems collectively become more powerful than humans, they could be governed by collective legal mechanisms that ensure human safety and autonomy, just as current legal systems protect individuals from the vastly greater power of society-in-general.
Ian Turner @ 2025-02-05T22:41 (+4)
I don't understand how you think these legal mechanisms would actually serve to bind superintelligent AIs. Or to put it another way, could chimpanzees or dolphins have established a legal mechanism that would have prevented human incursion into their habitat? If not, how is this hypothetical situation different?
Regarding the idea of trade — doesn't this basically assume that humans will get a return on capital that is at least as good as the AIs' return on capital? If not, wouldn't the AIs eventually end up owning all the capital? And wouldn't we expect superintelligent AIs to be better than humans at managing capital?