A crux against artificial sentience work for the long-term future
By Jordan Arel @ 2025-05-18T21:40 (+11)
A primary reason given for the importance of AI sentience/AI welfare/AI rights as a cause area is that most minds in the future will be digital, so it is essential to ensure such minds are treated well. The arguments offered for why most minds will be digital are that digital minds will be far better suited to outer space, and that many orders of magnitude more of them can be run on the resources required to run a single equivalent biological mind.
I think this is quite a large mistake in reasoning. While a very small fraction of future intelligence may be doing the work of settling space and solving problems as they arise, I strongly suspect that within the first few million years we will have solved basically all the problems that need solving, and that almost all intelligence and computing power will then go toward digital emulations devoted to enjoyment and leisure. Further space settlement will be highly optimized and require very little additional computation, or may be done in a way that simply avoids sentience altogether.
So while it may be true that most minds will be digital, it also seems that whether these minds deserve rights/welfare/moral concern will be quite obvious, because those minds will be the future of humanity itself: most people will have uploaded themselves into digital form for the much greater abundance and opportunity it provides.
It also seems likely we will be acutely aware of the degree to which digital intelligence used for instrumental purposes within the simulation is sentient. We will have first-hand experience of digital sentience ourselves, and advanced intelligence will be able to study it through fine-grained interpretability and qualia research to understand exactly what generates qualia, valence, preferences, and so on.
It is possible that emulated humans, despite all this knowledge, will still treat instrumental AI within the simulation badly. But because we will have such a profound understanding of what we are doing, and the ability to directly empathize with and even experience it ourselves, it seems much more likely that in the long run we will find ways of avoiding or minimizing harm to artificially sentient processes.
Another reason mistreatment of instrumental AI within the simulation seems unlikely to be a large problem is that we will be strongly incentivized to minimize instrumental processes, so that more computational resources can be devoted to emulations of leisure and enjoyment.
While it may still be true that how we treat AIs could end up being the most significant moral catastrophe of the near-term future, I don’t think it is likely to be an existential catastrophe, since the minds mistreated in the near term are likely to be an infinitesimally small fraction of all future digital minds.
Note that there may be other instrumental reasons for AI sentience work, such as slowing down AI progress for AI safety reasons, but I think we should be clear if that is why we are doing such work. (Of course, this presents a significant conflict, as being open about this alternative motive may be highly detrimental to the persuasiveness of the work.)