Prioritization Questions for Artificial Sentience
By Jamie_Harris @ 2021-10-18T14:07 (+30)
This is a linkpost to https://www.sentienceinstitute.org/blog/prioritization-questions-for-artificial-sentience
Many thanks to Janet Pauketat, Ali Ladak, Jacy Reese Anthis, Robert Long, Leonie Kößler, and Michael Aird for reviewing and providing feedback.
INTRODUCTION
Work to protect the interests of artificial sentience (AS advocacy[1]) could be very important. It could improve the lives of vast numbers of future beings and be among the most cost-effective actions we could possibly take at the present time to help others.[2] But this is uncertain: it is subject to many “crucial considerations.” Bostrom (2014a) defined a crucial consideration as:
A consideration such that if it were taken into account it would overturn the conclusions we would otherwise reach about how we should direct our efforts, or an idea or argument that might possibly reveal the need not just for some minor course adjustment in our practical endeavors but a major change of direction or priority.[3]
This blog post lists possible crucial considerations affecting whether AS advocacy seems likely to be positive on balance, as well as lesser questions affecting how we might prioritize AS relative to other promising cause areas. We include questions that affect broader categories of priorities and intervention types that include at least portions of AS advocacy: longtermism, suffering risks, and moral circle expansion.
For simplicity, the questions are phrased in binaries (e.g. Will X happen?, Can we do X?) but are best answered in terms of probabilities (How likely is it that X will happen?) and degrees (To what extent can we do X?).
We include references for where certain questions have been explored in more depth. The focus here is not on providing answers, though a key goal of Sentience Institute’s past and ongoing research is to shed light on the formulation and likely answers to these questions.
Some of these questions were also discussed on our podcast episode with Tobias Baumann.
SHOULD WE ACCEPT LONGTERMISM?
Longtermism has been defined by the Global Priorities Institute (2020) as “the view that the primary determinant of the differences in social value among actions and policies available today is the effect of those actions on the very long-term future.” Rejecting this claim could lead to a rejection of AS as a priority for the time being because, if it ever comes into existence, most artificial sentience will presumably exist in the very long-term future.[4]
Considerations affecting whether we accept or reject longtermism include:
- What obligations do we have to future beings? (Arrhenius 2000)
- When evaluating interventions, should we take into account the numerous (long-term) unintended, indirect effects or only consider the (short-term) intended, direct effects? (GPI 2020)
- Should we “adopt a zero rate of pure time preference” in evaluating interventions? (GPI 2020)
- Should we focus on actions with higher expected value but lower certainty, or lower expected value but higher certainty? (Wilkinson 2020 and Tomasik 2015; see the sketch after this list)
- Can we reliably influence the long-term future? (Harris and Anthis 2019, Tarsney 2019, and Hurford 2013)
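To make the expected-value question above concrete, here is a minimal sketch with purely hypothetical numbers (the probabilities and payoffs are illustrative assumptions, not estimates from this post or the cited sources). It shows how a speculative intervention with a tiny chance of success can still have a much higher expected value than a near-certain one:

```python
# Illustrative comparison of two hypothetical interventions.
# All numbers below are invented for the sake of the example.

def expected_value(p_success: float, value_if_success: float) -> float:
    """Expected value of an intervention with a single success/failure outcome."""
    return p_success * value_if_success

# A "safe" near-term intervention: very likely to help a modest number of beings.
safe = expected_value(p_success=0.95, value_if_success=1_000)

# A speculative longtermist intervention: tiny chance of influencing vast numbers.
speculative = expected_value(p_success=0.001, value_if_success=10_000_000)

print(f"Near-certain intervention: EV = {safe:,.0f}")        # EV = 950
print(f"Speculative intervention:  EV = {speculative:,.0f}")  # EV = 10,000
```

On these made-up numbers, the speculative option has roughly ten times the expected value despite a 0.1% chance of success; whether to prefer it is then a question about how to treat low probabilities and uncertainty, which is the tension Wilkinson (2020) and Tomasik (2015) discuss.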
ASSUMING LONGTERMISM, SHOULD WE FOCUS ON REDUCING SUFFERING RISKS?
If we accept longtermism, then there are many cause areas we could focus on. One cluster focuses on reducing risks of astronomical suffering in the long-term future (“s-risks”). AS advocacy seems most promising as a method of reducing s-risks, so questions that affect the promise of this broader category also affect AS.[5] Baumann (2020) summarizes relevant crucial considerations (and provides references for relevant reading). In brief, these are:
- “How much moral weight [should] we give to reducing (severe, large-scale) suffering or other harms, compared to other moral goals, such as the creation of additional happy lives or the promotion of greater happiness of individuals that are already well-off”?
- How much suffering will exist in the future? How much happiness will exist? How easy or difficult will it be to affect these outcomes?[6]
- What proportion of expected future suffering lies in “worst case” scenarios?
ASSUMING LONGTERMISM, SHOULD WE FOCUS ON MORAL CIRCLE EXPANSION?
AS advocacy is a form of moral circle expansion (MCE), an increase in the number of beings given moral consideration, such as legal protection.[7] Therefore, questions that affect the prioritization of MCE could also affect the prioritization of AS.
Baumann (2017) has explored arguments for and against “moral advocacy” or “values spreading,” which includes but is not limited to MCE.[8] Relevant crucial considerations implied by that post include:
- Does advocating for a value system increase the number of people with these values?
- Do persuaded individuals take action in support of their new values, e.g. further advocacy?
- Do the actions of persuaded individuals tend to align with the intentions of the original advocates?
- Will the values that are spread be durable or will they revert to an equilibrium?
- Will the values that are spread be rendered irrelevant, e.g. by human extinction?
- Do value changes influence technological and economic developments?
- Do efforts to spread values encourage equal or greater efforts to spread opposite values?
- Do efforts to spread values encourage moral reflection or competitiveness and partisanship?
- Does values spreading have diminishing returns? I.e. can a small, convinced minority wield outsized influence?
Our forthcoming literature review on the causes of public opinion change provides empirical evidence on several of these questions.
Anthis (2018) and the commenters on that post raise a number of further crucial considerations relevant to values spreading in general:
- Will human descendants “find the correct morality (in the sense of moral realism, finding these mind-independent moral facts)” and converge towards it, e.g. during a period of long reflection?
- How well can reflection processes (e.g. coherent extrapolated volition) solve the problem of determining optimal values, and will they be affected by current values?
- Will values be locked-in, e.g. because of developments in AI? If so, when?
- Will the values of some people, such as AI researchers, have a disproportionate influence on the far future? Can they be influenced by targeted interventions?
- Do the values that would improve the quality of the far future converge with the values that would improve the quality of the near future?
Anthis (2018) and commenters also raise crucial considerations specific to MCE, some of which have been evaluated in Harris and Anthis (2019):
- Will the moral circle expand to reach all sentient beings?
- Are economic growth and other long-term, indirect factors more important determinants of the size and shape of moral circles than direct MCE efforts?
- Do human biases incline us to work on MCE or on other cause areas?
Other crucial considerations, not raised in the above posts, include:
- Will there be competing powerful agents in the far future and what effects will MCE have on the interactions between such agents?[9] (Torges 2021)
- Do changes in attitudes towards one type of sentient being affect attitudes towards other sentient beings? (Aird 2021 and Ladak et al. 2021)
- Does successful advocacy for one type of sentient being build capacity for advocacy for other types of sentient beings or will the movement dissipate once its narrower goals are achieved?
- Will the moral circle expand too far, leading to suboptimal resource allocation? (Darling 2016)
- Is the tractability of MCE decreased if the intended beneficiaries cannot advocate for their own interests? (Baker 2020)
ASSUMING LONGTERMISM AND A FOCUS ON MORAL CIRCLE EXPANSION, SHOULD WE FOCUS ON ARTIFICIAL SENTIENCE?
The questions listed above all affect whether we should focus on AS,[10] though there are many other questions specific to AS.
Crucial considerations relating to the scale of the problem addressed by AS advocacy include:
- Are current artificial entities sentient? (Tomasik 2014a, Anthis 2018)
- Will future artificial entities be sentient? (Reggia 2013, Francken et al. 2021)
- Will artificial sentience suffer in practice? (Tomasik 2011, Sotala and Gloor 2017)
- How many sentient artificial beings will exist? What proportion of future sentient lifeforms will be artificial? What proportion of suffering will artificial sentience account for? (Tomasik 2014b, Brauner and Grosse-Holz 2018)
- How would AS advocacy affect the trajectory of academic work related to artificial sentience? E.g. would it lead to new ideas and foci or just reinforce the current ones?
- What effects would AS advocacy have on AI designers and researchers? E.g. would it polarize these communities? Would it slow down AI safety research?
- What effects would AS advocacy have on the credibility and resources of other movements with which it is associated (e.g. animal advocacy, effective altruism)?
Crucial considerations relating to whether the problem is neglected, and therefore whether there is likely to be low-hanging fruit for positive impact, include:
- Will an AS advocacy movement develop anyway?
- Will ongoing discussion of “robot rights” and related topics (e.g. in academia, in sci-fi) extend to include other categories of artificial sentience, such as suffering subroutines?
- Are the plausible “asks” that advocates could make meaningfully different from adjacent work that is already being done, e.g. animal advocacy, consciousness research? (Lima et al. 2020, Owe and Baum 2021)
- Will artificial sentience be autonomous, capable of rational decision-making, or possess other characteristics beyond sentience that might affect (the perception of) moral obligations towards it or its capacity to advocate for its own interests? (Hanson 2016, Campa 2016)
- Will artificial sentience be created directly by humans, or in some other way that affects (the perception of) moral obligations towards it? (Bostrom 2014b)
Crucial considerations relating to how solvable the problem is include:
- Do common biases and attitudes like speciesism, substratism, anthropomorphism, scope insensitivity, and short-termism make AS advocacy (in)tractable? (Harris and Anthis 2021)
- Can the trajectory (e.g. development, spread, and regulation) of new technologies be influenced in its early stages by thoughtful actors? (Leung 2019, Mohorčich and Reese 2019)
- Are there any “asks” that advocates could make of the institutions that they target that would benefit artificial sentience? (Faville forthcoming, Harris 2020)
- How much leverage would thoughtful, effectiveness-focused advocates have over a nascent AS advocacy movement? (Harris and Anthis 2021)
- Is there sufficient interest in this topic to secure the necessary funding for relevant work?
FOOTNOTES
[1] AS advocacy could involve requests similar to those of other social movements focused on moral circle expansion, such as demanding legal safeguards to protect against the exploitation of artificial sentience for labor, entertainment, or scientific research. Less ambitious goals could include encouraging attitude change, asking for symbolic commitments, or supporting relevant research.
[2] A more developed AS advocacy movement might cost amounts similar to those of other social movements focused on moral circle expansion. For comparison, Harris (2021) notes that “it seems likely that the total resources dedicated to Fair Trade nonprofits each year does not currently substantially exceed $100 million,” and Bollard’s (2020) “best estimate” of the spending by nonprofits in the farmed animal movement is $165 million per year. The case for AS advocacy being especially cost-effective rests mostly on its potentially vast positive effects, although unusually low costs also seem plausible, e.g. if the number of potential “asks” is more limited than in other movements.
[3] Bostrom (2014a) added that, “[w]ithin a utilitarian context, one can perhaps try to explicate it as follows: a crucial consideration is a consideration that radically changes the expected value of pursuing some high-level subgoal. The idea here is that you have some evaluation standard that is fixed, and you form some overall plan to achieve some high-level subgoal. This is your idea of how to maximize this evaluation standard. A crucial consideration, then, would be a consideration that radically changes the expected value of achieving this subgoal.”
[4] If you think artificial sentience will be developed in large numbers soon, there might still be a neartermist case for prioritizing AS, similar to the neartermist case for prioritizing animal advocacy.
[5] Generalized moral circle expansion (MCE) may have value beyond reducing s-risks since an insufficiently broad moral circle might lead to a suboptimal allocation of resources even if all sentient beings experience net positive lives (or suffering is only mild/rare). However, MCE might only be comparably cost-effective to other longtermist cause areas and interventions if the arguments underlying a focus on suffering risks also hold.
[6] Of course, these questions could be broken down into many smaller sub-questions. For example, Anthis (2018) lists the following questions:
- How likely is it that powerful beings in the far future “will use large numbers of less powerful sentient beings, such as for recreation (e.g. safaris, war games), a labor force (e.g. colonists to distant parts of the galaxy, construction workers), scientific experiments, threats, (e.g. threatening to create and torture beings that a rival cares about), revenge, justice, religion, or even pure sadism?”
- How likely is it that, “technology and efficiency will remove the need for powerless, high-suffering, instrumental moral patients?”
- How likely is it that human descendants “will optimize their resources for happiness (i.e. create hedonium) relative to optimizing for suffering (i.e. create dolorium)?”
- Will evolutionary forces continue to shape the capacities and experiences of sentient beings?
Aird (2020) lists a number of other sub-questions.
[7] Or to increase the extent to which those beings are given moral consideration, without necessarily affecting total numbers.
[8] The answers to these questions may depend on the specific values being encouraged (e.g. MCE vs. suffering-focused ethics, concern for farmed animals vs. artificial sentience) and the specific manner in which they are encouraged (e.g. technical academic papers vs. confrontational protests).
[9] For example, if MCE successfully influences all powerful future agents, then this may reduce the risk of large-scale suffering through blackmail and threats, since all the agents will be more averse to such outcomes. However, it could also incentivize the use of blackmail by making it a more potent weapon. If it influences some powerful agents but not others, this could increase imbalances in values, incentivizing some agents to use blackmail (and disincentivizing others). However, MCE might also decrease imbalances between powerful agents.
[10] Many of the questions that apply to values spreading or MCE might apply to AS but have answers that differ from the norm for those broader categories. For example, we might find that attitude changes relating to artificial sentience are more likely to last into the long term because persuaded individuals are less likely to hear opposing arguments on this topic than on topics like animal rights.
Newt @ 2021-10-18T23:42 (+9)
This is a great post.
Personally, I am most interested in this topic:
AS advocacy is a form of moral circle expansion (MCE), an increase in the number of beings given moral consideration, such as legal protection.[7] Therefore, questions that affect the prioritization of MCE could also affect the prioritization of AS.
Personally, I believe that until we have moral acceptance and genuine legal protections for sentient non-human animals, there will be significant barriers for AS. I also believe that adding non-human animals to the moral circle can and should be a near-term achievement.
Jim Buhler @ 2021-10-22T12:38 (+8)
Thanks for writing this Jamie!
Concerning the "SHOULD WE FOCUS ON MORAL CIRCLE EXPANSION?" question, I think something like the following sub-question is also relevant: Will MCE lead to a "near miss" of the values we want to spread?
Magnus Vinding (2018) argues that someone who cares about a given sentient being is by no means guaranteed to want what we think is best for that being. While he argues from a suffering-focused perspective, the problem is the same under any ethical framework.
For instance, future people who "care" about wild animals and AS will likely care about things that have nothing to do with their subjective experiences (e.g., their "freedom" or their "right to life"), which might lead them to do things that are arguably bad (e.g., creating a lot of faithful simulations of the Amazon rainforest), although well-intentioned.
Even in a scenario where most people genuinely care about the welfare of non-humans, their standards for considering such welfare positive might be incredibly low.