Prioritization Questions for Artificial Sentience

By Jamie_Harris @ 2021-10-18T14:07 (+30)

This is a linkpost to https://www.sentienceinstitute.org/blog/prioritization-questions-for-artificial-sentience

Many thanks to Janet Pauketat, Ali Ladak, Jacy Reese Anthis, Robert Long, Leonie Kößler, and Michael Aird for reviewing and providing feedback.

INTRODUCTION

Work to protect the interests of artificial sentience (AS advocacy[1]) could be very important. It could improve the lives of vast numbers of future beings and be among the most cost-effective actions we could possibly take at the present time to help others.[2] But this is uncertain: it is subject to many “crucial considerations.” Bostrom (2014a) defined a crucial consideration as:

A consideration such that if it were taken into account it would overturn the conclusions we would otherwise reach about how we should direct our efforts, or an idea or argument that might possibly reveal the need not just for some minor course adjustment in our practical endeavors but a major change of direction or priority.[3]

This blog post lists possible crucial considerations affecting whether AS advocacy seems likely to be positive on balance, as well as lesser questions affecting how we might prioritize AS relative to other promising cause areas. We include questions that affect broader categories of priorities and intervention types that include at least portions of AS advocacy: longtermism, suffering risks, and moral circle expansion.

For simplicity, the questions are phrased in binaries (e.g. Will X happen?, Can we do X?) but are best answered in terms of probabilities (How likely is it that X will happen?) and degrees (To what extent can we do X?).

We include references for where certain questions have been explored in more depth. The focus here is not on providing answers, though a key goal of Sentience Institute’s past and ongoing research is to shed light on the formulation and likely answers to these questions.

Some of these questions were also discussed on our podcast episode with Tobias Baumann.

SHOULD WE ACCEPT LONGTERMISM?

Longtermism has been defined by the Global Priorities Institute (2020) as “the view that the primary determinant of the differences in social value among actions and policies available today is the effect of those actions on the very long-term future.” Rejecting this claim could lead to a rejection of AS as a priority for the time being because, if it ever comes into existence, most artificial sentience will presumably exist in the very long-term future.[4]

Considerations affecting whether we accept or reject longtermism include:

ASSUMING LONGTERMISM, SHOULD WE FOCUS ON REDUCING SUFFERING RISKS?

If we accept longtermism, then there are many cause areas we could focus on. One cluster focuses on reducing risks of astronomical suffering in the long-term future (“s-risks”). AS advocacy seems most promising as a method of reducing s-risks, so questions that affect the promise of this broader category also affect AS.[5] Baumann (2020) summarizes relevant crucial considerations (and provides references for relevant reading). In brief, these are:

ASSUMING LONGTERMISM, SHOULD WE FOCUS ON MORAL CIRCLE EXPANSION?

AS advocacy is a form of moral circle expansion (MCE), an increase in the number of beings given moral consideration, such as legal protection.[7] Therefore, questions that affect the prioritization of MCE could also affect the prioritization of AS.

Baumann (2017) has explored arguments for and against “moral advocacy” or “values spreading,” which includes but is not limited to MCE.[8] Relevant crucial considerations implied by that post include:

Our forthcoming literature review on the causes of public opinion change provides empirical evidence on several of these questions.

 

Anthis (2018) and the commenters on that post raise a number of further crucial considerations relevant to values spreading in general:

 

Anthis (2018) and commenters also raise crucial considerations specific to MCE, some of which have been evaluated in Harris and Anthis (2019):

 

Other crucial considerations, not raised in the above posts, include:

ASSUMING LONGTERMISM AND A FOCUS ON MORAL CIRCLE EXPANSION, SHOULD WE FOCUS ON ARTIFICIAL SENTIENCE?

The questions listed above all affect whether we should focus on AS,[10] though there are many other questions specific to AS.

 

Crucial considerations relating to the scale of the problem addressed by AS advocacy include:

 

Crucial considerations relating to whether the problem is neglected and therefore whether there are likely to be low-hanging fruit for positive impact include:

 

Crucial considerations relating to how solvable the problem is include:

FOOTNOTES

[1] AS advocacy could involve requests similar to other social movements focused on moral circle expansion, such as demanding legal safeguards to protect against the exploitation of artificial sentience for labor, entertainment, or scientific research. Less ambitious goals could include encouraging attitude change, asking for symbolic commitments, or supporting relevant research.

[2] A more developed movement around AS advocacy might cost similar amounts to other social movements focused on moral circle expansion. For comparison, Harris (2021) notes that, “it seems likely that the total resources dedicated to Fair Trade nonprofits each year does not currently substantially exceed $100 million,” and Bollard’s (2020) “best estimate” of the spending by nonprofits in the farmed animal movement is $165 million per year. The case for AS advocacy potentially being especially cost-effective mostly rests on potential vast positive effects, although unusually low costs also seem plausible, e.g. if the number of potential “asks” is more limited than in other movements.

[3] Bostrom (2014a) added that, “[w]ithin a utilitarian context, one can perhaps try to explicate it as follows: a crucial consideration is a consideration that radically changes the expected value of pursuing some high-level subgoal. The idea here is that you have some evaluation standard that is fixed, and you form some overall plan to achieve some high-level subgoal. This is your idea of how to maximize this evaluation standard. A crucial consideration, then, would be a consideration that radically changes the expected value of achieving this subgoal.”

[4] If you think artificial sentience will be developed in large numbers soon, there might still be a neartermist case for prioritizing AS, similar to the neartermist case for prioritizing animal advocacy.

[5] Generalized moral circle expansion (MCE) may have value beyond reducing s-risks since an insufficiently broad moral circle might lead to a suboptimal allocation of resources even if all sentient beings experience net positive lives (or suffering is only mild/rare). However, MCE might only be comparably cost-effective to other longtermist cause areas and interventions if the arguments underlying a focus on suffering risks also hold.

[6] Of course, these questions could be broken down into many smaller sub-questions. For example, Anthis (2018) lists the following questions:

 

Aird (2020) lists a number of other sub-questions.

[7] Or to increase the extent to which those beings are given moral consideration, without necessarily affecting total numbers.

[8] The answers to these questions may depend on the specific values being encouraged (e.g. MCE vs. suffering-focused ethics, concern for farmed animals vs. artificial sentience) and the specific manner in which they are encouraged (e.g. technical academic papers vs. confrontational protests).

[9] For example, if MCE successfully influences all powerful future agents, then this may reduce the risk of large-scale suffering through blackmail and threats, since all the agents will be more averse to such outcomes. However, it could also incentivize the use of blackmail by making it a more potent weapon. If it influences some powerful agents but not others, this could increase imbalances in values, incentivizing some agents to use blackmail (and disincentivizing others). However, MCE might also decrease imbalances between powerful agents.

[10] Many of the questions that apply to values spreading or MCE might apply to AS but have answers that differ from the norm for those broader categories. For example, we might find that attitude changes relating to artificial sentience are more likely to last into the long term because persuaded individuals are less likely to hear opposing arguments on this topic than on topics like animal rights.


Newt @ 2021-10-18T23:42 (+9)

This is a great post.  

Personally, I am most interested in this topic: 

AS advocacy is a form of moral circle expansion (MCE), an increase in the number of beings given moral consideration, such as legal protection.[7] Therefore, questions that affect the prioritization of MCE could also affect the prioritization of AS.

Personally, I believe that until we have moral acceptance and genuine legal protections for sentient non-human animals, there will be significant barriers for AS.   I also believe that adding non-human animals to the moral circle can and should be a near-term achievement. 

Jim Buhler @ 2021-10-22T12:38 (+8)

Thanks for writing this Jamie!

Concerning the "SHOULD WE FOCUS ON MORAL CIRCLE EXPANSION?"  question, I think something like the following sub-question is also relevant: Will MCE lead to a "near miss" of the values we want to spread? 

Magnus Vinding (2018) argues that someone who cares about a given sentient being is by no means guaranteed to want what we think is best for that being. While he argues from a suffering-focused perspective, the problem is the same under any ethical framework. 
For instance, future people who "care" about wild animals and AS will likely care about things that have nothing to do with their subjective experiences (e.g., their "freedom" or their "right to life"), which might lead them, however well intentioned, to do things that are arguably bad (e.g., creating many faithful simulations of the Amazon rainforest). 
Even in a scenario where most people genuinely care about the welfare of non-humans, their standards for considering such welfare positive might be extremely low.