Announcing AI Welfare Debate Week (July 1-7)

By Toby Tremlett🔹 @ 2024-06-18T08:06 (+84)

July 1-7 will be AI Welfare Debate Week on the EA Forum. We will be discussing the debate statement: "AI welfare[1] should be an EA priority[2]". The Forum team will be contacting authors who are well-versed in this topic to post, but we also welcome posts, comments, quick takes and link-posts from any Forum user who is interested. All participating posts should be tagged with the AI Welfare Debate Week tag. 

We will be experimenting with a banner on which users can mark how strongly they agree or disagree with the debate statement, and a system that uses the posts you cite as changing your mind to produce a list of the most influential posts. 

[Illustration found on Better Images of AI]

Should AI welfare be an EA priority?

AI welfare — the capacity of digital minds to feel pleasure, pain, happiness, suffering, satisfaction, frustration, or other morally significant welfare states — appears in many of the best and worst visions of the future. If we consider the value of the future from an impartial welfarist perspective, and if digital minds of comparable moral significance to humans are far easier to create than humans, then the majority of future moral patients may be digital. Even if they don’t make up the majority of minds, the total number of digital minds in the future could be vast. 

The window in which we can most tractably influence the future treatment of digital minds may be short. We may have decades or less to advocate against the creation of digital minds (if that were the right thing to do), and perhaps not much longer than that to advocate for proper consideration of the welfare or rights of digital minds if they are created. 

Therefore, gaining a better understanding of the likely paths in front of us, including the ways in which the EA community could be involved, is crucial. The sooner, the better. 

My hopes for this debate

Take these all with a pinch of salt: the debate is for you, and these are just my (Toby's) opinions. 

FAQs

How does the banner work?

The banner will show the distribution of the EA Forum’s opinion on the debate question. Users can place their icon anywhere on the axis to indicate their opinion, and can move it as many times as they like during the week. 

Some users might prefer not to see the distribution of the Forum's opinion on the question until the end of the week, so as not to bias their own vote. For this reason, you must click "view results" on the banner in order to see other users' votes. 

Voting on the banner is non-anonymous. You can reset your vote by hovering over your icon and clicking the "x".

How are the “most influential posts” calculated? 

Under the banner, you’ll be able to see a leaderboard of “most influential posts”. When you change your mind and move your avatar on the debate slider, you will be prompted to select the debate week posts which influenced you. These posts will be assigned points based on how far you moved your avatar. You can vote as many times as you like, but only your largest mind change will be recorded for each cited post. The post with the most points will be at the top of the most influential posts list. 
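
For concreteness, here is a minimal sketch of that scoring rule in Python. The Forum's actual implementation and point formula aren't specified here, so the event format, the function name, and the assumption that points are simply proportional to the distance moved are all illustrative:

```python
from collections import defaultdict

def influence_leaderboard(vote_events):
    """Rank posts by total influence points.

    vote_events: iterable of (user_id, distance_moved, cited_post_ids),
    one entry per re-vote. Assumed format, not the Forum's actual schema.
    """
    # For each (user, post) pair, keep only the user's largest recorded
    # mind change that cited the post.
    best = defaultdict(float)
    for user, distance, cited_posts in vote_events:
        for post in cited_posts:
            best[(user, post)] = max(best[(user, post)], distance)

    # Sum each post's points across users and sort, highest first.
    scores = defaultdict(float)
    for (_user, post), distance in best.items():
        scores[post] += distance
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: Alice moves 0.2 then 0.5, citing post "a" both times;
# only her largest move (0.5) counts towards "a".
print(influence_leaderboard([
    ("alice", 0.2, ["a"]),
    ("alice", 0.5, ["a", "b"]),
    ("bob", 0.3, ["b"]),
]))  # [('b', 0.8), ('a', 0.5)]
```

Under this rule, citing the same post on multiple re-votes can never inflate its score beyond your single largest recorded change of mind.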

Do I have to write in the style of a debate?

No. The aim of this debate week is to elicit interesting content which changes the audience’s mind. This could be in the form of a debate-style argument for accepting or rejecting the debate proposition. However, the most influential posts could also be link-posts, book reviews, or bullet-point lists of the cruxes in the debate. Don’t feel constrained to a form which doesn’t fit the content you’d like to contribute. 

Further Reading

This list is incomplete; you can help by expanding it. I'll edit suggestions into the post. 

  1. ^ By AI welfare, I mean the potential wellbeing (pain and pleasure, but also frustration, satisfaction, etc.) of future artificial intelligence systems. 

  2. ^ By "EA priority" I mean that 5% of (unrestricted, i.e. open to EA-style cause prioritisation) talent and 5% of (unrestricted, i.e. open to EA-style cause prioritisation) funding should be allocated to this cause. 


SiebeRozendal @ 2024-06-25T07:40 (+8)

I like this!

Relevant context for those unaware: supposedly, Good Ventures (and by extension OpenPhil) has recently decided to pull out of funding artificial sentience.

Can you give some examples of topics that qualify and some that don't qualify as "EA priorities"?

I feel like for the purpose of getting the debate started, the vague question is fine. For the purpose of measuring agreement/disagreement and actually directly debating the statement, it's potentially problematic. Does EA as a whole have priorities? How much of a priority should it be?

Toby Tremlett @ 2024-06-27T09:44 (+7)

Interesting distinction, thank you!
I'm thinking of a chart like this, which represents descriptive or revealed "EA priorities":

[Chart: EA funding allocation by cause area, from the linked spreadsheet]

(Link to spreadsheet here, and original Forum post here.) The question is (roughly) whether AI welfare should take up 5% of that right-hand bar, and similarly for the EA talent distribution (for which I don't have a graph to hand). 

As a more general point, I think we can say that EA has priorities, insofar as funders and individuals, in their self-reported EA decisions, clearly have priorities. We will be arguing about prescriptive priorities (what EAs should do), but paying attention to descriptive priorities (what EAs already do). 

Leo @ 2024-07-04T09:13 (+5)

This is a great experiment. But I think it would have been much clearer if the question was phrased as "What percentage of talent+funding should be allocated to AI welfare?", with the banner showing a slider from 0% to 100%. As it is now, if I strongly disagree with allocating 5% and strongly agree with 3% or whatever, I feel like I should still place my icon on the extreme left of the line. This would make it look like I'm all against this cause, which wouldn't be the case.

Toby Tremlett @ 2024-07-04T09:18 (+2)

Good point (I address similar concerns here). For the time being, I'd personally treat a half agree as some percentage under 5%, and explain your vote in the discussion thread if you want to make sure that people know what you mean. 

Leo @ 2024-07-04T10:09 (+1)

I think I would prefer to strongly disagree, because I don't want my half agree to be read as if I agreed to some extent with the 5% statement. This is because "half agree" is ambiguous here. People could think that it means 1) something around 2.5% of funding/talent or 2) that 5% could be OK with some caveats. This should be clarified so that we can know what the results actually mean.

Toby Tremlett @ 2024-07-04T10:22 (+4)

Makes sense, Leo, thanks. I don't want to change anything very substantial about the banner after so many users have voted, but I'll bear this in mind for next time. 

finm @ 2024-07-03T13:10 (+5)

I just want to register the worry that the way you've operationalised "EA priority" might not line up with a natural reading of the question. 

The footnote on “EA priority” says:

By “EA priority” I mean that 5% of (unrestricted, i.e. open to EA-style cause prioritisation) talent and 5% of (unrestricted, i.e. open to EA-style cause prioritisation) funding should be allocated to this cause.

This is a bit ambiguous (in particular, over what timescale), but if it means something like “over the next year” then that would mean finding ways to spend ≈$10 million on AI welfare by the end of 2025, which you might think is just practically very hard to do even if you thought that more work on current margins is highly valuable. Similar things could have been said for e.g. pandemic prevention or AI governance in the early days!

JP Addison @ 2024-06-18T21:16 (+4)

Maybe halfway relevant: An Argument for Why the Future May Be Good by @Ben_West.