Announcing AI Welfare Debate Week (July 1-7)
By Toby Tremlett @ 2024-06-18T08:06 (+84)
July 1-7 will be AI Welfare Debate Week on the EA Forum. We will be discussing the debate statement: "AI welfare[1] should be an EA priority[2]". The Forum team will be contacting authors who are well-versed in this topic to post, but we also welcome posts, comments, quick takes and link-posts from any Forum user who is interested. All participating posts should be tagged with the AI Welfare Debate Week tag.
We will be experimenting with a banner where users can mark how strongly they agree or disagree with the debate statement, and a system that uses the mind-changes you record to produce a list of the week's most influential posts.
Should AI welfare be an EA priority?
AI welfare (the capacity of digital minds to feel pleasure, pain, happiness, suffering, satisfaction, frustration, or other morally significant welfare states) appears in many of the best and worst visions of the future. If we consider the value of the future from an impartial welfarist perspective, and if digital minds of comparable moral significance to humans are far easier to create than humans, then the majority of future moral patients may be digital. Even if they don't make up the majority of minds, the total number of digital minds in the future could be vast.
The most tractable period to influence the future treatment of digital minds may be limited. We may have decades or less to advocate against the creation of digital minds (if that were the right thing to do), and perhaps not much longer than that to advocate for proper consideration of the welfare or rights of digital minds if they are created.
Therefore, gaining a better understanding of the likely paths in front of us, including the ways in which the EA community could be involved, is crucial. The sooner, the better.
My hopes for this debate
Take these all with a pinch of salt; the debate is for you, and these are my (Toby's) opinions.
- I'd like to see discussion focus on digital minds and AI welfare rather than AI in general.
- There will doubtless be valuable discussion comparing artificial welfare to other causes, but the most interesting arguments are likely to focus on the merits or demerits of this cause itself. In other words, it'd be less interesting (for me at least) to see familiar arguments that one cause should dominate EA funding or that another cause should not be funded by EA, even though both would be ways to push towards agreeing or disagreeing with the debate statement.
- I'd rather we didn't spend too high a percentage of the debate on the question of whether AI will ever be sentient, although we will have to decide how to deal with the uncertainty here.
FAQs
How does the banner work?
The banner will show the distribution of the EA Forum's opinion on the debate question. Users can place their icon anywhere on the axis to indicate their opinion, and can move it as many times as they like during the week.
Some users might prefer not to see the distribution of the Forum's opinion on the question until the end of the week, so as not to bias their own vote. For this reason, you must click "view results" on the banner in order to see other users' votes.
Voting on the banner is non-anonymous. You can reset your vote by hovering over your icon and clicking the "x".
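For the curious, here's a minimal sketch of a vote model matching the behaviour described above. This is not the Forum's actual implementation; the class, method names, and the [-1, 1] axis range are all hypothetical.

```python
# Hypothetical sketch of the debate banner's vote model, as described above:
# votes are continuous positions on an agree/disagree axis, non-anonymous,
# movable any number of times, resettable, and hidden until requested.

from dataclasses import dataclass, field


@dataclass
class DebateBanner:
    # Maps user id -> position on the axis, from -1.0 (strongly disagree)
    # to 1.0 (strongly agree). The numeric range is an assumption.
    votes: dict[str, float] = field(default_factory=dict)

    def place_vote(self, user_id: str, position: float) -> None:
        """Place or move a user's icon; repeat calls overwrite the old position."""
        if not -1.0 <= position <= 1.0:
            raise ValueError("position must be between -1.0 and 1.0")
        self.votes[user_id] = position

    def reset_vote(self, user_id: str) -> None:
        """Remove a user's vote (the 'x' shown on hover)."""
        self.votes.pop(user_id, None)

    def view_results(self) -> list[float]:
        """Return the distribution only on request ('view results'),
        so other users' votes stay hidden by default."""
        return sorted(self.votes.values())
```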
How are the "most influential posts" calculated?
Under the banner, you'll be able to see a leaderboard of "most influential posts". When you change your mind and move your avatar on the debate slider, you will be prompted to select the debate week posts which influenced you. These posts will be assigned points based on how far you moved your avatar. You can vote as many times as you like, but only your largest mind change will be recorded for each cited post. The post with the most points will be at the top of the most influential posts list.
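As a rough illustration of the scoring rule just described, here is a sketch in code. It assumes (my reading, not the Forum team's actual algorithm) that a cited post earns points equal to the distance the avatar moved, and that only the largest such move counts per user-post pair; all names are hypothetical.

```python
# Hypothetical sketch of the "most influential posts" scoring described above.

from collections import defaultdict


class InfluenceLeaderboard:
    def __init__(self) -> None:
        # (user_id, post_id) -> largest recorded mind-change for that pair.
        self.largest_change: dict[tuple[str, str], float] = defaultdict(float)

    def record_move(self, user_id: str, old_pos: float, new_pos: float,
                    cited_posts: list[str]) -> None:
        """Called when a user moves their avatar and cites influencing posts."""
        change = abs(new_pos - old_pos)
        for post_id in cited_posts:
            key = (user_id, post_id)
            # Only the user's largest mind-change counts for each cited post.
            self.largest_change[key] = max(self.largest_change[key], change)

    def most_influential(self) -> list[tuple[str, float]]:
        """Posts ranked by total points across all users; the top entry
        heads the 'most influential posts' list."""
        totals: dict[str, float] = defaultdict(float)
        for (_user, post_id), change in self.largest_change.items():
            totals[post_id] += change
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```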
Do I have to write in the style of a debate?
No. The aim of this debate week is to elicit interesting content which changes the audience's mind. This could be in the form of a debate-style argument for accepting or rejecting the debate proposition. However, the most influential posts could also be link-posts, book reviews, or bullet-point lists of the cruxes in the debate. Don't feel constrained to a form which doesn't fit the content you'd like to contribute.
Further Reading
- A good post by Rob Long to scour for debate cruxes: Key questions about artificial sentience: an opinionated guide.
- Another post focused on cruxes, or crucial considerations, which are applicable to this debate: prioritization questions for artificial sentience.
- Jeff Sebo's Forum post making the case for AI Welfare Research and sketching some directions it could go: Principles for AI Welfare Research.
- Both Rob Long and Jeff Sebo have discussed digital minds and AI sentience on the 80,000 Hours Podcast. They also co-wrote a paper on Moral Considerations for AI systems by 2030.
- More potentially crucial considerations:
- Carl Shulman and Nick Bostrom's paper on Digital Minds, including those which are super-beneficiaries, i.e. minds that would have more potential wellbeing than us, even if there were fewer of them.
- An even larger list of considerations can be found in Shulman and Bostrom's Propositions concerning Digital Minds and Society.
- On the risks of misattributing sentience to non-sentient AI: How worried should I be about a childless Disneyland? + (p-)Zombie Universe: another X-risk.
- The Forum Tag for Artificial Sentience.
This list is incomplete; you can help by expanding it. I'll edit suggestions into the post.
- ^
By AI welfare, I mean the potential wellbeing (pain, pleasure, but also frustration, satisfaction, etc.) of future artificial intelligence systems.
- ^
By "EA priority" I mean that 5% of (unrestricted, i.e. open to EA-style cause prioritisation) talent and 5% of (unrestricted, i.e. open to EA-style cause prioritisation) funding should be allocated to this cause.
SiebeRozendal @ 2024-06-25T07:40 (+8)
I like this!
Relevant context for those unaware: supposedly, Good Ventures (and by extension OpenPhil) has recently decided to pull out of funding artificial sentience.
Can you give some examples of topics that qualify and some that don't qualify as "EA priorities"?
I feel like for the purpose of getting the debate started, the vague question is fine. For the purpose of measuring agreement/disagreement and actually directly debating the statement, it's potentially problematic. Does EA as a whole have priorities? How much of a priority should it be?
Toby Tremlett @ 2024-06-27T09:44 (+7)
Interesting distinction, thank you!
I'm thinking of a chart like this, which represents descriptive or revealed "EA Priorities"
(Link to spreadsheet here, and original Forum post here.) The question is (roughly) whether Artificial Welfare should take up 5% of that right-hand bar or not, and similarly for the EA talent distribution (which I don't have a graph to hand for).
As a more general point: I think we can say that EA has priorities, insofar as funders and individuals, in their self-reported EA decisions, clearly have priorities. We will be arguing about prescriptive priorities (what EAs should do), while paying attention to descriptive priorities (what EAs already do).
Leo @ 2024-07-04T09:13 (+5)
This is a great experiment. But I think it would have been much clearer if the question was phrased as "What percentage of talent+funding should be allocated to AI welfare?", with the banner showing a slider from 0% to 100%. As it is now, if I strongly disagree with allocating 5% and strongly agree with 3% or whatever, I feel like I should still place my icon on the extreme left of the line. This would make it look like I'm all against this cause, which wouldn't be the case.
Toby Tremlett @ 2024-07-04T09:18 (+2)
Good point (I address similar concerns here). For the time being, I would personally treat a half agree as some percentage under 5%, and explain your vote in the discussion thread if you want to make sure that people know what you mean.
Leo @ 2024-07-04T10:09 (+1)
I think I would prefer to strongly disagree, because I don't want my half agree to be read as if I agreed to some extent with the 5% statement. This is because "half agree" is ambiguous here. People could think that it means 1) something around 2.5% of funding/talent or 2) that 5% could be ok with some caveats. This should be clarified to be able to know what the results actually mean.
Toby Tremlett @ 2024-07-04T10:22 (+4)
Makes sense Leo, thanks. I don't want to change anything very substantial about the banner after so many users have voted, but I'll bear this in mind for next time.
finm @ 2024-07-03T13:10 (+5)
I just want to register the worry that the way you've operationalised "EA priority" might not line up with a natural reading of the question.
The footnote on "EA priority" says:
By "EA priority" I mean that 5% of (unrestricted, i.e. open to EA-style cause prioritisation) talent and 5% of (unrestricted, i.e. open to EA-style cause prioritisation) funding should be allocated to this cause.
This is a bit ambiguous (in particular, over what timescale), but if it means something like "over the next year" then that would mean finding ways to spend ~$10 million on AI welfare by the end of 2025, which you might think is just practically very hard to do even if you thought that more work on current margins is highly valuable. Similar things could have been said for e.g. pandemic prevention or AI governance in the early days!
JP Addison @ 2024-06-18T21:16 (+4)
Maybe halfway relevant: An Argument for Why the Future May Be Good by @Ben_West.