Announcing the Open Philanthropy AI Worldviews Contest
By Jason Schukraft, Peter Favaloro @ 2023-03-10T02:33 (+137)
This is a linkpost to https://www.openphilanthropy.org/open-philanthropy-ai-worldviews-contest/
Update: We've now chosen the winning entries; you can read them here.
We are pleased to announce the 2023 Open Philanthropy AI Worldviews Contest.
The goal of the contest is to surface novel considerations that could influence our views on AI timelines and AI risk. We plan to distribute $225,000 in prize money across six winning entries. This is the same contest we preannounced late last year, which is itself the spiritual successor to the now-defunct Future Fund competition. Part of our hope is that our (much smaller) prizes might encourage people who already started work for the Future Fund competition to share it publicly.
The contest deadline is May 31, 2023. All work posted for the first time on or after September 23, 2022 is eligible. Use this form to submit your entry.
Prize Conditions and Amounts
Essays should address one of these two questions:
Question 1: What is the probability that AGI is developed by January 1, 2043?[1]
Question 2: Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?
Essays should be clearly targeted at one of the questions, not both.
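To make the conditional structure of Question 2 explicit, here is a minimal sketch; the numbers are placeholders for illustration only, not contest answers, panelist credences, or Open Phil estimates. The contest itself only asks for the conditional term.

```python
# Minimal sketch of how the conditional probability in Question 2 composes with a
# timeline credence. All numbers are illustrative placeholders, not estimates.
p_agi_by_2070 = 0.6                # unconditional: P(AGI developed by 2070)
p_catastrophe_given_agi = 0.2      # Question 2: P(existential catastrophe | AGI by 2070)

# Chain rule: P(catastrophe and AGI by 2070) = P(AGI by 2070) * P(catastrophe | AGI by 2070)
p_overall = p_agi_by_2070 * p_catastrophe_given_agi
print(f"Overall credence in loss-of-control catastrophe via AGI by 2070: {p_overall:.2f}")
```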
Winning essays will be determined by the extent to which they substantively inform the thinking of a panel of Open Phil employees. There are several ways an essay could substantively inform the thinking of a panelist:
- An essay could cause a panelist to change their central estimate of the probability of AGI by 2043 or the probability of existential catastrophe conditional on AGI by 2070.
- An essay could cause a panelist to change the shape of their probability distribution for AGI by 2043 or existential catastrophe conditional on AGI by 2070, which could have strategic implications even if it doesn’t alter the panelist’s central estimate (see the illustrative sketch after this list).
- An essay could clarify a concept or identify a crux in a way that made it clearer what further research would be valuable to conduct (even if the essay doesn’t change anybody’s probability distribution or central estimate).
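As a concrete illustration of the second point, here is a minimal sketch with made-up numbers (they are illustrative assumptions, not panelist or Open Phil credences): two distributions over AGI arrival times can share the same central estimate while assigning different probability to arrival by 2043, which is the kind of shape change that can matter strategically.

```python
# Illustrative only: the bins and probabilities below are invented for this sketch,
# not Open Phil or panelist estimates.
bins = ["pre-2035", "2035-2043", "2044-2060", "post-2060 or never"]

concentrated = [0.02, 0.18, 0.60, 0.20]  # mass clustered around the central estimate
fat_tailed   = [0.12, 0.18, 0.40, 0.30]  # similar center, more weight in both tails

for name, dist in [("concentrated", concentrated), ("fat-tailed", fat_tailed)]:
    assert abs(sum(dist) - 1.0) < 1e-9   # each distribution sums to 1
    p_by_2043 = dist[0] + dist[1]        # probability mass on arrival by 2043
    print(f"{name:>12}: P(AGI by 2043) = {p_by_2043:.2f}")

# Both distributions put their median arrival in the 2044-2060 bin, yet they imply
# P(AGI by 2043) of 0.20 vs 0.30: a shape change without a change in central estimate.
```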
We will keep the composition of the panel anonymous so that participants don’t tailor their work too closely to the beliefs of any one person. The panel includes representatives from both our Global Health & Wellbeing team and our Longtermism team. Open Phil’s published body of work on AI[2] broadly represents the views of the panel.
Panelist credences on the probability of AGI by 2043 range from ~10% to ~45%. Conditional on AGI being developed by 2070, panelist credences on the probability of existential catastrophe range from ~5% to ~50%.
We will award a total of six prizes across three tiers:
- First prize (two awards): $50,000
- Second prize (two awards): $37,500
- Third prize (two awards): $25,000
Eligibility
- Submissions must be original work, published for the first time on or after September 23, 2022 and before 11:59 pm EDT May 31, 2023.
- All authors must be 18 years or older.
- Submissions must be written in English.
- No official word limit — but we expect to find it harder to engage with pieces longer than 5,000 words (not counting footnotes and references).
- Open Phil employees and their immediate family members are ineligible.
- The following groups are also ineligible:
- People who are residing in, or nationals of, Puerto Rico, Quebec, or countries or jurisdictions that prohibit such contests by law
- People who are specifically sanctioned by the United States or based in a US-sanctioned country (North Korea, Iran, Russia, Myanmar, Afghanistan, Syria, Venezuela, and Cuba at time of writing)
- You can submit as many entries as you want, but you can only win one prize.
- Co-authorship is fine.
- See here for additional details and fine print.
Submission
Use this form to submit your entries. We strongly encourage (but do not require) that you post your entry on the EA Forum and/or LessWrong. However, if your essay contains infohazardous material, please do not post the essay publicly.
Note that submissions will be hosted on a Google server and viewable by Open Phil staff. We don’t think that (m)any submissions will warrant more security than this. However, if you believe that your submission merits a more secure procedure, reach out to AIWorldviewsContest@openphilanthropy.org, and we will make appropriate arrangements.
Judging Process and Criteria
There will be three rounds of judging.
Round 1: An initial screening panel will evaluate all submitted essays by blind grading to determine whether each essay is a good-faith entry. All good-faith entries will advance to Round 2.
Round 2: Out of the good-faith entries advancing from Round 1, a panel of judges will select at least twenty-four finalists.
Round 3: Out of the finalists advancing from Round 2, the judges will select two first-place entries, two second-place entries, and two third-place entries.
In Rounds 2 and 3, the judges will make their decision using the criteria described below:
- The extent to which an essay uncovers considerations that change a judge’s beliefs about the probability of AGI arriving by 2043 or the threat that AGI systems might pose. (67%)
- The extent to which an essay clarifies the underlying concepts that ought to inform one’s views about the probability of AGI arriving by 2043 or the threat that AGI systems might pose. (33%)
Questions?
Please email AIWorldviewsContest@openphilanthropy.org with any questions, comments, or concerns.
- ^
By “AGI” we mean something like “AI that can quickly and affordably be trained to perform nearly all economically and strategically valuable tasks at roughly human cost or less.” AGI is a notoriously thorny concept to define precisely. What we’re actually interested in is the potential existential threat posed by advanced AI systems. To that end, we welcome submissions that are oriented around related concepts, such as transformative AI, human-level AI, or PASTA.
- ^
This includes the research published on our website, as well as material from Ajeya Cotra, Holden Karnofsky, Joe Carlsmith, and Tom Davidson.
David Thorstad @ 2023-03-12T19:06 (+31)
Would you consider incorporating a broader panel of judges? OpenPhil, and effective altruists more generally, tend to have quite strong and niche views about the future of artificial intelligence. This contest would have broader credibility if it were evaluated from a set of perspectives more consistent with, and representative of, the field as a whole.
Jason Schukraft @ 2023-03-13T12:16 (+30)
Hi David,
Thanks for your comment. I am also concerned about groupthink within homogenous communities. I hope this contest is one small push against groupthink at Open Phil. By default, I do, unfortunately, expect most of the submissions to come from people who share the same basic worldview as Open Phil staff. And for submissions that come from people with radically different worldviews, there is the danger that we fail to recognize an excellent point because we are less familiar with the stylistic and epistemic conventions within which it is embedded.
For these sorts of reasons, we did explicitly consider including non-Open Phil judges for the contest. Ultimately, we decided that didn’t make sense for this use case. We are, after all, hoping that submissions update our thinking, and it’s harder for an outside judge to represent our point of view.
But this contest is not the only way we are stress-testing our thinking. For example, I’m involved in another project in which we are engaging directly with smart people who disagree with us about AI risk. We hope that as a result of that adversarial collaboration, we can generate a consensus of cruxes so that we have a better handle on how new developments ought to change our credences. I hope to be able to share more details on that project over the summer.
If you want to chat more about groupthink concerns, shoot me a DM. I believe it’s a somewhat underappreciated worry within EA.
David Mathers @ 2023-03-14T16:45 (+11)
You could of course commit to acting on some kind of judgment of some diverse group you think worth deferring to, rather than acting on your own opinion. One way to understand what David Thorstad is asking (which he might or might not endorse) is why you don't do that, given it would (allegedly) mean acting on a more-likely-to-be-correct opinion rather than a less-likely-to-be-correct one. From that point of view, it's just missing the point to say 'we're trying to get our opinion updated', because you shouldn't be using your own opinions, rather than some properly diverse group's opinions, to set policy in general.
Larks @ 2023-03-12T19:52 (+23)
The objective of the contest isn't to farm prestige or credibility with some hypothetical third party, it's to inform OpenPhil's work, and it seems very likely OpenPhil is by far the best judge of that.
The goal of the contest is to surface novel considerations that could influence our views on AI timelines and AI risk.
David Thorstad @ 2023-03-12T20:58 (+9)
There is a growing consensus among social scientists that diversity of approaches and perspectives is essential to reaching truth and avoiding bias. Most theorists now believe that deliberation among homogenous groups is likely to lead to groupthink, polarization, extremism, and other forms of failed group deliberation. They emphasize that the risk is especially strong in self-selecting groups, as well as in groups where discourse is heavily concentrated on the internet.
Many social scientists would tend to think that effective altruists fit most of the risk factors for failures of group deliberation. They would tend to think that if effective altruists are concerned with finding the truth about risks posed by future developments in artificial intelligence, effective altruists would do well to draw from a wider range of perspectives and approaches. They would tend to see developments such as this contest as primarily confirmatory, unlikely to substantially shift group views and quite likely to reinforce them. They would suggest that those developments could be redesigned in a more truth-seeking way by incorporating a broader range of perspectives in deliberation.
Larks @ 2023-03-13T02:26 (+13)
These seem like arguments for OpenPhil to hire people with a broad range of perspectives, and to solicit contest submissions from a broad range of people, but not to adjust the judges. It doesn't benefit OpenPhil at all if, having put e.g. a social conservative on the board of judges, the winner does so by appealing to her with arguments that OpenPhil does not find compelling. OpenPhil is uniquely qualified to judge what arguments they have found informative.
David Thorstad @ 2023-03-14T15:23 (+4)
It might be worth considering whether the goal of this contest is to produce arguments that OpenPhil finds compelling and informative, or to produce arguments that are compelling and informative.
These would not be arguments in favor of the conclusion that a broader range of perspectives is a useful way to produce arguments that OpenPhil finds compelling and informative. The best way for OpenPhil to produce arguments that OpenPhil finds compelling and informative would be to select judges exclusively from its own membership, and that is what they have done.
They would instead be arguments in favor of the conclusion that a broader range of perspectives is a useful way to produce arguments that actually are compelling and informative, as well as to avoid a number of known biases and failure modes in group deliberation.
David Mathers @ 2023-03-14T16:43 (+2)
What procedure would you recommend for how Open Phil chooses between allocating money to AI versus allocating it to other causes? Would you recommend essentially the same procedure for:
- A university deciding whether to fund a new department
- A local council deciding what budget cuts to make after an unexpected loss of central government funding
- A CEO setting corporate strategy
?
David Thorstad @ 2023-03-14T17:48 (+14)
Ordinarily, a philanthropic foundation offering a prize meant to advance scientific understanding of some topic X would put at most one or two of its own members on the prize panel. The rest of the panel would be composed of leading scientists, academics, industry professionals, and perhaps a few policymakers. They might also consider inviting leaders of relevant foundations. Most members of the panel would be chosen for specific expertise in topic X combined with broad respect and experience within their fields, although a few panelists might be chosen to represent generalist constituencies (for example, a university president). Members would typically be at mid- or late-career stages, and have substantial research records of their own as well as the esteem of their peers. They might, as appropriate, draw on a broader pool of peer reviewers or nominators in early rounds of the selection process.
Prizes would typically be broadly advertised, and left open for a sufficient period to allow original research (at least six months). They would encourage submissions of a standard length for original research contributions, rather than discouraging submissions greater than 5,000 words. If the focus was solely on the individual piece of submitted work, the review process would be double- or triple-blinded and announced as such.
I could go on, but I take it that all of the above are fairly standard.
David Mathers @ 2023-03-14T18:35 (+10)
Point taken: I have a better idea of what you mean when you make it concrete in that way.
Nathan Young @ 2023-05-05T17:08 (+15)
I dunno, I feel like I want OpenPhil to reward based on what changes their minds. Having external judges feels kind of meaningless.
David Thorstad @ 2023-05-05T19:01 (+4)
It would be more likely to find the truth
Nathan Young @ 2023-05-06T01:36 (+15)
But they want specific changes to their plans.
David Mathers @ 2023-03-14T19:18 (+9)
It'd be nice if some of the people who disagree-voted here could say why they think using outside judges would be a bad idea.
Chris Leong @ 2023-05-06T02:17 (+8)
It's not necessary that it would be a bad idea. It's just that there are two different ways to run this competition, and I think that the way Open Philanthropy is doing it is fine and there's no need to push them to change.
On the other hand, it also makes sense for them to set up another competition afterward that attempts to form more of a consensus on this issue.
David Mathers @ 2023-03-14T16:25 (+4)
Interestingly, the belief that there is an X-risk from AI might not be all that niche, relative to the US public as a whole, though obviously Open Phil probably has other views that are niche in that context:
https://www.monmouth.edu/polling-institute/reports/monmouthpoll_us_021523/
'A majority (55%) of Americans are now worried at least somewhat that artificially intelligent machines could one day pose a risk to the human race’s existence.' Of course, it's unclear exactly what "could" means in this sort of context. But Monmouth is a reputable pollster, I think(?), and not everyone at Open Phil is a Yudkowsky-style doomer who thinks doom is near certain.
Not that this means you're wrong to say they are niche in "the field", whatever exactly that is. (And to be clear, I actually am inclined to agree that having judges from outside Open Phil with different views would in theory be an improvement.)
EDIT: To be clear, I personally think it is very unlikely (maybe 1 in 1000) that we will go extinct because of misaligned AI by 2100, so I'm not just defending a view I hold here.
David Johnston @ 2023-03-10T03:13 (+8)
Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?
Requesting a few clarifications:
- I think of existential catastrophes as things like near-term extinction rather than things like "the future is substantially worse than it could have been". Alternatively, I tend to think that existential catastrophe means a future that's much worse than technological stagnation, rather than one that's much worse than it would have been with more aligned AI. What do you think?
- Are we considering "loss of control over an AGI system" as a loss of control over a somewhat monolithic thing with a well-defined control interface, or is losing control over an ecosystem of AGIs also of interest here?
Jason Schukraft @ 2023-03-10T13:15 (+4)
Hi David,
Thanks for your questions. We're interested in a wide range of considerations. It's debatable whether human-originating civilization failing to make good use of its "cosmic endowment" constitutes an existential catastrophe. If you want to focus on more recognizable catastrophes (such as extinction, unrecoverable civilizational collapse, or dystopia) that would be fine.
In a similar vein, if you think there is an important scenario in which humanity suffers an existential catastrophe by collectively losing control over an ecosystem of AGIs, that would also be an acceptable topic.
Let me know if you have any other questions!
Mitchell Reynolds @ 2023-04-07T21:51 (+3)
For Question 2, should each submission define what timeframe they're considering for "will suffer"?
Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?
I understand two timeframes here - one explicit and one implicit. The explicit timeframe of "by 2070" makes sense to me.
The implicit timeframe of "will suffer" is ambiguous to me and therefore should be defined in the submission. Open Philanthropy seems to emphasize this century's importance. I plan to limit my estimate and reasoning to "catastrophe by the end of this century." For this contest, it seems unlikely the judges want to understand the tail of the yearly distribution (i.e. AGI deployed by 2070 but goes rogue in 2500 for some esoteric reason).
paul_dfr @ 2023-05-15T15:50 (+1)
You note above that you encourage posting the entry as a forum post. Do you have a preference regarding whether entries are written primarily as research papers or as forum posts? I imagine this makes some difference to style and referencing.
Jason Schukraft @ 2023-05-15T17:22 (+3)
Hi Paul, thanks for your question. I don't have an intrinsic preference. We encourage public posting of the entries because we believe that this type of investigation is potentially valuable beyond the narrow halls of Open Philanthropy. If your target audience (aside from the contest panelists) is primarily researchers, then it makes sense to format your entry according to the norms of the research community. If you are aiming for a broader target audience, then it may make sense to structure your entry more informally.
When we grade the entries, we will be focused on the content. The style and referencing won't (I hope) make much of a difference.
paul_dfr @ 2023-05-15T18:01 (+1)
That's very helpful, thank you!
NicholasKross @ 2023-04-25T01:35 (+1)
How would you respond to essays that are substantially or mostly in the form of bullet-points, lists, tables, and other information organization methods besides prose? (Prior discussion here, here, and here, to get a sense of why I'm interested in doing this.)
Jason Schukraft @ 2023-04-26T19:49 (+6)
Hi Nicholas,
The details and execution probably matter a lot, but in general I'm fine with bullet-point writing. I would, however, find it hard to engage with an essay that was mostly tables with little prose explaining the relevance of the tables.
NicholasKross @ 2023-04-29T02:10 (+1)
OK, thanks! Also, after more consideration and object-level thinking about the questions, I will probably write a good bit of prose anyway.
NicholasKross @ 2023-04-15T22:09 (+1)
I have a question.
IF:
- we can submit multiple entries (but only one will win), AND
- judging is based on 67% uncovering considerations and 33% clarifying concepts,
THEN, would you prefer if I:
- make one large entry that puts all my research/ideas/information in one place, OR
- make several smaller entries, each one focusing on a single idea?
(Assuming this is for answering one question. Presumably, since multiple entries are allowed, I could duplicate this strategy for the other question, or even use a different one for each. But if I'm wrong about this, I'd also like to know that!)
Jason Schukraft @ 2023-04-18T23:46 (+6)
Hi Nicholas,
Thanks for your question. It's a bit difficult to answer in the abstract. If your ideas hang together in a nice way, it makes sense to house them in a single entry. If the ideas are quite distinct and unrelated, it makes more sense to house them in separate entries. Another consideration is length. Per the contest guidelines, we're advising entrants to shoot for a submission length around 5,000 words (though there are no formal word limits). All else equal, I'd prefer three 5,000-word entries to one 15,000-word entry, and I'd prefer one 5,000-word entry to ten 500-word entries.
Hope this helps.
Jason
NicholasKross @ 2023-04-20T02:48 (+1)
These details help, thank you!
Phil Shirts @ 2023-03-10T04:04 (+1)
I guess the requirement that all authors be eighteen years of age or older rules out all(?) current high-performing non-human-generated essay sources, e.g., from Virtual Beings generically, ChatGPT, etc. In general, though, can AI/ML instances be used in the generation of raw text for this contest, besides as an example?
Jason Schukraft @ 2023-03-10T13:20 (+3)
Hi Phil - just to clarify: the entries must entirely be the original work of the author(s). You can cite others and you can use AI-generated text as an example, but for everything that is not explicitly flagged as someone else's work, we will assume it is original to the author.