Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest

By Jason Schukraft @ 2022-11-21T21:45 (+291)

[Update 3: The winners have been selected and notified, and will be publicly announced no later than the end of September.]

[Update 2: The contest has now officially launched! See here for the announcement.]

[Update: Work posted after September 23, 2022 (and before whatever deadline we establish) will be eligible for the prizes. If you are sitting on great research, there's no need to delay posting until the formal contest announcement in 2023.]

At Open Philanthropy we believe that future developments in AI could be extremely important, but the timing, pathways, and implications of those developments are uncertain. We want to continually test our arguments about AI and work to surface new considerations that could inform our thinking.

We were pleased when the Future Fund announced a competition earlier this year to challenge their fundamental assumptions about AI. We believe this sort of openness to criticism is good for the AI, longtermist, and EA communities. Given recent developments, it seems likely that that competition is no longer moving forward.

We recognize that many people have already invested significant time and thought into their contest entries. We don’t want that effort to be wasted, and we want to incentivize further work in the same vein. For these reasons, Open Phil will run its own AI Worldviews Contest in early 2023.

To be clear, this is a new contest, not a continuation of the Future Fund competition. There will be substantial differences between the two.

The spirit and purpose of the two competitions, however, remain the same. We expect it will be easy to adapt Future Fund submissions for the Open Phil contest.

More details will be published when we formally announce the competition in early 2023. We are releasing this post now to try to alleviate some of the fear, uncertainty, and doubt surrounding the old Future Fund competition, and to capture some of the value that competition has already generated before it dissipates.

We are still figuring out the logistics of the competition, and as such we are not yet in a position to answer many concrete questions (e.g., about deadlines or prize amounts). Nonetheless, if you have questions about the contest you think we might be able to answer, you can leave them as comments below, and we will do our best to answer them over the next few weeks.


Peter Wildeford @ 2022-11-21T22:04 (+33)

Thank you so much for doing this!

The implicit extended timeline (no longer due Dec 23) is also very welcome.

c.trout @ 2022-11-22T22:34 (+26)

Thank you for carrying this forward! 

One comment: as with the old contest, I strongly support the decision to use a number of judges who are outside of/independent from EA. I fear the EA bubble is becoming something of an echo chamber: this is a great opportunity to verify whether such fears are well-founded and, if so, to provide a check on this detrimental effect.

Michael_Cohen @ 2022-11-22T12:28 (+13)

Glad to hear about this!

I have a recommendation for its structure. I'd recommend that anonymous reviewers review submissions and share their reviews with the authors (perhaps privately) before a rebuttal phase (also perhaps private). Reviewers could then revise their reviews, and chairs could make judgments about which submissions to publish.

Jordan Arel @ 2022-11-22T21:53 (+12)

Fantastic news!!! My main question:

The Future Fund AI Worldview Prize had specific, very bold criteria, such as raising or lowering past certain thresholds the probability estimates of transformative AI timelines, or of an AI-related catastrophe given certain timelines.

Will this AI Worldview Prize have very similar criteria, or do you have any intuitions what these criteria might be?

This would be very helpful for researchers like myself deciding whether to continue on a particular line of research!

Zach Stein-Perlman @ 2022-11-21T22:47 (+9)

I'm not sure contests like this are a good idea, but pre-announced contests are better than spontaneous contests in cases like this, so yay.

It would be even better if you clarified that current posts are eligible, so that people don't save their posts until you announce the details.

Quadratic Reciprocity @ 2022-11-22T12:46 (+12)

What are the reasons against contests like this being a good idea?

Zach Stein-Perlman @ 2022-11-22T20:03 (+3)

I might write this up someday, but briefly:

  1. I'm skeptical that they increase quality-adjusted work going into the area much (particularly if you subtract the value of the work that people would have done if not for the contest).
  2. I'm skeptical that they better distribute work within the area.
  3. I'm skeptical that they redistribute money well.
  4. I'm skeptical that they have many other benefits.

(Edit: that said, some contests can certainly achieve #1, and some can certainly have substantial other benefits.)

porby @ 2022-11-23T05:19 (+8)

As one datapoint, the time spent on my entry to the original worldview prize was strictly additive. I have a grant to do AI safety stuff part time, and I still did all of that work; the work I didn't do that week was all non-AI business.

It's extremely unlikely that I would have written that post without the prize or some other financial incentive. So, to the extent that my post had value, the prize helped make it happen.

That said, when I saw another recent prize, I did notice the incentive for me to conceal information to increase the novelty of my submission. I went ahead and posted that information anyway because that's not the kind of incentive I want to pay attention to, but I can see how the competitive frame could have unwanted side effects.

Jason @ 2022-11-22T00:01 (+6)

Even more specifically, it would be helpful to confirm that work published on or after the date of the Future Fund's announcement on 23rd Sep 2022 is eligible (if that is actually the case).

Jason Schukraft @ 2022-12-08T14:42 (+2)

Thanks Jason. I can now confirm that that is indeed the case!

basil.halperin @ 2022-11-23T18:11 (+1)

^seconding this question 😊

Jason Schukraft @ 2022-12-08T14:40 (+2)

Hi Zach, thanks for the question and apologies for the long delay in my response. I'm happy to confirm that work posted after September 23, 2022 (and before whatever deadline we establish) will be eligible for the prize. No need to save your work until the formal announcement.

paul_dfr @ 2023-02-05T00:35 (+5)

Thank you for organizing this! I have two questions. First, is there any update regarding when the official announcement will be made? Second, will essays submitted to other competitions, or for publication, be eligible? In other words, is there any risk that submitting research elsewhere prior to the announcement of the competition will render it ineligible for the competition?

Jason Schukraft @ 2023-02-06T21:12 (+7)

Thanks for your questions!

We plan to officially launch the contest sometime in Q1 2023, so end of March at the latest.

I asked our in-house counsel about the eligibility of essays submitted to other competitions/publications, and he said it depends on whether by submitting elsewhere you've forfeited your ability to grant Open Phil a license to use the essay. His full quote below:

Essays submitted to other competitions or for publication are eligible for submission, so long as the entrant is able to grant Open Phil a license to use the essay. Since we plan to use these essays to inform our future research and grantmaking, we need a license to be able to use the IP. Our contest rules will state that by submitting an entry, each entrant grants a license to Open Phil to use the entry to further our mission. If you had previously submitted an essay to another contest or for publication, you should check the terms and conditions of that contest/publication to confirm they do not now have exclusive rights to the work or in any way prohibit you from granting a license to someone else to use it.

paul_dfr @ 2023-02-08T05:13 (+1)

Thanks for a great answer! That's very helpful.

srhoades10 @ 2023-03-01T01:40 (+1)

Hello, checking in on any updates to the Open Phil contest. I look forward to submitting an entry soon!

Jason Schukraft @ 2023-03-01T13:13 (+3)

We are just ironing out the final legal details. The official announcement will hopefully go live by the end of next week. Thanks for checking!

Noah Scales @ 2022-11-23T04:00 (+4)

Here are a few ideas relevant to an AI contest that could be helpful:

I think it's a good idea to continue the prize to the extent that it encourages AI safety research directly. My impression of the original prize was that it could encourage AGI development without necessarily encouraging AI safety development, because its questions required more knowledge and consideration of AGI development than of AGI safety.

Greg_Colbourn @ 2022-11-25T10:01 (+3)

Awesome news, thanks! Looking forward to hearing more about the operationalization and logistics.

Wondering if there could be a way to incorporate the fact that doom is conditional on the year in which TAI is developed (i.e., how well developed AI alignment/strategy/governance is when TAI is possible)? P(doom|TAI in year 20xx) and P(10% chance of TAI in year 20xx) are both important questions.
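To make the dependence explicit, the decomposition gestured at here can be written via the law of total probability over the arrival year (a rough sketch, assuming TAI first arrives in exactly one year and setting aside the possibility that it never arrives):

$$P(\text{doom}) \;=\; \sum_{y} P(\text{doom} \mid \text{TAI first arrives in year } y)\, P(\text{TAI first arrives in year } y)$$

On this framing, pinning down an overall P(doom) requires estimates of both the conditional doom probabilities and the arrival-year distribution, which is why both questions matter.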

Steven Cuppen @ 2022-12-20T14:11 (+2)

The FTX contest description listed "two formidable problems for humanity": 

"1. Loss of control to AI systems
Advanced AI systems might acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future.

2. Concentration of power
Actors with an edge in advanced AI technology could acquire massive power and influence; if they misuse this technology, they could inflict lasting damage on humanity’s long-term future."

My sense is that the contest is largely framed around (1) at the neglect of (2). Nick Beckstead's rationale behind his current views is based around a scenario involving power-seeking AI, whereas arguably scenarios related to (2) don't require the existence of AGI in the first place, which is central to the main forecasting question. It seems AI developments short of AGI could be enough for all sorts of disruptive changes with catastrophic consequences, for instance in geopolitics.

Based on my limited understanding, I'm often surprised how little focus there is within the AI safety community on human misuse of (non-general) AI. In addition to not requiring controversial assumptions about AGI, these problems also seem more tractable, since we can extrapolate from existing social science and have a clearer sense of what the problems could look like in practice. This might mean we can forecast more accurately, and my current sense is that it's not obvious AI-related catastrophic consequences are more likely to come from AGI than from human misuse (of non-AGI).

Maybe it would be helpful to frame the contest more broadly around catastrophic consequences resulting from AI. 

Dan Oblinger @ 2022-12-02T18:28 (+2)

Supporting the community with this new competition is quite valuable. Thanks!

Here is an idea for how your impact might be amplified: for every researcher who somehow has full-time funding to do AI safety research, I suspect there are 10 qualified researchers with interest and novel ideas to contribute but who will likely never be funded full time for AI safety work. Prizes like these can enable this much larger community to participate in a very capital-efficient way.

But such "part-time" contributions are likely to unfold over longer periods, and ideally would involve significant feedback from the full-time community in order to maximize the value of those contributions.

The previous prize required that all submissions be never-before-published work. I understand the reasoning here: they wanted to foster NEW work. Still, this rule throws a wet blanket on any part-timer who might want to gain feedback on ideas over time.

Here is an alternate rule that might have fewer unintended side effects: only the portions of one's work that have never been awarded prize money in the past are eligible for consideration.

Such a rule would allow a part-timer to refine an important contribution with extensive feedback from the community over an extended period of time. Biasing towards fewer, higher-quality contributions in a field with so much uncertainty seems a worthy goal. Biasing towards greater numbers of contributors in such a small field also seems valuable from a diversity-of-thinking perspective.

TedSanders @ 2023-03-12T06:52 (+1)

Any update on when "early 2023" will be?

Lorenzo Buonanno @ 2023-03-12T09:43 (+5)

It was announced two days ago: https://forum.effectivealtruism.org/posts/NZz3Das7jFdCBN9zH/announcing-the-open-philanthropy-ai-worldviews-contest

Jason Schukraft @ 2023-03-13T12:59 (+3)

Thanks both - I just added the announcement link to the top of this page.

Jotto @ 2022-12-10T19:48 (+1)

Thank you! I was struggling to finish a forecasting essay for the Future Fund's prize. I intend to submit something regardless of whether there's prize money, but prize money surely would help orient effort anyway. Resources are finite.