Notes on how prizes may fail and how to reduce the risk of them failing
By Peter Wildeford @ 2022-08-30T18:57 (+89)
A lot of organizations in EA are working on prizes right now. I'm skeptical that they will work well without some particular effort - here are some of my thoughts on them.
Though looking at the OP cause prioritization prize, Effective Ideas Prize, and the EA Criticism Prize, it seems like a lot of prizes are actually going pretty well so far.
(This is an experiment: ordinarily I would've spent time to make this a longer post, but after a few months I'm clearly not going to make time for that, so instead I'm posting the outline without working hard to turn it into a proper post. Let me know what you think about this format and I may do it more. Further note that this is a personal post written on my own and not as a result of my role at Rethink Priorities.)
Why prizes are good
- It’s easier to evaluate retrospectively than prospectively - you only have to pay for the things you like (or at least whatever you like most).
- It lets people audition for a role, potentially letting talent shine that wouldn't otherwise get recognized ex ante.
- They raise the salience of a particular need of grantmaker(s).
- They're helpful for providing explicit encouragement to get people to do things they probably would've been happy to do anyway.
- They provide credentials people can use in the future (e.g., "I was winner of {PRIZE}" goes on the resume).
Ways prizes fail
- Prizes are often not large enough to produce sufficient expected value
- Even if the expected value is good, people are not (and should not be) risk neutral with regard to personal finances
- Even if the expected value is good, people frequently do not have the startup capital to float themselves while working toward the prize
- There are illusions of transparency where prize makers think they have clearly articulated what they want but haven't, which leads to a lot of wasted time and disappointment
- There is counterparty risk where someone taking on the personal risk of doing the prize may do well but then not be evaluated the way they expect
Ideas on how to make prizes better
- Explicitly attempt to calculate the expected value for participants and then offer a large enough prize (see the sketch after this list)
- Be open to increasing the prize amount
- In addition to the prize, offer to evaluate people/teams prospectively as best you can and give them startup capital to attempt the prize
- Have smaller milestones that result in prizes to ensure people are on the same page and to reduce risk (the OP cause prioritization prize is great at this with honorable mentions)
- Offer lots of smaller prizes to honorable mentions/attempts to smooth out risk
- Invest a lot of time upfront to internally understand and externally communicate what you want and how you will judge things
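A minimal sketch of what that participant-side expected-value calculation could look like (every number here is an illustrative assumption, not taken from any actual prize):

```python
# Illustrative expected-value check for a prospective prize entrant.
# All figures below are made-up assumptions for the sake of the sketch.

prize_amount = 25_000                     # $ awarded to the winner
expected_entrants = 100                   # how many serious entries you expect
win_probability = 1 / expected_entrants   # naive baseline; adjust for entrant skill
hours_per_entry = 30                      # time a serious entry takes
opportunity_cost_per_hour = 60            # $ value of the entrant's next-best use of time

expected_payout = win_probability * prize_amount             # 250
expected_cost = hours_per_entry * opportunity_cost_per_hour  # 1,800
expected_value = expected_payout - expected_cost             # -1,550

print(f"Expected value per entrant: {expected_value:,.0f} USD")  # -1,550 USD
```

Even a naive version like this makes it easy to see when a prize is too small relative to the work it asks for, and risk aversion and the startup capital problem above only make the picture worse.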
Jakob @ 2022-08-30T19:12 (+14)
Thank you for writing this up - I’ve wanted to do the same for a while! I think the only thing I see missing is that prizes can raise the salience of some concept or nuance, and therefore serve as a coordination mechanism in more ways than you list (e.g., say that we want more assessments of long-term interventions using the framework from WWOTF of significance - durability - contingency, then a prize for those assessments would also help signal boost the framework)
Peter Wildeford @ 2022-08-30T19:50 (+5)
Cool! I added that
Peter Wildeford @ 2022-08-30T19:49 (+3)
Cool! I added that
JulianHazell @ 2022-08-30T20:28 (+2)
+1
I also think another similar bonus is that prizes can sometimes get people to do EA things who counterfactually wouldn't have.
E.g., some prize on alignment work could plausibly be done by computer scientists who otherwise would be doing other things.
This could signal boost EA/the cause area more generally, which is good.
Erich_Grunewald @ 2022-08-30T19:50 (+13)
Another downside is that it eats up quite a lot of time. E.g. if we take the Cause Exploration Prize and assume:
- there are 143 entries (the Forum shows 144 posts with that tag, one of which introduces the prize)
- an average entry takes ~27h to research and write up (90% CI 15-50h)
- an average entry takes ~1.4h to judge (90% CI 0.5-4h, but maybe I'm wildly underestimating this?)
then we get ~2 FTE years spent (90% CI 1.2-3.6 years). That's quite a lot of labour spent by engaged and talented EAs (and people adjacent to EA)!
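(A back-of-the-envelope version of that arithmetic, using the point estimates above and assuming a 2,000-hour FTE year:)

```python
# Rough check of the labour estimate (point estimates only, ignoring the CI maths).
entries = 143
hours_to_write = 27          # average research + write-up time per entry
hours_to_judge = 1.4         # average judging time per entry
hours_per_fte_year = 2_000   # assumed full-time-equivalent working year

total_hours = entries * (hours_to_write + hours_to_judge)
print(f"{total_hours:,.1f} hours ≈ {total_hours / hours_per_fte_year:.1f} FTE years")
# 4,061.2 hours ≈ 2.0 FTE years
```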
(Caveats: Those assumptions are only off-the-cuff guesses. It's not clear to me what the counterfactual is, but presumably some of these hours wouldn't have been spent doing productive-for-EA work. Also, I'm not sure whether, had you hired a person to think of new cause areas for 2 years, they would've done as well, and at any rate it would've taken them 2 years!)
Edit: To be clear, I'm not saying the Prize isn't worth it. I just wanted to point out a cost that may to some degree be hidden when the org that runs a contest isn't the one doing most of the labour.
DonyChristie @ 2022-08-30T21:00 (+22)
2 FTEs doesn't seem that bad to me for something as important as cause exploration and given how big the movement is? This just seems fine to me?
Guy Raveh @ 2022-08-30T21:57 (+9)
It's 2 years of full-time-equivalent work, but I think in these cases you get most of the value from it being done by so many different people rather than one person over two years. This gives you not only the advantage of parallelization, but also a diversity of perspectives, which is good for being more thorough in digging into different causes.
Secondly, I don't know how many people actually get a prize, but I think tons of these potential cause area writeups will be valuable in the future as the movement grows, regardless of whether OpenPhil decides to use them at this moment.
Jakob @ 2022-08-30T20:04 (+8)
I think the “get lots of input in a short time from a crowd with different semi-informed opinions” feature of prizes is hard to replace through other mechanisms. Some companies have built up extensive expert networks that they can call on-demand to do this, but that still doesn’t have quite the same agility. However, in those cases you may often want to compensate more than just the best entry (in line with the OP)
Amber Dawn @ 2022-08-30T19:32 (+7)
Thanks for this! Anecdotally, there are several prizes where I've fleetingly thought 'hmm, maybe I'll try and write something for that...', but then....not. In my case this is a combination of not being (able to be?) risk-neutral about personal finance and maybe feeling obligations to others more keenly than the pull to do something cool/fun. This might be more of a problem with me than with prizes, but I just wanted to add myself to the anecdotal pile of people who don't seem to 'respond well' to prizes as an incentive.
Kirsten @ 2022-08-30T19:57 (+10)
For me I think the potential cost to my reputation and self-esteem of publishing something really poor outweighs the potential benefit of winning even with a half-assed attempt
(Extra evidence for this: When contests are private and most people won't see my entry, I am more likely to submit! I've entered one contest without people seeing my work, and zero(?) where they do.)
Jakob @ 2022-08-30T20:06 (+10)
The flip side of this is that people with less existing “reputation stock” may see the potential status upside as the main compensation from a prize contest, and not the monetary benefit
Jakob @ 2022-08-30T19:20 (+7)
One interesting debate would be: what’s the optimal % of funding that should go to prizes? Which parameters would allow us to determine this? One can imagine that the % should be higher in communities that are struggling more to hire enough, or where research agendas are unclear so more coordination is needed, but should be lower in communities with people with low savings, or where the funders have capacity to diversify risks.
One additional consideration is that the coordination benefits from prizes (in raising the salience of memes or the status of the winners) come at an attention cost, so a large number of prizes may cannibalize our “common knowledge budget” (if there is a limit to how much common knowledge we can generate)
Nathan Young @ 2022-08-30T19:35 (+6)
Great post. I wish there was a way other people could contribute to it and flesh it out.
david_reinstein @ 2022-08-31T01:06 (+2)
Add suggestions using hypothes.is and Peter could fold in the ones he likes?
Vasco Grilo @ 2022-09-03T16:08 (+5)
"Let me know what you think about this format and I may do it more."
I like this format, especially if the counterfactual is no post!
Peter Wildeford @ 2022-09-03T17:51 (+3)
Thanks for the feedback!
jooke @ 2022-08-31T06:34 (+4)
I like this style of post with high density of information. I'm not sure a longer post would add additional value, especially after taking into account the additional time to write/read.
Emrik @ 2022-08-31T03:30 (+2)
I would add that I'm in favour of contests with as vague criteria as possible. I'd rather have a contest that says "$X for the best forum post!" than one that says "$X for the best forum post that fits criteria ABC". I think generally you want to incentivise people to work on and write down what they think are their most interesting ideas, as judged by themselves. Model them as having unexpectedly valuable insights in their brains, and you're using prizes to extract them.
But when you incentivise them to compromise what they are optimising for in order to fit your criteria, the results are likely to disappoint.[1]
[1] I think under some conditions, on a toy model 🦋, the extent to which the results disappoint you will be exponentially proportional to the extent you make them conjunctively-compromise what they optimise for.