EA Forum Prize: Winners for March 2020
By Aaron Gertler 🔸 @ 2020-05-13T10:09 (+26)
CEA is pleased to announce the winners of the March 2020 EA Forum Prize!
In first place (for a prize of $750): “Effective altruism and free riding,” by sbehmer.
In second place (for a prize of $500): “The case for building more and better epistemic institutions in the effective altruism community,” by Stefan Torges.
In third place (for a prize of $250): “Effective Animal Advocacy Nonprofit Roles Spot-Check,” by Jamie Harris.
The following users were each awarded a Comment Prize ($50):
- Arden Koehler and Richard Ngo on key ongoing debates in EA
- smclare and Derek for detailed feedback on charities’ impact estimates
- jackva on the drawbacks of Drawdown
For the previous round of prizes, see this post.
What is the EA Forum Prize?
Certain posts and comments exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.
The Prize is an incentive to create content like this. But more importantly, we see it as an opportunity to showcase excellent work as an inspiration to the Forum's users.
About the winning posts and comments
Note: I write this section in first person based on my own thoughts, rather than by attempting to summarize the views of the other judges.
Effective altruism and free riding
This post describes issues that could apply to nearly every kind of EA work, with clear negative consequences for everyone involved. I especially liked the problem statement in this passage:
The key intuition is that in an uncooperative setting each altruist will donate to causes based on their own value system without considering how much other altruists value those causes. This leads to underinvestment in causes which many different value systems place positive weight on (causes with positive externalities for other value systems) and overinvestment in causes which many value systems view negatively (causes with negative externalities).
The post supports this point with a well-structured argument. Elements I especially liked:
- The use of tables to demonstrate a simple example of the problem
- References to criticism of EA from people outside the movement (showing that “free-riding” isn’t just a potential issue, but may be influencing how people perceive EA right now)
- References to relevant work already happening within the movement (so that readers have a sense for existing work they could support, rather than feeling like they’d have to start from scratch in order to address the problem)
- The author starting their “What should we do about this?” section by noting that they weren’t sure whether “defecting in prisoner’s dilemmas” was actually a bad thing for the EA community to do. It’s really good to distinguish between “behavior that might look bad” and “behavior that is actually so harmful that we should stop it.”
The case for building more and better epistemic institutions in the effective altruism community
Like the prior post, this post contains a well-structured argument for addressing a problem that could be dragging down the overall impact of EA work across many different areas. You could summarize the main point in a way that makes it seem obvious (“EA should try to figure things out in a better way than it does now”), but in doing so, you’d be ignoring the details that make the post great:
- Pointing out examples of things the community has done that pushed EA in the right direction (e.g. influential criticism, expert surveys) in order to show that we could do even more work along the same lines.
- Comparing one reasonable proposal (better institutions) to other reasonable proposals (better norms, other types of institution, focusing on growth over institution-building) without arguing too vociferously in favor of the first proposal. I liked the language “I sketch a few considerations,” where some posts might have used “I show how X is superior to Y and Z.”
If you read this post, I also strongly recommend reading the comments! (This applies to the post above as well.)
Effective Animal Advocacy Nonprofit Roles Spot-Check
Many people have strong opinions on the state of the EA job market, but it can be difficult to find enough data to support any particular viewpoint. I appreciate AAC’s efforts to chase down facts, and to present its methodology and results very clearly. I don’t have much to say about the style or structure of this post; it’s just clear and thorough, and I’d be happy to hear about other researchers using it as a template for presenting their own work.
(One note: I like that the “limitations” section also includes suggestions for further research. Posts that show how others can build on them seem likely to encourage further intellectual progress.)
The winning comments
I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize.
The voting process
The winning posts were chosen by a panel of six judges: Aaron Gertler, Vaidehi, Rob, Peter, Larks, and Khorton.
All posts published in the titular month qualified for voting, save for those in the following categories:
- Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)
- Posts linking to others’ content with little or no additional commentary
- Posts which accrued zero or negative net karma after being posted
- Example: a post which had 2 karma upon publication and wound up with 2 karma or less
Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.
Judges each had ten votes to distribute among the month’s posts. They also had a number of “extra” votes equal to [10 - the number of votes made last month]. For example, a judge who cast 7 votes last month would have 13 this month. No judge could cast more than three votes for any single post.
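For anyone who finds the vote-budget rule easier to read as code, here is a minimal sketch of the arithmetic described above. It is not part of the official process; the function name and parameters are illustrative only.

```python
def vote_budget(last_month_votes: int, base: int = 10) -> int:
    """Votes available this month: the base ten, plus any votes left unused last month."""
    return base + (base - last_month_votes)

# Matches the example in the post: a judge who cast 7 votes last month has 13 this month.
assert vote_budget(7) == 13
```

The separate cap of three votes per post is applied when votes are cast, not when the budget is computed.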
——
The winning comments were chosen by Aaron Gertler, though the other judges had the chance to nominate other comments and to veto comments they didn’t think should win.
If you have thoughts on how the Prize has changed the way you read or write on the Forum, or ideas for ways we should change the current format, please write a comment or contact me.
MaxRa @ 2020-05-13T19:28 (+11)
Thanks! I think the prizes are a great idea, and I'm glad there is so much great content that deserves them.
I noticed that you stopped explaining why each person is on the committee, and that you added one more person, so I got curious.
Aaron Gertler @ 2020-05-15T00:36 (+5)
Vaidehi recently became a Forum moderator; the prize judges are now two moderators (myself and Vaidehi), two people who had a lot of karma when the Prize started (Rob and Peter), and two people (Larks and Khorton) who were added after writing a lot of good posts/comments on the new Forum (post-November 2018).
brb243 @ 2020-05-13T11:33 (+2)
From what I read regarding the committee's rationale for selecting "Effective altruism and free riding," I infer that good posts:
1) are visually concise (e.g. use tables, highlights, heading structures, infographics)
2) build on/respond to existing EA work when possible
3) recommend actionable items that EAs may follow
4) incorporate external perspectives when possible
Am I right? Should this be formalized and perhaps an example created, in order to facilitate information exchange and to promote meaningful actions?
Aaron Gertler @ 2020-05-15T00:39 (+3)
Some of this is true (using clear structures and visuals where appropriate, incorporating external perspectives where possible). On the other hand, some great content doesn't build on existing work or recommend action.
The Prize exists in large part to "formalize" what good content looks like to a reasonable extent, using actual content; I don't think you could capture everything about a great post with an artificial example or two, but if someone were to skim through a couple of Prize posts, I think they'd get the right idea about what we (the judges) think is valuable.
brb243 @ 2020-05-16T10:22 (+1)
OK, that makes sense, thank you.