EA Forum Prize: Winners for May 2019
By Aaron Gertler 🔸 @ 2019-07-12T01:48 (+25)
Note: This writeup was delayed by EA Global and my taking some vacation days. I’ll publish the winners of the June prize later this month.
CEA is pleased to announce the winners of the May 2019 EA Forum Prize!
In first place (for a prize of $999): “High School EA Outreach”, by cafelow.
In second place (for a prize of $500): “Ingredients for creating disruptive research teams”, by stefan.torges.
In third place (for a prize of $250): “Aligning recommender systems as cause area”, by IvanVendrov and Jeremy_Nixon.
For the previous round of prizes, see our April post.
What is the EA Forum Prize?
Certain posts exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.
The Prize is an incentive to create posts like this. But more importantly, we see it as an opportunity to showcase excellent content as an example and inspiration to the Forum's users.
About the winning posts
Note: I write this section in first-person based on my own thoughts, rather than by attempting to summarize the views of the other judges.
—
Every day, thousands of people around the world work on building the EA community — whether they’re organizing a conference or just talking to a friend about effective giving.
Organizations like CEA try to collect reports on this work and share overall lessons with the rest of the community, but much of our experience remains locked in the memories of the individuals who ran particular projects.
“High School EA Outreach” is a brilliant attempt to solve this problem in one particular area — as the name implies, outreach to high school students. Catherine Low compiled stories from a dozen contributors into a post that offers valuable lessons to anyone who ever wants to run a high school project, whether it’s a class, a fundraiser, or a distribution of lesson plans.
One especially notable feature of this post: Multiple contributors shared their own separate conclusions, each with slightly different takeaways. This spares any one author the need to create a comprehensive summary and lets readers see the data from multiple perspectives. It’s rare for co-authored pieces to acknowledge where their authors’ views differ; I wish it were more common.
—
Stefan Torges’ “Ingredients for creating disruptive research teams” is a thorough, well-done literature review — one that wouldn’t be out of place in an academic journal, save for an additional section on what Torges’ organization (the Effective Altruism Foundation) took away from the research. While I thought the entire review was excellent, the takeaway section was the part that excited me most; it gives readers who work at research-focused organizations a sense of how they might begin to apply the lessons themselves.
Related: In November, Torges won a Forum Prize for “Takeaways from EAF’s Hiring Round,” which also took readers inside his organization’s operations. It’s rare that EA organizations offer such a close look at their internal processes, but the more often it happens, the easier it becomes for established organizations to learn from each other, and for newly-founded orgs to get off to a strong start.
—
Every cause area starts somewhere. And while I’m not sure whether improving YouTube recommendations or fixing the News Feed will become a major focus of EA research, I commend Ivan Vendrov and Jeremy Nixon for crafting a coherent vision for how we might approach the problem of “aligning recommender systems.”
Alongside a straightforward discussion of the scale of these systems' influence (they shape hours of daily experience for hundreds of millions of people), the authors present a fascinating argument that certain features of these commercial products map onto longstanding problems in AI alignment. This broad scope seems appropriate for an introduction to a new cause — I’m happy to see authors make the most comprehensive case they can, since further research can always moderate their conclusions.
(It helps that Vendrov and Nixon freely admit the low confidence levels around their specific numbers and discuss the risks behind this work — they want to inform, not just persuade.)
Finally, I appreciated the next-to-last section (“Key points of uncertainty”), which leaves a set of open questions for other authors to tackle and creates convenient cruxes for debate.
The voting process
Prizes were chosen by seven people:
- Two Forum moderators (Aaron Gertler and Denise Melchin).
- Three of the highest-karma users at the time the new Forum was launched (Peter Hurford, Joey Savoie, and Rob Wiblin).
- New this month: Two users who have a recent history of strong posts and comments (Larks and Khorton).
All posts published in the month of May qualified for voting, save for those in the following categories:
- Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)
- Linkposts with no additional content
- Posts which accrued zero or negative net karma after being posted
  - Example: a post which had 2 karma upon publication and wound up with 2 karma or less
Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though these criteria broadly align with the goals outlined above.
Winners were chosen by an initial round of approval voting, followed by a runoff vote to resolve ties.
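For readers curious how a two-stage vote like this can be tallied, here is a minimal sketch, assuming a simple ballot format. The function name, the example ballots, and the tie-break rule (summing runoff rankings) are illustrative assumptions for this post, not CEA's actual tooling or exact procedure.

```python
from collections import Counter

def pick_winners(approval_ballots, runoff_ballots, n_winners=3):
    """Illustrative tally: approval voting first, then a runoff to resolve ties.

    approval_ballots: one set of approved post IDs per judge.
    runoff_ballots: one ranked list of posts per judge, used only to break ties.
    (Both formats are assumptions made for this sketch.)
    """
    approvals = Counter()
    for ballot in approval_ballots:
        approvals.update(ballot)

    def runoff_score(post):
        # Lower total rank across runoff ballots wins the tie-break;
        # posts no judge ranked sort last among the tied group.
        ranks = [ballot.index(post) for ballot in runoff_ballots if post in ballot]
        return sum(ranks) if ranks else float("inf")

    # Sort by approval count (descending), then by runoff score (ascending).
    ranked = sorted(approvals, key=lambda p: (-approvals[p], runoff_score(p)))
    return ranked[:n_winners]

# Hypothetical example: posts A–D, three judges.
winners = pick_winners(
    approval_ballots=[{"A", "B"}, {"A", "C"}, {"B", "C", "D"}],
    runoff_ballots=[["A", "B", "C"], ["B", "A", "C"], ["B", "C", "A"]],
)
print(winners)  # ['B', 'A', 'C'] — the runoff breaks the three-way approval tie
```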
New this month:
- Posts written by judges are now eligible for the Prize (though, as noted above, judges can’t vote for their own posts). After some discussion, we came to the conclusion that volunteering as a judge shouldn’t deprive someone of the opportunity to be rewarded for an especially good post.
- We’ve streamlined the voting process by marking certain categories of post as ineligible for prizes (as noted above). This reduces the number of posts that judges must consider, while maintaining the goals of the Prize.
- While linkposts are valuable, we want to use the Prize to reward authors who create original content (or add context and commentary to content published elsewhere).
- We also want the Prize to reward posts that Forum users generally found valuable. Thus, we exclude posts which have no votes, or which accrued more negative than positive karma after publication. (Beyond this, karma is not considered as a factor in voting, unless an individual judge decides to do so.)
Feedback
If you have thoughts on how the Prize has changed the way you read or write on the Forum, or ideas for ways we should change the current format, please write a comment or contact Aaron Gertler.
Milan_Griffes @ 2019-07-12T22:42 (+7)
> New this month: Two users who have a recent history of strong posts and comments (Larks and Khorton)
Could you say more about the process by which Larks & Khorton were added to the roster of people who have a vote?
(I'm pretty sure I've been commenting & posting at roughly the same cadence as them. No one approached me about this, so I'm curious about the process here.)
aarongertler @ 2019-07-17T01:11 (+13)
After Julia decided to step down, I proposed a list of six Forum users who I thought might be good candidates. She and I discussed the options and decided to begin by reaching out to Larks and Khorton, who both accepted; if they hadn't, I'd have approached other candidates who I believe would also be solid judges.
(There are many more than six contributors I'd be open to considering; the original shortlist was just six people who quickly came to mind, among whom I expected we'd get at least two "yes" responses.)
I wanted to start with a relatively small addition, but there's a good chance that the roster will expand later on. I can imagine getting up to a group of 8-10 people without the Prize becoming too difficult to coordinate, and I also wouldn't be surprised if people sometimes joined up for a couple of months and then stepped down, based on their available time.
Julia_Wise @ 2019-07-17T02:28 (+18)
I also thought that since both Larks and Khorton have provided useful criticism of CEA's work, and since the panel already has several CEA-affiliated judges, one advantage of the two new judges is that they move us away from any existing pro-CEA slant. Not that they're the only people in this category, but we thought they were good representatives of people who have expressed fair criticisms of CEA.
MichaelPlant @ 2019-07-13T20:14 (+8)
As an active forum user, I would also be curious to hear about this.
Khorton @ 2019-07-13T23:13 (+7)
I don't know the answer, so I'm also kind of curious!