Results from the First Decade Review

By Lizka @ 2022-05-13T15:01 (+163)

Tl;dr: 

  1. The Decade Review happened: EA Forum users voted on which posts from the first decade of effective altruism they found most useful and important, and reviewed those posts to explain what they appreciated.
  2. We’re awarding $12,500 in prizes to authors and reviewers.[1]
  3. The distribution of prizes to authors was based on your votes, the results of which are linked.
  4. I’m using this post to highlight some aspects of the winning posts and reviews. I encourage more discussion in the comments (and in new posts!).

Prizes for posts

We’re awarding a total of $10,000 to the top posts (and $2,500 for the top reviews). Note that some authors chose to donate their prizes. 

Summary

We’re awarding $10,000 to authors of the posts, broken down as follows.

  1. $1,500 to Hauke Hillebrandt and John G. Halstead ($750 each) for Growth and the case against randomista development
  2. $1,000 each to
    1. Helen for Effective Altruism is a Question (not an ideology)
    2. Jai for 500 Million, But Not A Single One More
    3. Greg Lewis for Beware surprising and suspicious convergence
    4. Nate Soares for On Caring
    5. Luisa Rodriguez for two posts(!): What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? and How bad would nuclear winter caused by a US-Russia nuclear exchange be?
  3. $500 each to
    1. David Althaus and Tobias Baumann ($250 each) for Reducing long-term risks from malevolent actors
    2. Will MacAskill for Are we living at the most influential time in history?
    3. Holly Elmore for We are in triage every second of every day
    4. “EA applicant” for After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation
    5. Brian Tomasik for Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness
    6. Julia Wise for You have more than one goal, and that's fine
    7. Peter Singer for Peter Singer – Famine, Affluence, and Morality

First prize ($1,500): Growth and the case against randomista development by Hauke Hillebrandt and John Halstead

Randomista development is “an approach to development economics [that focuses on] interventions which can be tested by randomised controlled trials (RCTs).” This is popular in effective altruism, but, as this post argues, might systematically undervalue or miss the interventions that have been responsible for some of the biggest improvements ever in quality of life. In particular, the authors discuss the importance of economic growth for human welfare and argue that the welfare gains from increasing GDP per capita in a country are so large that they outweigh the benefits of randomista development programs by orders of magnitude. As a result, they argue, researchers and grantmakers should spend much more time looking for interventions that can promote growth. The post also discusses objections and explores the limitations of GDP/growth, but emphasizes that this topic is under-prioritized by people in effective altruism. 

I like a lot of things about this post. I had to print it out and read it carefully (pen, highlighters, and all), and I don’t claim expertise in the subjects involved, but I understand enough to see that it makes some important and true points (although you should also read the comments on the post for a discussion of potential weaknesses and more disputable claims). 

I think more people should explore the value and tractability of economic growth, and more people might want to read this post as an introduction to that subject. Beyond that, it’s also a good example to follow if you’re considering writing or researching something. So I’m glad this post has won the first prize. 

Or, as Maxime CdS put it

Other things I like about this post (these might be some of the reasons people voted for it):

  1. It is well researched.
  2. It has a very specific summary (which lists the main arguments made and provides a rough outline of the post), and the section headers make it easy for readers to find what they are looking for.
  3. The best criticisms target deep and ubiquitous beliefs, and this post is a great example of that principle.

Second prizes ($1000 each)

Effective Altruism is a Question (not an ideology) by Helen

Feminism, secularism, and many other movements and worldviews answer questions like: “Should men and women be equal? (Yes.) What role should the church play in governance? (None.)” So it’s natural to ask: “What claims does ‘effective altruism’ make?” This post argues that, unlike those movements, effective altruism asks a question: “How can I do the most good, with the resources available to me?” 

I view the statement “effective altruism is a question” as aspirational: it is a motto we should use to keep ourselves on track. As Helen points out, a key conclusion of this mindset is that “our suggested actions and causes are best guesses, not core ideas.”[2]

This post, and the question behind it, has become a central part of my attitude towards EA — and judging by the fact that it won second place in this review, I’m not unique in this respect.

A review of this post also won a prize.

500 Million, But Not A Single One More by Jai

In 2018, Jai wrote, according to one of the reviewers, “one of the best pieces of EA creative writing of all time” — a retelling of the story of smallpox eradication. 

I don’t have much more to say, except perhaps that I agree with another reviewer that I’d like to see more work along these lines. A short excerpt:

We will never know their names.

The first victim could not have been recorded, for there was no written language to record it. They were someone’s daughter, or son, and someone’s friend, and they were loved by those around them. And they were in pain, covered in rashes, confused, scared, not knowing why this was happening to them or what they could do about it — victims of a mad, inhuman god. There was nothing to be done — humanity was not strong enough, not aware enough, not knowledgeable enough, to fight back against a monster that could not be seen.

Go read it.

Beware surprising and suspicious convergence by Gregory Lewis

This post added a tool to my cognitive toolkit. It points out that if you discover that two different beliefs, crucial considerations, or philosophies give the same outcome, you might want to be suspicious that you haven’t taken something to its real logical conclusion. 

Or, to use an example from the post:

Oliver: … Thus we see that donating to the opera is the best way of promoting the arts.

Eleanor: Okay, but I’m principally interested in improving human welfare.

Oliver: Oh! Well I think it is also the case that donating to the opera is best for improving human welfare too.

Beware of this phenomenon. 

Greg has other posts that I think enrich our cognitive toolkits.

On Caring by Nate Soares

Nate Soares explores scope insensitivity in this essay, where he discusses the fact that he’s “not very good at feeling the size of large numbers,” and that this gets dangerous when we use our feelings as the key factors for our altruistic choices:

My internal care-o-meter was calibrated to deal with about 150 people, and it simply can't express the amount of caring that I have for billions of sufferers. The internal care-o-meter just doesn't go up that high. [...] Nobody has one capable of faithfully representing the scope of the world's problems. But the fact that you can't feel the caring doesn't mean that you can't do the caring.

The post also discusses the difference between how much something is worth (it’s worth at least 3 minutes of my time to save the life of a bird affected by an oil spill, and it’s worth months to save the other thousands of birds) and what we should actually do when we can’t possibly do enough (I should probably not spend months cleaning birds, even if I want to). 
 

I also recommend his series on “Replacing Guilt.”

Two posts on war, nuclear winter, and the likelihood of recovery from civilizational collapse by Luisa Rodriguez

Only one author had two posts make it into the list of 14 winning posts. 

What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?

Luisa Rodriguez builds models for how different civilizational catastrophes might lead to extinction and analyzes the likelihood of human extinction in different scenarios by looking into factors like how long certain supplies might last. 

It’s good research, and it’s also good communication about research. For instance, Rodriguez notes the approximate time spent on different sections, which helps readers calibrate how much to trust the numbers. The post is a great demonstration of epistemic legibility; it explains precisely and in detail what points led to which beliefs. This allowed commenters to point out errors in the original post and collectively come to truer conclusions. 

Rodriguez also shares many suggestions for further work on this topic, such as hosting wargames and further building out her models. 

How bad would nuclear winter caused by a US-Russia nuclear exchange be?

This post explores the likely effects of a US-Russia nuclear exchange. Here’s an excerpt: “By my estimation, a nuclear exchange between the US and Russia would lead to a famine that would kill 5.5 billion people in expectation (90% confidence interval: 2.7 billion to 7.5 billion people).”

Kit Harris (who recently helped launch a nuclear security grantmaking program at Longview) writes, in a review that also won a prize:

This was the single most valuable piece on the Forum to me personally. It provides the only end-to-end model of risks from nuclear winter that I've seen and gave me an understanding of key mechanisms of risks from nuclear weapons. I endorse it as the best starting point I know of for thinking seriously about such mechanisms. I wrote what impressed me most here and my main criticism of the original model here (taken into account in the current version).

This piece is part of a series. I found most articles in the series highly informative, but this particular piece did the most excellent job of improving my understanding of risks from nuclear weapons.

Details that I didn’t cover elsewhere, based on recommended topics for reviewers:

How did this post affect you, your thinking, and your actions?

Does it make accurate claims? Does it carve reality at the joints? How do you know?

If you want to produce useful research, this is a good example to learn from.

 

Third prizes ($500 each)

Prizes for reviews

None of this would be possible without reviewers. We’re awarding…

$250 each to

$100 each to

  1. Nuño Sempere’s review of “SHOW: A framework for shaping your talent for direct work”
    1. “This post influenced my own career to a non-insignificant extent. I am grateful for its existence, and think it's a great and clear way to think about the problem. As an example, this model of patient spending was the result of me pushing the "get humble" button for a while. This post also stands out to me in that I've come back to it again and again.”
  2. Adam Gleave’s review of “2017 Donor Lottery Report”
    1. A discussion of the impact of this post, and some suggestions: “The post had less direct impact than I hoped, e.g. I haven't seen much analysis following on from it or heard of any major donations influenced by it. Although I've not tried very hard to track this, so I may have missed it. However, it did have a pretty big indirect impact, of making me more interested in grantmaking and likely helping me get a position on the long-term future fund. Notably you can write posts about what orgs are good to donate to even if you don't have $100k to donate... so I'd encourage people to do this if they have an interest in grantmaking, or scrutinize how good the grants made by existing grantmakers are.”
  3. Jackson Wagner’s review of “The Narrowing Circle”
    1. A highlight: “Here are some other pieces that seem relevant to the thread of ‘investigating what drives moral change’:
      1. AppliedDivinityStudies arguing that moral philosophy is not what actually drives moral progress.
      2. A lot of Slate Star Codex / Astral Codex Ten is about understanding cultural changes.  Here for instance is a dialogue about shifting moral foundations, expanding circles, and what that might tell us about how things will continue to shift in the future.”
  4. Seanrson’s review of “Longtermism and animal advocacy”
    1. “Longtermism and animal advocacy are often presented as mutually exclusive focus areas. This is strange, as they are defined along different dimensions: longtermism is defined by the temporal scope of effects, while animal advocacy is defined by whose interests we focus on. Of course, one could argue that animal interests are negligible once we consider the very long-term future, but my main issue is that this argument is rarely made explicit.”
  5. Evelyn Ciara’s review of “Doing good while clueless”
    1. “I'm heartened to have seen progress in the areas identified in this post. For example, the Effective Institutions Project was created in 2020 to work systematically on [improving institutional decision-making]. Also, I've seen posts calling attention to the inadequacy of existing cause prioritization research.
    2. Going forward, I'd like to see more systematic attempts at cause prioritization from a longtermist perspective.”

Voting results

You can see them here. This describes how voting worked.

Next steps

That’s not all, folks:

Many other awesome posts were nominated, but didn’t get reviewed and didn’t make it to the final stage

See the nominated posts.

We’ll organize some of this content into sequences and collections

And we’ll keep you updated on the Forum.

Thanks to everyone involved, and please continue writing and commenting! 

Huge appreciation to all the authors, editors, commenters, reviewers, voters, readers, etc. 

  1. ^

     We ended up not recruiting judges for the selection of reviews that won prizes, mostly because there just weren’t that many reviews, and we were very time-constrained. 
    Also, this wrap-up is being posted quite late. Sorry about that!

  2. ^

    While I really value the state of mind the post gestures to, I disagree with a more literal interpretation of the message. When describing EA to people who don’t know much about it, I think it’s somewhat misleading to insist that “EA is just a question: ‘How can I do the most good?’” This is almost a motte-and-bailey that can make it harder to criticize the worldview. In practice, the EA community has focus areas, preferred approaches, jargon, and the rest.

  3. ^

    A Forum Prize announcement explained: “This post discusses the fact that we ought to pay more attention when we find ourselves working with whatever data we can scrounge from data-poor environments, and consider other ways of developing our judgments and predictions.”

  4. ^

    (My manager.)


Max_Daniel @ 2022-05-13T18:15 (+52)

The submission in last place looks quite promising to me actually. 

Does anyone know whether Peter Singer is a pseudonym or the author's real name, and whether they're involved in EA already? Maybe we can get them to sign up for an EA Intro Fellowship or send them a free copy of an EA book – perhaps TLYCS?

Zach Stein-Perlman @ 2022-05-13T18:30 (+27)

Peter Singer is originally a character in Scott Alexander's "Unsong," mentioned here (mild spoilers), so it's a pseudonym that's a reference for a certain ingroup.

SiebeRozendal @ 2022-05-14T19:19 (+3)

Maybe we should send a book to all singers named Peter?

https://www.gemtracks.com/guides/view.php?title=most-famous-singers-celebrities-named-peter&id=4861

Max_Daniel @ 2022-05-15T01:45 (+3)

I'm not sure. – Peter Gabriel, for instance, seems to be an adherent of shorthairism, which I'm skeptical of.

Zach Stein-Perlman @ 2022-05-15T02:55 (+4)

You might not feel an instinctive affinity for shorthairists, but try to expand your moral circle!

Ben Pace @ 2022-05-14T17:06 (+7)

Feedback: I tried and failed on my phone to read the voting results ranked by how people voted. I don’t know what weighting is used in the spreadsheet, so the ordering feels monkeyed-with.

Charles He @ 2022-05-14T22:03 (+2)

Can you write a bit more about what you mean? What voting results? Why would it be obvious that you could back this out?

I don’t remember the details, but I remember thinking the quadratic voting formula seemed sort of “underdetermined” and left room for “post-processing”. I read this as the “designer” not being confident and leaving room to get well-behaved results (as opposed to schemes of outright manipulation).

Charles He @ 2022-05-14T22:11 (+2)

Uh, I spent 45 seconds looking at this, but it looks like the final determinative score was created by doubling the >1000 karma weighted vote score and adding it to the <1000 karma weighted vote score.

The above thought might be noise and not what you’re talking about (but this is because the voting formula is admittedly convoluted and not super clearly documented; it reads like quadratic voting passed through a few different hands without a clear owner).
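
(For concreteness, here is a minimal sketch of the 2x + y rule described above, assuming the spreadsheet simply sums the two weighted tallies; the function and variable names are illustrative, not the spreadsheet's actual headers.)

```python
# A minimal sketch of the scoring rule described above: votes from users
# with >1000 karma count double relative to votes from users with <1000
# karma. This is an inference from the spreadsheet, not a documented formula.

def final_score(high_karma_votes: float, low_karma_votes: float) -> float:
    """Combine the two weighted vote tallies into a single final score."""
    return 2 * high_karma_votes + low_karma_votes

# Example: 30 points from >1000-karma voters and 12 points from
# <1000-karma voters gives 2 * 30 + 12 = 72.
print(final_score(30, 12))  # 72
```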

Ben Pace @ 2022-05-15T00:41 (+7)

Took me a while to find where you got your 2x+y from; I see it's visible if you highlight the cells in the sheet.

Here's a sheet with the score as sorted by the top 1k people, which is what I was interested in seeing: https://docs.google.com/spreadsheets/d/1VODS3-NrlBTnSMbGibhT4M2FpmfT-ojaPTEuuFIk9xc/edit?usp=sharing

Raemon @ 2022-05-14T18:05 (+5)

I'd find it helpful with the spreadsheet to also have people's usernames listed beside the post.

QixiSail @ 2022-06-02T06:00 (+1)

Reading the title of this post, I thought it was a decade review of the effective altruism movement. Are any of the EA orgs working on that?