My evaluations of different domains of Effective Altruism
By richard_ngo @ 2021-01-15T23:15 (+29)
This post is a follow-up to my earlier post, Clarifying the Core of Effective Altruism, which you should read first; here I give my own views on how we should evaluate the different versions of EA that I outlined there.
I personally think that EA as social science is vulnerable to the criticism that systemic change is more important even though it often can’t be studied in rigorous quantitative ways. In particular, even when the effects of a systemic change can be studied in these ways (e.g. the effects of prison reform), the methods for bringing it about often can’t be (e.g. because the political climate is constantly changing). And although I agree that it’s valuable to “have recommendations that others can fairly easily be confident in” (a quote from GiveWell’s website), I think most people are happy to believe arguments that fall well short of academic standards of rigour, so I don’t see why EA should encourage them to tighten their standards. In fact, GiveWell seems to apply not just academic standards, but standards significantly higher than those of most development economists (who often recommend policies that aren’t backed by RCTs). However, OpenPhil's conclusion that "GiveWell's top charities are increasingly hard to beat" is significant evidence in favour of EA as social science (although it's hard to know how much to trust the back-of-the-envelope calculations they mention).
I think that, these days, it would be misleading to explain EA without touching on hits-based altruism (or an equivalent concept). Although the types of reasoning involved are much less rigorous than in EA as social science, I think that when reasoning about heavy-tailed outcomes, even rough reasoning in the right direction (e.g. answering questions like “how many people could this plausibly affect?” and “how much could it plausibly affect them?”) can get you a long way. There’s also a strong argument that altruists should be unusually willing to embrace risk - because benefiting more people doesn’t have diminishing marginal returns in the way that benefiting yourself does. However, note that quick feedback loops are vital to entrepreneurs and other pioneers, whereas altruists often don’t have access to them, which may reduce our confidence in EA as hits-based altruism.
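To see why linear returns favour risk-taking, here’s a minimal worked example (the utility functions and numbers are illustrative assumptions of mine, not anything from the posts above). Compare a certain payoff of 100 with a 10% chance of 1000: an individual with logarithmic utility in their own wellbeing strongly prefers the sure thing, whereas an altruist whose impact is linear in the number of people helped is exactly indifferent - and would prefer the gamble if its upside were any larger:

$$u(x) = \ln x: \quad 0.1 \cdot \ln(1000) \approx 0.69 \;<\; \ln(100) \approx 4.61$$

$$v(n) = n: \quad 0.1 \cdot 1000 = 100 = v(100)$$

On this picture, the heavier the tail of possible outcomes, the more the linear altruist should favour the risky bet.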
I think that EA as trajectory change is promising, but I can understand why others don’t. Although each of the examples of trajectory change I gave in the previous post started with only a handful of outstanding and committed people, those people were often the best of the best, and even then are perhaps only remembered because of selection effects. Is it really reasonable to try to emulate them? Furthermore, we have many examples of well-intentioned revolutions that went badly, reminding us that big changes can have unintended negative consequences.
I think there are four key reasons to be optimistic about EA as trajectory change. Firstly, some individual ways of changing humanity’s trajectory seem compelling on the object level, even given how uncertain we should be about this domain in general. The four issues I’d put into this category are moral circle expansion, decreasing existential risk from biological warfare, decreasing existential risk from nuclear warfare, and preparation for the development of advanced AI.
Secondly, it doesn’t seem unreasonable to think that we are at a special point in human history - the “hinge of history”. Humanity has only very recently become an industrial species; our economic and technological growth is vastly faster than ever before. Over the next few centuries we seem likely to become a spacefaring species, and to gain the ability to spread out across the universe. This makes it more plausible that it’s unusually easy for people alive now to have a very large influence over the very long-term future; however, it doesn’t ameliorate the concern that it’ll be hard to influence these pivotal events if they’re still a few centuries away.
Thirdly, it seems that very few people are actually trying to think about or influence the trajectory of the world on timescales of a century or longer. So even though doing so still seems ridiculously hard, there may be some relatively low-hanging fruit for people who really care about it.
Fourthly, it may not be necessary to actually start a change as big as the ones outlined previously. It may instead be possible to steer an ongoing change so that it has more long-term value. For example, climate change has made talk about “the rights of future generations” much more common - it seems like this could be a valuable mechanism for raising more general longtermist concerns.
(Note that I’m not claiming that the examples of trajectory change I gave in the previous post have predictably shifted humanity’s trajectory over thousands or tens of thousands of years - that claim seems pretty hard to defend. Some people pursue a more strongly longtermist version of “EA as trajectory change” which relies on such claims, but I think it’s pretty reasonable to put effects on the scale of centuries in the same category.)
However, we really shouldn’t underestimate how difficult EA as trajectory change is. I think we’ll need to keep improving significantly as a movement in order to actually have a big impact of this type - in particular by doing more novel thinking, and more work exploring the assumptions underlying our beliefs. EA as trajectory change might require founding a major new academic field, for example. I explore some ways EA can improve in this post.