Three intuitions about EA: responsibility, scale, self-improvement
By richard_ngo @ 2022-04-15T07:55 (+196)
This is a post about three intuitions for how to think about the effective altruism community.
tl;dr:
- In a global sense, there are no “adults in the room,” but EA is starting to change that.
- It's easier to achieve big changes by thinking about the overall magnitude of impact rather than marginal efficiency.
- EA culture should prioritise self-improvement and skill-building, e.g. by replacing some local reading groups with working groups.
Part 1: responsibility
The first intuition is that, in a global sense, there are no “adults in the room”. Before covid I harboured a hope that despite the incessant political squabbling we see worldwide, in the face of a major crisis with global implications there were serious people who would come out of the woodwork to ensure that it went well. There weren’t. And that’s not just a national phenomenon, that’s a global phenomenon. Even countries like New Zealand, which handled covid incredibly well, weren’t taking responsibility in the global way I’m thinking about - they looked after their own citizens, but didn’t try to speed up vaccine distribution overall (e.g. by allowing human challenge trials), or fix everyone else’s misunderstandings.
Others developed the same “no adults in the room” intuition by observing failures on different issues. For some, AI risk; for others, climate change; for others, policies like immigration or housing reform. I don’t think covid is a bigger failure than any of these, but I think it comes much closer to creating common knowledge that the systems we have in place aren’t capable of steering through global crises. This naturally points us towards a long-term goal for the EA community: to become the adults in the room, the people who are responsible enough and capable enough to steer humanity towards good outcomes.
By this I mean something different from just “being in charge” or “having a lot of power”. There are many large power structures, containing many competent people, which try to keep the world on track in a range of ways. What those power structures don’t have is the ability to absorb novel ideas and take novel actions in response. In other words, the wider world solves large problems via OODA (observe, orient, decide, act) loops that take decades. In the case of climate change, decades of advocacy led to public awareness, which led to large-scale policies, plus significant reallocation of talent. I think this will be enough to avoid catastrophic outcomes, but that’s more from luck than skill. In the case of covid, the OODA loop on substantially changing vaccine regulations was far too long to make a difference (although maybe it’ll make a difference to the next pandemic).
The rest of the world has long OODA loops because people on the inside of power structures don’t have strong incentives to fix problems; and because people on the outside can’t mobilise people, ideas and money quickly. But EA can. I don’t think there’s any other group in the world which can allocate as much talent as quickly as EA has; I don’t think there’s any other group which can identify and propagate important new ideas as quickly as EA can; and there are few groups which can mobilise as much money as flexibly.
Having said all that, I don’t think we’re currently the adults in the room, or else we would have made much more of a difference during covid. While covid wasn’t itself a central EA concern, it’s closely related to one of our central concerns, and would have been worth addressing for reputational reasons alone. But I do think we were closer to being the adults in the room than almost any other group - particularly in terms of long-term warnings about pandemics, short-term warnings about covid in particular, and converging quickly towards accurate beliefs. We should reflect on what would have been needed for us to convert those advantages into much more concrete impact.
I want to emphasise, though, that being the adults in the room doesn’t require each individual to take on a feeling of responsibility towards the world. Perhaps a better way to think about it: every individual EA should take responsibility for the EA community functioning well, and the EA community should take responsibility for the world functioning well. (I’ve written a little about the first part of that claim in point four of this post.)
Part 2: scale, not marginalism
Historically, EA has thought primarily about the marginalist question of how to do the most good per unit of resources. An alternative, which is particularly natural in light of part 1, is to simply ask: how can we do the most good overall? In some sense these are tautologically equivalent, given finite resources. But a marginalist mindset makes it harder to be very ambitious - it cuts against thinking at scale. For the most exciting projects, the question is not “how effectively are we using our resources?”, but rather “can we make it work at all?” - where, if it does work, it’ll yield a huge return on any realistic amount of investment we might muster. This is basically the startup investor mindset; and the mindset that focuses on megaprojects.
Marginalism has historically focused on evaluating possible projects to find the best one. Being scale-focused should nudge us towards focusing more on generating possible projects. On a scale-focused view, the hardest part is finding any lever which will have a big impact on the world. Think of a scientist noticing an anomaly which doesn’t fit into their existing theories. If they tried to evaluate whether the effects of understanding the anomaly would be good or bad, they’d find it very difficult to make progress, and maybe stop looking. But if they approach it in a curious way, they’re much more likely to discover levers on the world which nobody else knows about; and then this allows them to figure out what to do.
There are downsides of scaling, though. Right now, EA has short OODA loops because we have a very high concentration of talent, a very high-trust environment, and a small enough community that coordination costs are low. As we try to do more large-scale things, these advantages will slowly diminish; how can we maintain short OODA loops regardless? I’m very uncertain; this is something we should think more about. (One wild guess: we might be the one group best-placed to leverage AI to solve internal coordination problems.)
Part 3: self-improvement and growth mindset
In order to do these ambitious things, we need great people. Broadly speaking, there are two ways to get great people: recruit them, or create them. The tradeoff between these two can be difficult - focusing too much on the former can create a culture of competition and insecurity; focusing too much on the latter can be inefficient and soak up a lot of effort.
In the short term, it seems like there is still low-hanging fruit when it comes to recruitment. But in the longer term, my guess is that EA will need to focus on teaching the skillsets we’re looking for - especially when recruiting high school students or early undergrads. Fortunately, I think there’s a lot of room to do better than existing education pipelines. Part of that involves designing specific programs (like MLAB or AGI safety fundamentals), but probably the more important part involves the culture of EA prioritising learning and growth.
One model for how to do this is the entrepreneurship community. That’s another place where returns are very heavy-tailed, and people are trying to pick extreme winners - and yet it’s surprisingly non-judgemental. The implicit message I get from them is that anyone can be a great entrepreneur, if they try hard enough. That creates a virtuous cycle, because it’s not just a good way to push people to upskill - it also creates the sort of community that attracts ambitious and growth-minded people. I do think learning to be a highly impactful EA is harder in some ways than learning to be a great entrepreneur - we don’t get feedback on how we’re doing at anywhere near the same rate entrepreneurs do, so the strategy of trying fast and failing fast is much less helpful. But there are plenty of other ways to gain skills, especially if you’re in a community which gives you support and motivation to continually improve. (One concrete suggestion: instead of EA student groups running many reading groups, I'd be keen to see them running more working groups - e.g. doing practical ML projects, or writing reports on relevant topics. Most of the EA students I know would be capable of doing novel work along these lines, especially if they worked together.)
Lauren Reid @ 2022-04-16T13:18 (+11)
Thanks for your post. My biggest surprise from the pandemic was the failure of the institutions I had always trusted, which was deeply disappointing. I realized we were more competent than the decision-makers, and we took on additional responsibility. My husband, Alex D, is an epidemiologist working in risk assessment/early warning. In early 2020 he was trying to warn our (Canadian) government about covid, and did ‘crazy’ things like refusing to shake people’s hands and bringing a basket of handmade masks to work at the emergency operations centre. He’s now had 4 promotions since 2020 and is in the private sector, so at least people recognized an excellent decision-maker after the fact. As a physician, I took on more responsibility at the hospital: I tried to overturn rules such as the one forbidding us to open the windows (for better ventilation), and I told my staff to wear masks - and wore one myself - when it ‘wasn’t necessary’. As mid-career professionals in institutions, we were relatively well positioned to make changes, and the institutions were still frustratingly non-responsive. Ideally, we’d have a relay team of rational ‘adults’ in a position to effect change, with even more power than we had, each taking turns sprinting. In my experience, some competent people have risen up through the ranks, while others have burned out - turns out, adulting is exhausting. I agree: after living through these institutional failures, the EA community gives me hope.
Mauricio @ 2022-04-15T09:41 (+9)
Thanks for this! It feels like this post also reflects an important meta-level intuition (one that also happens to be an application of your point about scale). This is the intuition that, although movement-building conversations often focus on tactics and marginal improvements, we can think ambitiously about high-level goals for the community.
Holly Morgan @ 2022-04-29T11:14 (+8)
I loved this post but ignored it the first time I saw it because I had a poor sense of what it would be about. The title does act as a nice summary once someone has read the post and is trying to find it again, though. Have you considered adding a tl;dr? E.g.
- In a global sense, there are no “adults in the room,” but EA is starting to change that
- It's easier to achieve big change with a startup investor mindset than a marginalist mindset
- EA should prioritise personal growth e.g. replace some local reading groups with working groups
richard_ngo @ 2022-05-13T01:23 (+2)
Great suggestion! I've added this now.
Max Clarke @ 2022-04-22T02:02 (+4)
"We might be the one group best-placed to leverage AI to solve internal coordination problems".
I listened to this post via the Nonlinear Library, and I claim that's already an example of this.
nmulani @ 2022-04-17T19:59 (+4)
Very thought-provoking, thanks for sharing. I agree with the observation that EA is facing a tricky challenge of maintaining a high-trust, agile approach while attempting to build a broader talent pipeline and expand scope for impact.
While these networks grow, filtering for "great people" feels like a critical element to maintaining EA's distinctive approach. The "growth mindset" matters much more than more superficial characteristics. I appreciate the call-out to the entrepreneurial community: If many of the most successful entrepreneurs reach a peak in their mid-40s after experiencing multiple failures, how could/should that inform EA's future approaches to engaging and retaining talented people in the community?
Dewi Erwan @ 2022-04-15T11:58 (+4)
Great post, thanks for writing it!
I'm really excited for more people in the EA community to ask questions like "What would the world look like if we'd solved problem X? How can we make that world a reality? What team do we need to build to achieve this goal over a decade-long time horizon?" - as opposed to focusing predominantly on what's best to do given the resources and capabilities one currently has, working on independent projects, and working on projects for short periods of time.