CEA grew a lot in the past year

By MaxDalton @ 2021-11-05T09:01 (+84)

For CEA's Q3 update, we're sharing multiple posts on different aspects of our work.

Over the past year, we’ve doubled down on the strategy we set out last year. The key metrics we were targeting increased significantly (often more than doubling), and we made many strong hires to nearly double our headcount.

So, unless you’ve been paying a lot of attention, CEA is probably somewhat different from what you think.[1]

Our strategy

We think that humanity will have a better chance of surviving this century, and sharply reducing present suffering, if there are many more highly-engaged EAs (“HEAs”). By this, we mean people who are motivated in part by an impartial care for others[2], who are thinking very carefully about how they can best help others, and who are taking some significant actions to help (most likely through their careers).[3]

In the recent past, people we’d consider “highly engaged” have done a lot to improve human lives, reduce the suffering of animals, develop our understanding of risks from emerging technologies, and build up the effective altruism community.

To increase the number of HEAs working on important problems, we are nurturing discussion spaces: places where people can come together to discuss how to effectively help others, and where they can motivate, support, and coordinate with each other.

In particular, we do this via university groups and conferences, both of which have a strong track record of getting people deeply interested in EA ideas, and then helping them find ways to pursue impactful work (as evidenced, for instance, by OpenPhil’s recent survey).

Recent progress

Some recent progress:

We think this type of progress is critical, because it means that more people are being exposed to and then engaging deeply with the ideas of effective altruism. We are in the process of assessing how well this progress has translated into more people taking action to help others in the last year, but given previous data, we expect to see a strong connection between these figures and the number of people who proceed to work on important problems.

As for CEA’s internal progress:

Mistakes and reflections

I think that the key specific mistakes we made during this period were:

I also plan to spend part of the next few months reflecting on questions like:

If you are interested in helping us, let me know: finding the right people to hire will help us move forward on many of these improvements, and we’re always keen to diversify our funding base.


  1. This probably applies to most organizations you’re not tracking closely, but I think the scale of change is maybe greater with CEA. ↩︎

  2. Without regard to factors like someone’s nationality, birthdate or species, except insofar as those things might actually be morally relevant. ↩︎

  3. For each of these attributes, we set quite a high bar. And when we evaluate whether we’d think of someone as “highly engaged”, we either interview them or look for other strong evidence (such as their having been hired by an organization with high standards and a strong connection to the EA movement). ↩︎


HaydnBelfield @ 2021-11-06T12:44 (+33)

Congratulations on this growth, really exciting!

Have you thought about including randomisation to facilitate evaluation?

E.g. you could include some randomisation in who gets invited to events (of those who applied), which universities/cities get organisers (of those on the shortlist), etc. This could also be done with 80k coaching calls - dunno if it has been tried.

You then track who did and didn't get the treatment, to see what effect it had.  This doesn't have to involve denying 'treatment' to people/places - presumably there are more applicants than there are places - you introduce randomisation at the cutoff.

This would allow some causal inference (RCT/Randomista-style: does x cause y, etc.) as to what effect these treatments are having (vs. the control, and the null hypothesis of no effect). This could help justify impact to the community and funders. I'm sure people at e.g. JPAL, Rethink, etc. could help with research design.
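To make the "randomisation at the cutoff" idea concrete, here is a minimal Python sketch. It is purely illustrative: the function name, the size of the randomisation window, and the inputs are hypothetical, not a description of anything CEA or 80k actually does.

```python
import random

def admit_with_marginal_lottery(ranked_applicants, n_places, n_randomized, seed=0):
    """Admit the clear top applicants outright, then fill the last few places
    by lottery among applicants near the cutoff, keeping the unlucky half of
    that window as a comparison group for later evaluation."""
    rng = random.Random(seed)
    n_guaranteed = n_places - n_randomized
    guaranteed = ranked_applicants[:n_guaranteed]

    # The randomisation window: applicants who could plausibly have fallen on
    # either side of the cutoff. Its width is a design choice.
    window = ranked_applicants[n_guaranteed:n_guaranteed + 2 * n_randomized]
    treated = rng.sample(window, n_randomized)
    control = [a for a in window if a not in treated]

    # Record both lists; later, compare outcomes (engagement, career changes,
    # etc.) between `treated` and `control` to estimate the causal effect of
    # admission for marginal applicants.
    return guaranteed + treated, treated, control
```

Because the treated and control groups are drawn at random from the same window, any later difference in their outcomes can be attributed to the programme rather than to selection.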

Pablo @ 2021-11-06T19:36 (+20)

I support this idea and have mentioned it previously (e.g. here and here).

> This doesn't have to involve denying 'treatment' to people/places - presumably there are more applicants than there are places - you introduce randomisation at the cutoff.

I'm not sure I understand your proposal correctly. To take a concrete example, say 80k gets 500 coaching requests per year and they only have the capacity to coach 250 people. Presumably they select the 250 people they think are most promising, whereas a randomized study would select 250 people randomly and use the remaining 250 as a control. In a sense, this does not involve denying treatment to anyone, since the same number of people (though not the same people) receive coaching, but it does involve a cost in expected impact, which is what matters in this case (and presumably in most other relevant cases—it would be surprising if EA orgs were not prioritizing when they are unable to allocate a resource or service to everyone who requests it). I think the cost is almost certainly justified, given that no randomized studies have been conducted so far and the existing methods of evaluation are often highly speculative, but this doesn't mean that there are no costs. But as noted, I may be misunderstanding you.

If one is still concerned about the costs, or if randomization is infeasible for other reasons, an alternative is to use a quasi-experimental approach such as a regression discontinuity design. Another alternative is to have a series of Metaculus questions on what the results of the experiment would be if it were conducted, which can be informative even if no experiment is ever conducted.
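For the regression discontinuity alternative, the estimation step might look roughly like the sketch below. This is illustrative only: the data are simulated, and the cutoff, bandwidth, and outcome measure are assumptions standing in for whatever an org actually scores and tracks.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: reviewer scores (the running variable) and a later
# outcome (e.g. some engagement measure), with admission at score >= 0.
rng = np.random.default_rng(0)
score = rng.normal(size=1000)
treated = (score >= 0).astype(float)
outcome = 0.5 * score + 0.3 * treated + rng.normal(scale=0.5, size=1000)

# Local linear regression within a bandwidth around the cutoff,
# allowing different slopes on each side of it.
bandwidth = 0.5
near = np.abs(score) <= bandwidth
X = sm.add_constant(np.column_stack([
    treated[near],                 # treatment indicator
    score[near],                   # running variable (cutoff already at 0)
    score[near] * treated[near],   # different slope above the cutoff
]))
fit = sm.OLS(outcome[near], X).fit()
print("Estimated jump at the cutoff:", fit.params[1])
```

The estimated "jump" at the cutoff is the treatment effect for applicants right at the margin, which is exactly the group whose admission decisions are closest to arbitrary.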

ShayBenMoshe @ 2021-11-07T13:47 (+16)

I just want to add, on top of Haydn's comment to your comment, that:

  1. You don't need the treatment and the control group to be of the same size, so you could, for instance, randomize among the top 300 candidates.

  2. In my experience, when there isn't a clear metric for ordering, it is extremely hard to make clear judgements. Therefore, I think that in practice it is very likely that, say, places 100-200 in their ranking will seem very similar.

I think that these two factors, combined with Haydn's suggestion to take the top candidates and exclude them from the study, make randomization very reasonable and very low-cost.

Nathan Young @ 2021-11-05T16:20 (+21)

What's the argument against CEA being 10x its current size? I.e. why is this the right size to stick at?

Is there research on what the value of HEAs is, and why the current amount of money is the right amount to spend finding them?

MaxDalton @ 2021-11-05T16:49 (+17)

I think you're assuming that we're planning to stick at this size! I think we'll continue to grow at least somewhat beyond this scale, but I'm not yet confident that 10x would still be cost-effective (in terms of aligned labour).

There is some research on the value of HEAs, but unfortunately it's not mine so I can't share it. Right now, I'm not particularly concerned that the financial costs of CEA aren't repaid via the number of HEAs we help find. I think that the main thing stopping us from creating more HEAs is probably not funding: it's talent and the ability to coordinate that talent without things breaking as we grow. (Funding is helpful for us to diversify our funding base and be more stable.)

Nathan Young @ 2021-11-05T19:18 (+2)

It feels like CEA was avoiding growth previously and has now started growing. Am I wrong about that? If not, what changed?

MaxDalton @ 2021-11-05T19:27 (+3)

You're right that growth was flatter in previous years (though a lot of metrics - e.g. Forum metrics - grew a lot in 2020 too).

On an organizational level, we consolidated in 2019, figured out our strategy and narrowed our scope in 2020. At the beginning of 2021 we had a clear strategy and we got more data on our impact from OP's survey. That made me confident that we should switch into expansion mode (in terms of headcount).

More strategically, I think the community is now better set up to accommodate growth - e.g. many more of the core ideas are written up and shared widely, and there are more orgs doing a lot of hiring. So I think we can grow the number of people in the community somewhat quicker at a given quality level than we could in 2018. I don't think the community should grow too quickly, but I think we should grow more quickly than we did in the last couple of years.

Nathan Young @ 2021-11-05T20:51 (+7)

So I think the thing I don't understand is why you think we shouldn't grow the community too quickly. Why is this the right level?

And thanks for being so generous with your time here.

MaxDalton @ 2021-11-06T06:49 (+7)

Ah, maybe I was confused because "level" sounded like "total size" to me, whereas I think you mean "why is this rate of growth right?". Is that right?

My current best guess is that we should be targeting roughly 40% growth, which is quite a bit faster than Ben Todd's estimates for previous years. (This is growth of highly-engaged EAs: I think we could grow top of funnel or effective-giving-style brands more quickly.)

The main reason that I think we shouldn't grow too much quicker than this is that I think there are some important things (ways of thinking, norms, some of the fuzzier and cutting edge research areas) that are best transferred via apprenticeships of some sort (e.g. taking on a junior role at an org, getting mentorship, doing a series of internships). If you think it takes a couple of years of apprenticeship before people are ready to train others themselves, then this puts a bit of an upper limit on growth. And if we grow too much faster than that, I worry that some important norms or ways of thinking (e.g. really questioning your beliefs, reasoning transparency, collaborative discussion norms) don't get passed on, which significantly reduces the value of the community's work.

The main reason that I think, despite that, we should grow at about 40% (which is pretty quick compared to the past) is that if we grow too much slower than this, I just don't see us reaching the sort of scale that we might need to address the problems we're facing (some of which have deadlines, maybe in a decade or two).

Ozzie Gooen @ 2021-11-05T23:08 (+14)

I'm quite happy to see the progress here. Kudos to everyone at CEA for having been able to scale it without major problems yet (that we know of). I think I've been pretty impressed by the growth of the community; intuitively I haven't noticed a big drop in average quality, which is obviously the thing to worry about with substantial community growth.

As I previously discussed in some related comment threads, scaling CEA (and other EA organizations in general) seems quite positive to me. I prefer this to trying to get tons of tiny orgs, in large part because I think the latter seems much more difficult to do well. That said, I'm not sure how much CEA should try to scale over the next few years; 2x/year is a whole lot to sustain, and over-growth can of course be a serious issue. Maybe 30%-60%/year feels safe, especially if many members are siloed into distinct units (as seems to be happening).

Some random things I'm interested in for the future:

Also, while there's much to like here, I'd flag that the "Mistakes" seem pretty minor? I appreciate the inclusion of the section, but for a team with so many people and so many projects, I would have expected more to go wrong. I'm sure you're excluding a lot of things, but am not sure how much is being left out. I could imagine that maybe something like a rating would be more useful, like, "we rated our project quality 7/10, and an external committee broadly agreed". Or, "3 of our main projects were particularly poor, so we're going to work on improving them next time, but it will take a while."

I've heard the criticism before that "mistakes" pages can make things less transparent, not more (because they give the illusion of transparency), and that argument comes to mind.

I don't mean this as anything particularly negative, just something to consider for next time.

MaxDalton @ 2021-11-06T07:17 (+13)

Thanks! Some comments:

  • Yeah, I agree 2x is quite a lot! We grew more this year because I think we were catching up with demand for our projects. I expect more like 50% in the future.
  • Is there a strong management culture? I think there is: I've managed this set of managers for a long while, and we regularly meet to discuss management conundrums, so I think there's a shared culture. We also have shared values, and team retreats to sync up together. But each manager also has their own take, and I think that is leading to different approaches to e.g. project management or goal setting on each team (but not yet to conflict).
  • Are managers improving? Broadly, I think they still are! For each of them, there's generally some particular area they're focused on improving via feedback or mentorship. But I also think that we're all just getting extra years of management under our belt, and that helps a lot. I think we're still interested in also bringing in people with management experience or aptitude, to help us keep scaling.
  • People who are a good fit for CEA: One thing that I think people haven't fully realized is that we're a remote-first org. So if you can't find EA jobs nearby, we might be a good fit. I'm particularly interested in hiring ambitious, agile, user-focused people right now. You can read a lot more on our careers page.
  • I have recently been talking to some people who are interested in setting up new projects that are adjacent to or complementary to our current work, and we're exploring whether some of those could be a part of CEA. So I'm open to that, but the current things are in their early stages. If you are interested in setting up a new thing, and you think it might be better as part of CEA, feel free to get in touch and we can explore that. I think the key reason it might be better at CEA is if it fits in really closely with our current projects, or if there are synergies (e.g. you want to build off Forum tech or do something in the groups space).
  • Re cults/scandals at local groups: I agree that this is a risk. We hope that with more group calls we might catch some of this, but ultimately it's hard to vet all local groups. I'd encourage anyone who has concerns about a group or individual to consider reaching out to Julia Wise.
  • Re mistakes: Those do feel like the biggest ones that directly harmed our outside work. Then I think there were a lot of cases where we could have moved a bit more quickly, or taken on an extra thing that really mattered, or made a slightly better decision. Those really matter too - maybe more than the things that look more like "mistakes" - but it's often a bit hard to write them up cleanly. I guess I think that this post overall gives an accurate summary of the balance of successes vs. harm-causing mistakes, but it's not comprehensive about either. And then it might under-weight all of the missed opportunities. (Our mistakes page has that disclaimer ("not comprehensive") at the top, but I expect people still sometimes see it as comprehensive.)
jared_m @ 2021-11-05T15:44 (+7)

If you are looking to donate to CEA, the Every.org donation matching program still has $60K in matching funds available (for a 1:1 match up to $100 [USD]). 

No time like the present to convert 100 USD for CEA into 200 USD! The link to CEA's giving page on Every.org is here.