Improving the EA-aligned research pipeline: Sequence introduction

By MichaelA🔸 @ 2021-05-11T17:57 (+63)

This post doesn’t necessarily represent the views of my employers.

There are many people who have the skills and desire to do EA-aligned research, or who could develop such skills via some experience, mentorship, or similar.[1]

There are many potentially high-priority open research questions that have been identified.

And there are many funders who would be happy to pay for high-quality research on such questions.

Sounds like everything must be lining up perfectly, right?

In my view,[2] the answer is fairly clearly “No”, and getting closer to a “Yes” could be very valuable. The three ingredients mentioned above do regularly combine to give us new, high-quality research and researchers, but:

In this sequence, I try to:

  1. Provide a clearer description of what I see as the “problem”, its drivers, and its consequences
  2. Outline some goals we might have when designing interventions to improve the EA research pipeline
  3. Give an overview of 18 intervention options that seem worth considering[3]
  4. Describe one of those intervention options in more detail, in the hope that this leads either to a good argument against that option or to someone actually building it.

Target audience

This sequence is primarily intended to inform people who are helping implement or fund interventions to improve the EA-aligned research pipeline, or who could potentially do so in future.

This sequence may also help people who themselves hope to “enter” and “progress through” the EA-aligned research pipeline.

Epistemic status / caveats for the sequence

I’m confident that these posts will usefully advance an important discussion. That said, I expect my description of the “problem” and my list of “goals” could be at least somewhat improved. And it’s possible that some of my ideas for solutions are just bad and/or that I’ve missed some other, much better ideas.

I’ve done ~6 FTE months of academic research (producing one paper) and ~11 FTE months of research at EA orgs. My framings and suggestions are probably somewhat skewed towards:

I've spent roughly 50 hours actually writing, editing, or talking about these posts. Additionally, the topics they address are probably among the 3-10 things I’ve spent the most time thinking about since early 2020. That said, there are various relevant bodies of evidence and literature that I haven’t dived into, such as metascience.

It also seems worth saying explicitly that:

Related previous work

I am far from the first person to discuss this cluster of topics. The following links may be of interest to readers of this post, and some of them informed my own thinking substantially:

And here are some links that are somewhat relevant, but less so:

I also previously touched on related issues in my post A central directory for open research questions.

Acknowledgements

For comments on earlier drafts of one or more of these posts, I’m grateful to Nora Ammann, Edo Arad, Jungwon Byun, Alexis Carlier, Ryan Gourley, David Janků, Julian Jamison, Peter Hurford, David Moss, David Reinstein, and Linch Zhang. For earlier discussions that informed or may have informed these posts, I’m grateful to many of the same people and to Ryan Briggs, Stanislava Fedorova, Ozzie Gooen, Alex Lintz, Amanda Ngo, Jason Schukraft, and Jesse Shulman. In some places, I’m directly drawing on or remixing specific ideas from one or more of these people. That said, these posts do not necessarily represent the views of any of these people.


  1. For example, Rethink Priorities recently received ~665 applications for a summer research internship program, with only ~10 internship slots available. Given the limited slots, we had to reject many applicants at stage 2 who seemed potentially quite promising, and reject some candidates at stage 3 whom we were fairly confident we’d have been happy to hire if we’d had somewhat more funding and management capacity. ↩︎

  2. I think this also matches the views of many other people; see “Related previous work”. ↩︎

  3. Yes, 18. Things got a little out of hand.

    My original draft of this post briefly summarised those intervention options, but some commenters suggested that I refrain from mentioning potential solutions till readers had read and thought more about the problems and goals we’re aiming to solve. See also Hold Off On Proposing Solutions. ↩︎


mnoetel @ 2021-05-12T23:21 (+10)

Great initiative @MichaelA. I'm not sure what a 'sequence' does, but I assume this means there'll be a series of related posts to follow, is that right?

MichaelA @ 2021-05-13T06:53 (+3)

Yeah, I think it's basically EA Forum / LessWrong jargon for "series of posts". 

Sequences are collections of posts on a common theme, or that build on each other. They help authors to develop ideas in ways that would be difficult in a single post. You can also add posts written by other people to a sequence if you think they should be read together. [source]

There are 4 more posts to come in this sequence, plus ~2 somewhat related posts that I'll tack on afterwards, one of which I've already posted: Notes on EA-related research, writing, testing fit, learning, and the Forum

mnoetel @ 2021-05-21T05:45 (+1)

Perfect, thanks!

MichaelA @ 2021-05-11T18:38 (+10)

I’m not fully satisfied with the label I’m currently using for this topic/effort and this sequence. Here are some alternatives that I considered or that other people suggested:

(That's in roughly descending order of how much I like them. And of course I prefer the label I'm actually using at the moment.)

BrianTan @ 2021-05-12T01:15 (+6)

I think the current title of the sequence is fine and probably better than the rest of the alternatives you listed!

MichaelA @ 2021-06-30T09:23 (+3)

Luke Muehlhauser recently published a new post that's also quite relevant to the topics covered in this sequence: EA needs consultancies

See also his 2019 post Reflections on Our 2018 Generalist Research Analyst Recruiting.

Linch @ 2021-06-13T10:11 (+2)

I briefly discussed this with MichaelA offline, but I'm interested in which "pipe" in the pipeline this sequence is primarily covering, and also which pipe it should primarily cover.

A central example* of the EA-aligned research pipeline might look something like 

get interested in EA -> be a junior EA researcher -> be an intermediate EA researcher -> be a senior EA researcher.

As a junior EA researcher, I've mostly been reading this sequence as being about the first pipe in this pipeline:

get interested in EA -> be a junior EA researcher

However, I don't have a principled reason to believe that this is the most critical component of the EA research pipeline, and I can easily think of strong arguments for focusing on later stages.

There's a related question that's pretty decision-relevant for me: I probably should have some principled take on what fraction of my "meta work-time" ought to be allocated to "advising/giving mentorship to others" vs "seeking mentorship and other ways to self-improve on research."

*Though this is not the only possible pipeline; e.g., maybe we could instead recruit senior researchers directly.

MichaelA @ 2021-06-13T11:17 (+2)

There's a related question that's pretty decision-relevant for me: I probably should have some principled take on what fraction of my "meta work-time" ought to be allocated to "advising/giving mentorship to others" vs "seeking mentorship and other ways to self-improve on research."

Yeah, I agree that this is an important concrete question, and unfortunately I don't have much in the way of useful general-purpose thoughts on it, except:

  • Mentorship/management is a really important bottleneck in EA research at the moment and seems likely to remain so; as a result, testing or improving fit for it may be more important than one would think by default
  • But presumably one would sometimes improve as a mentor/manager more by just getting better at one's own object-level work than by trying to work on mentorship/management specifically?
    • I don't know how often that's the case, but people should consider that hypothesis.
  • People should obviously consider the specifics of their situation, indications of what they're a good fit for, etc.

(It seems possible to work out more specific and detailed advice than that. I'd be keen for someone to do that, or to find and share what's already been worked out. I just haven't done it myself.)

MichaelA @ 2021-06-13T11:09 (+2)

FWIW, I think this sequence is intended to be relevant to many more "pipelines" than just that one (if we make "pipeline" a unit of analysis of the size you suggest), such as:

  • Getting junior, intermediate, or senior researchers to be more EA-aligned and thereby do higher priority research and maybe do it better (since one's worldview etc. could also influence many decisions smaller than what topic/question to focus on)
  • Getting junior, intermediate, or senior researchers to be more EA-aligned and thereby in various ways support more and better research on high priority topics (e.g., by providing mentorship)
  • Getting junior, intermediate, or senior researchers to do higher priority research without necessarily being more EA-aligned
    • E.g., through creating various forms of incentives or capturing the interest of not-very-aligned people
    • E.g., through making it easier for researchers who are already quite EA-aligned to do high priority research, such as by making research on those topics more academically acceptable and prestigious
  • Improving the pace, quality, dissemination, and/or use of EA-aligned research
    • E.g., helping people who would do EA-aligned research to do it using better tools, better mentorship, better resources, etc.
    • (This sequence doesn't say much about dissemination or use, and I think that's a weakness of the sequence, but they're in theory "in-scope")

I think there are basically a lot of pipelines that intersect and have feedback loops. I also think someone can "specialise" in learning about this whole web of issues and developing interventions for them, and that many interventions could help with multiple pipes/steps/whatever.

I think that this might sound frustratingly "holistic" and vague, rather than analytical and targeted. But I basically see this sequence as a fairly "bird's-eye view" perspective that contains many specifics within it. And as I say in the third post:

When you're currently designing, evaluating, and/or implementing an intervention for improving aspects of the EA research pipeline, you should of course also think for yourself about what goals are relevant to your specific situation

  • And you should also probably consider doing things like conducting interviews or surveys with potential “users” or “experts”.

Relatedly, I don't think this sequence has a much stronger focus on one of those pipes/paths/intervention points than on others, with the exception that I unfortunately don't say much here about dissemination and use of research.

Sam Nolan @ 2021-07-27T02:24 (+1)

Hey! I've done an audio recording of me reading this for the EA Forum podcast (I'm going to try to get the rest of this sequence in soon).