Intervention options for improving the EA-aligned research pipeline
By MichaelA🔸 @ 2021-05-28T14:26 (+49)
See the post introducing this sequence for context, caveats, credits, and links to prior discussion relevant to this sequence as a whole. This post doesn’t necessarily represent the views of my employers.
Summary
In a previous post, I highlighted some observations that I think collectively demonstrate that the current processes by which new EA-aligned research and researchers are “produced” are at least somewhat insufficient, inefficient, and prone to error. In this post, I’ll briefly discuss 19 interventions that might improve that situation. I discuss them in very roughly descending order of how important, tractable, and neglected I think each intervention is, solely from the perspective of improving the EA-aligned research pipeline.[1] The interventions are:
- Creating, scaling, and/or improving[2] EA-aligned research orgs
- Creating, scaling, and/or improving EA-aligned research training programs (e.g. certain types of internships or summer research fellowships)
- Increasing grantmaking capacity and/or improving grantmaking processes
- Scaling Effective Thesis, improving it, and/or creating new things sort-of like it
- Increasing and/or improving EAs’ use of non-EA options for research training, credentials, etc.[3]
- Increasing and/or improving research by non-EAs on high-priority topics
- Creating a central, editable database to help people choose and do research projects
- Using Elicit (an automated research assistant tool) or a similar tool
- Forecasting the impact projects will have
- Adding to and/or improving options for collaborations, mentorship, feedback, etc. (including from peers)
- Improving the vetting of (potential) researchers, and/or better “sharing” that vetting
- Increasing and/or improving career advice and/or support with network-building
- Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers
- Creating and/or improving relevant educational materials
- Creating, improving, and/or scaling market-like mechanisms for altruism (e.g., impact certificates)
- Increasing and/or improving the use of relevant online forums
- Increasing the number of EA-aligned aspiring/junior researchers
- Increasing the amount of funding available for EA-aligned research(ers)
- Discovering, writing, and/or promoting positive case studies
Feel free to skip to sections that interest you; each section should make sense by itself.
Target audience
As with the rest of this sequence:
- This post is primarily intended to inform people who are helping implement or fund interventions to improve the EA-aligned research pipeline, or who could potentially do so in future
- But it may also help people who hope to themselves “enter” and “progress through” the EA-aligned research pipeline
(For illustration, I’ve added a comment below this post regarding how my own career, project, and donation decisions have been influenced by thinking about why and how the EA-aligned research pipeline should be improved.)
Caveats and clarifications
- Versions of many of these interventions already exist or have already been proposed
- There are various other ways to carve up the space of options, various complementary framings that can be useful, etc.[4]
- Many of these interventions would also or primarily have benefits unrelated to improving the EA-aligned research pipeline
- These interventions differ in their importance (in general or for improving the EA-aligned research pipeline specifically), neglectedness, and tractability
- And I haven’t gathered systematic data on those things[5]
- Specific versions of a given intervention, or specific combinations of those interventions, would also differ on those variables[6]
- Some of these intervention options - or some versions of them - might not actually be worthwhile, or even net-positive
- These interventions differ in which aspects of the EA-aligned research pipeline they’d (primarily) improve
- To keep this post (relatively!) brief, I don’t fully explain or justify all the points I make, nor mention all the points that come to mind regarding each intervention
- I’m happy to provide further thoughts in replies to comments
- I’m sure I’ve failed to mention some promising intervention options, and I’d welcome comments that mention additional ideas (whether they’re the commenter’s own idea or something that has been proposed elsewhere)
The intervention options
Creating, scaling, and/or improving EA-aligned research orgs
- It seems like EA-aligned research orgs should house a substantial fraction of EA-aligned researchers and handle a substantial fraction of vetting, training, etc. for aspiring/junior EA-aligned researchers[7]
- Though not necessarily the majority; there is also a place for grantmakers, non-EA orgs, independent research, etc.
- There would be more capacity for that if EA-aligned research orgs were more numerous, larger, and/or better
- Some things that would help us move towards that situation include:
- Org leadership teams consciously thinking about how they can scale gracefully yet relatively quickly, designing their systems and strategies around that, building strong operations teams, hiring with that in mind (e.g., looking for people who could in future manage other hires), and providing staff with opportunities to build their management skills
- Individuals trying to build skills in or pursue roles related to management, mentorship, or perhaps operations, and perhaps considering founding new orgs
- Funders prioritising orgs which seem likely to scale gracefully yet relatively quickly (if given more funding), funding the creation of new orgs (especially those that could scale well), and engaging in “active funding” to create more such funding opportunities (see also field building)
Creating, scaling, and/or improving EA-aligned research training programs
- See here for posts relevant to this topic, and here for a list of such programs
- These programs include things like research internships, summer research fellowships, and some volunteering programs
- Examples of efforts to create, scale, and/or improve such programs include:
- The creation of SERI
- The SERI team’s efforts to encourage and support the creation of programs similar to their own
- My creation of a Slack workspace for people who are (or are planning to be) involved in organising such programs, to exchange ideas, ask questions, share resources, etc.
- For one attempt to assess the impact of such a program, see Review of FHI's Summer Research Fellowship 2020
Increasing grantmaking capacity and/or improving grantmaking processes
- By “grantmaking capacity”, I mean the collective capacity grantmakers and others have to evaluate and/or create[8] funding opportunities
- I don’t mean available funding; I have a separate section below on increasing available funding
- Relevant individuals include people who work as grantmakers, other people who give donation recommendations, and people who make decisions about where to donate their own money
- Ways grantmaking capacity could be increased include hiring or training new grantmakers, increasing the time spent on grantmaking by people who currently do it part-time, distributing funding to other individuals for regranting, and creating or scaling charity evaluators
- Increasing grantmaking capacity and/or improving grantmaking processes could improve the EA-aligned research pipeline by increasing the amount and efficiency of financial support for aspiring/junior researchers and/or for work on any of the other interventions discussed in this post
- See also Benjamin Todd on what the effective altruism community most needs
Scaling Effective Thesis, improving it, and/or creating new things sort-of like it
- I have a quite positive impression of Effective Thesis[9]
- I tentatively think it’d be good for Effective Thesis to expand in some way, and/or for additional things sort-of like Effective Thesis to be created
- But I haven’t really thought about this much yet, and so:
- For all I know, it might be the case that Effective Thesis are already doing most of the most valuable and tractable things in this space
- I’m not really sure what, specifically, scaling, improving, or creating new things sort-of like Effective Thesis should look like
- If this involves new orgs/projects, they could try somewhat different strategies and approaches to those used by Effective Thesis, or specialise more for particular user groups or topic areas
- And they could share resources and learnings with Effective Thesis, and vice versa
- (My understanding is that this would be analogous to the current situation in the EA-aligned career advice space, where the relevant organisations include 80,000 Hours, Animal Advocacy Careers, and Probably Good)
Increasing and/or improving EAs’ use of non-EA options for research-relevant training, credentials, testing fit, etc.
- The next post in this sequence will focus on this idea, so I won’t discuss it here
Increasing and/or improving research by non-EAs on high-priority topics
- See also field building
- On a somewhat abstract level, this could be done through things like:
- Increasing awareness of and inclination towards these topics among non-EAs
- Funding work on these topics
- Funding the creation of non-EA orgs, institutes, etc. focused on these topics (e.g., CSET[10])
- Making it (seem) easier to publish respectable papers on these topics
- Running conferences or workshops on these topics
- Increasing interactions between EA and non-EA researchers
- Providing guidance to non-EA researchers on these topics
- Shifting academic norms and incentives towards choosing research for its impact potential
- More concretely, this could be done through things like:
- Organising workshops on the topic
- Publishing papers on a high-priority topic (which could raise the topic’s salience, make publishing on it seem more acceptable, give people things to cite)
- Inviting non-EAs to visit EA research institutes/orgs
- Providing the kind of resources and coaching Effective Thesis provides
- Scoping EA-aligned research directions in a way that makes them easier for people working in traditional academia to learn about, see the relevance of, connect to established disciplines, and work on
- The GovAI and GPI research agendas could be seen as two examples of this sort of effort
- Creating prizes or awards for the best research on a topic, and trying to make the prize/award sufficiently large, prestigious, and well-advertised in relevant places that top or promising non-EA researchers are drawn towards it
- In addition to improving the pipeline for EA-aligned research produced by non-EAs, this might also improve the pipeline for EA-aligned researchers, such as by:
- Causing longer-term shifts in the views of some of the non-EAs reached
- Making it easier for EAs to use non-EA options for research training, credentials, etc. (see my next post)
- And these benefits could perhaps be huge, as the vast majority of all research talent, funding, hours, etc. are outside of EA
- On the other hand, it may be less tractable for “us” to increase and/or improve that pool of talent, funding, hours, etc., compared to doing so for the EA pool
Creating a central, editable database to help people choose and do research projects
- The sixth post in this sequence will focus on this idea
Using Elicit (an automated research assistant tool) or a similar tool
- The sixth post in this sequence will discuss this idea
Forecasting the impact projects will have
- The sixth post in this sequence will discuss this idea
Adding to and/or improving options for collaborations, mentorship, feedback, etc. (including from peers)
This could include things like:
- Encouraging and facilitating aspiring/junior researchers in connecting with each other to get feedback on plans, get feedback on drafts, collaborate, start coworking teams, and run focused practice sessions[11]
- E.g., creating spaces like Effective Altruism Editing and Review
- E.g., circulating advice and links like those contained in Notes on EA-related research, writing, testing fit, learning, and the Forum
- E.g., perhaps, creating platforms like Impact CoLabs
- (Those are just the first three examples that came to mind; there are probably other, quite different ways to achieve this goal)
- Encouraging and facilitating aspiring/junior researchers and more experienced researchers to connect in similar ways
- This could involve the aspiring/junior researcher acting as a research assistant
- This could involve the more experienced researchers delegating some research tasks/projects that they wanted done anyway
- This could help align the incentives of the more and less experienced researchers, including incentivising high-quality feedback
- This could be paid or unpaid (i.e., volunteering)
- One example of a project that arguably serves this purpose is READI
- Creating, promoting, and/or engaging with resources on how to more efficiently and effectively seek or provide mentorship, feedback, etc.
- E.g., writing posts like Giving and receiving feedback and Asking for advice
- E.g., participating in a (non-EA) course on mentorship, coaching, or management, in order to then be better at providing those services to aspiring/junior researchers
Improving the vetting of (potential) researchers, and/or better “sharing” that vetting
For example:
- Improving selection processes at EA-aligned research organisations
- Increasing the number and usefulness of referrals of candidates from one selection process (e.g., for a job or a grant) to another selection process.
- This already happens, but could perhaps be improved by:
- Increasing how often it happens
- Increasing how well-targeted the referrals are
- Increasing the amount of information provided to the second selection process?
- Increasing how much of the second selection process the candidate can “skip”?
- Creating something like a "Triplebyte for EA researchers", which could scalably evaluate aspiring/junior researchers, identify talented/promising ones, and then recommend them to hirers/grantmakers[12]
- This could resolve most of the vetting constraints if it could operate efficiently and was trusted by the relevant hirers/grantmakers
Increasing and/or improving career advice and/or support with network-building
Examples of existing efforts along these lines include:
- 80,000 Hours
- Animal Advocacy Careers
- Probably Good
- Many local EA groups
- Parts of what the Improving Institutional Decision-Making working group and the Simon Institute for Longterm Governance do
- In particular, my understanding is that these groups help provide some career advice and connections in their particular areas of expertise
Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers
- For example, CEEALAR (formerly the EA Hotel) provides free or cheap accommodation and board to people engaging in these sorts of activities
- One could set up similar things in other locations, or find other ways to reduce the financial costs of taking time to engage in these activities
- Note that here I don’t mean providing funding to support people in doing these activities
- That also seems valuable, but is covered in other sections of this post
Creating and/or improving relevant educational materials[13]
- Such materials could include courses, workshops, textbooks, standalone writings that are shorter than textbooks (e.g., posts), or sequences of such shorter writings
- Existing examples include Charity Entrepreneurship’s writings about their research process, parts of Charity Entrepreneurship’s handbook, posts tagged Research methods, and posts tagged Scholarship & Learning
- Topics these materials could focus on include doing research in general, aspects of doing EA-aligned research that differ from research in other contexts, EA-aligned research using particular disciplines or methodologies, and research on particular EA-relevant topics
- These materials could be created by EAs, adapted by EAs from existing things, or commissioned by EAs but created by other people
- (Of course, non-EAs left to their own devices also make many relevant materials; on how that could be used, see “Increasing and/or improving EAs’ use of non-EA options for research training, credentials, etc.”)
Creating, improving, and/or scaling market-like mechanisms for altruism
- See Markets for altruism and Certificates of impact
- This could potentially have benefits such as improving prioritisation and providing a more efficient and scalable system of vetting research projects for funding
Increasing and/or improving the use of relevant online forums
- I think many aspiring/junior researchers would benefit from using the EA Forum and/or LessWrong to:
- learn about important ideas
- discover or think of research questions
- find motivation and a sense of accountability for doing research and writing (since they’re doing it for an actual audience)
- disseminate their findings/ideas
- get feedback
- find collaborators
- form connections
- etc.
- See also Reasons for and against posting on the EA Forum
- I also think it would be possible and valuable to increase how often these sites are used - and how useful they are - for those purposes
- It seems to me that impressive increases have already occurred since late 2018 (when I first started looking at the Forum)
- Increasing the usefulness of these sites could include things like adding new features or integrating these sites with other interventions for improving the EA-aligned research pipeline (e.g., the database idea discussed in my next post)
- (But here I should again note that, as with all interventions mentioned in this post, this wouldn’t address all the current imperfections in the EA-aligned research pipeline, nor render all the other interventions unnecessary)
- My post Notes on EA-related research, writing, testing fit, learning, and the Forum is an example of an effort to increase and improve the use of relevant online forums, and also links to other examples of such an effort
Increasing the number of EA-aligned aspiring/junior researchers
- The number of people “entering” or “in” the pipeline doesn’t seem to be as important a bottleneck as some other things (e.g., people with the specific skills necessary for specific projects, capacity to train more such people, capacity to put those people to good use, and capacity to vet people/projects; see Todd, 2020)
- But more people in the pipeline would still likely lead to:
- more people eventually becoming useful EA-aligned researchers
- more fitting people being selected for the EA-aligned research roles/funding that would’ve been available anyway (since there’s a larger pool of people to select from; see also How replaceable are the top candidates in large hiring rounds?)
- On the other hand, this comes at the opportunity cost of whatever else these people would’ve spent their time on otherwise
- Additionally, more people in the pipeline might have negative consequences other than opportunity cost, such as:
- People being turned off EA more generally because of frustration over repeatedly being declined jobs or funding (whereas those people may have found more success in other paths)
- Making various forms of coordination, cooperation, and trust harder or less valuable
- Leading to more low-quality or incautious work or messaging, reducing the credibility of EA-aligned research communities
- (See also value of movement growth)
Increasing the amount of funding available for EA-aligned research(ers)
- As with the number of EA-aligned aspiring/junior researchers, funding for EA-aligned research(ers) doesn’t seem to be as important a bottleneck as some other things, but more funding would still help
- Here I’m talking about increasing the funding available for activities whose primary goal is relatively directly leading to directly valuable research
- In contrast, increasing the funding available for activities whose primary goal is improving the EA-aligned research pipeline - e.g., by supporting one of the interventions in this post - may better target the key bottlenecks and thus be more valuable
- (Of course, many activities may have both types of goals, and sometimes with roughly equal weight)
- I’m also only talking about the amount of funding available, not about how much high-priority research actually gets funded, since the latter also depends on other things such as grantmaking capacity and what projects/people are available to be funded
Discovering, writing, and/or promoting positive case studies
- Discussed in a comment below this post
If you have thoughts on these interventions or other interventions to achieve a similar goal, or would be interested in supporting such interventions with your time or money, please comment below, send me a message, or fill in this anonymous form. This could perhaps inform my future efforts, allow me to connect you with other people you could collaborate with or fund, etc.
Though it’s hard to even say what that means, let alone how much anyone should trust my quick rankings; see also the “Caveats and clarifications” section. ↩︎
Note that even good things can be made better! ↩︎
I’m using the term “EAs” as shorthand for “People who identify or interact a lot with the EA community”; this would include some people who don’t self-identify as “an EA”. ↩︎
For example, one could view each of these intervention options through the lens of creating and/or improving “hierarchical network structures” (see What to do with people?). ↩︎
But I think it would be possible and valuable to do so. E.g., one could find many examples of people who were hired as a researcher at an EA-aligned org, went through an EA-aligned research training program, or did a PhD under a non-EA supervisor; look at what they’ve done since then; and try to compare that to some reasonable guesses about the counterfactual and/or people who seemed similar but didn’t have those experiences. (I know of at least one attempt to do roughly this.) It would of course be hard to be confident about causation and generalisability, but I think we’d still learn more than we know now. ↩︎
For example, creating, scaling, and/or improving EA-aligned research organisations and doing the same for EA-aligned research training programs might be complementary goods; more of the former means more permanent openings for the “graduates” of those programs, and more of the latter means more skilled, motivated, and vetted candidates for those orgs. ↩︎
For convenience, I’ll sometimes lump various different types of people together under the label “aspiring/junior researchers”. I say more about this group of people in a previous post of this sequence. ↩︎
See “active funding”. See also field building. ↩︎
This is based on reading some of what they’ve written about their activities, strategy, and impact assessment; talking to people involved in the project; and my more general thinking about what the EA-aligned research pipeline needs. But I haven’t been an Effective Thesis coach or mentee myself, nor have I tried to carefully evaluate their impact. ↩︎
The original Director of CSET and several of its staff have been involved in the EA community, but many other members of staff are not involved in EA. ↩︎
See, for example, Learnings about literature review strategy from research practice sessions. ↩︎
This idea was suggested as a possibility by Peter Hurford. See some thoughts on the idea here. ↩︎
I’m grateful to Edo Arad for suggesting I include roughly this intervention idea. ↩︎
Linch @ 2021-06-11T04:40 (+24)
Notably missing from this list, but related to 5, 11, and 17 (and arguably 1 and 18), is increasing the number and EA alignment of currently non-EA or weakly EA-aligned senior researchers.
That is, increasing the number of senior EA aligned researchers not via the pipeline of
get interested in EA -> be a junior EA researcher -> be an intermediate EA researcher -> be a senior EA researcher,
but via
be a senior researcher -> get interested in EA -> be a senior EA researcher.
I don't have very obvious examples in mind, but potential case studies so far include Philip Tetlock, David Roodman, Rachel Glennerster, Michael Kremer, Kevin Esvelt, and Stuart Russell.
MichaelA @ 2021-06-11T06:04 (+2)
Yeah, I think this is a quite important point that's sort-of captured by the other paths you mention, but (in hindsight) not sufficiently highlighted/emphasised.
I think another possible example is Allan Dafoe - I don't know his full "origin story", and it's possible he was already very EA-aligned as a junior researcher, but I think his actual topic selection and who he worked with switched quite a lot (and in an EA-aligned direction) after he was already fairly senior. And that seniority allowed him to play a key role in GovAI, which was (in my view) extremely valuable.
One place where I kind-of nod to the path you mention is:
Increasing and/or improving research by non-EAs on high-priority topics [...]
In addition to improving the pipeline for EA-aligned research produced by non-EAs, this might also improve the pipeline for EA-aligned researchers, such as by:
- Causing longer-term shifts in the views of some of the non-EAs reached
- Making it easier for EAs to use non-EA options for research training, credentials, etc. (see my next post)
HowieL @ 2021-06-11T17:46 (+7)
I don't think Allan's really an example of this.
I think I’ve always been interested in computers and artificial intelligence. I followed Kasparov and Deep Blue, and it was actually Ray Kurzweil’s Age of Spiritual Machines, which is an old book, 2001 … It had this really compelling graph. It’s sort of cheesy, and it involves a lot of simplifications, but in short, it shows basically Moore’s Law at work and extrapolated ruthlessly into the future. Then, on the second y-axis, it shows the biological equivalent of computing capacity of the machine. It shows a dragonfly and then, I don’t know, a primate, and then a human, and then all humans.
Now, that correspondence is hugely problematic. There’s lots we could say about why that’s not a sensible thing to do, but what I think it did communicate was that the likely extrapolation of trends are such that you are going to have very powerful computers within a hundred years. Who knows exactly what that means and whether, in what sense, it’s human level or whatnot, but the fact that this trend was coming on the timescale it was, was very compelling to me. But at the time, I thought Kurzweil’s projection of the social dynamics of how extremely advanced AI would play out unlikely. It’s very optimistic and utopian. I actually looked for a way to study this all through my undergrad. I took courses. I taught courses on technology and society, and I thought about going into science writing.
And I started a PhD program in science and technology studies at Cornell University, which sounded vague and general enough that I could study AI and humanity, but it turns out science and technology studies, especially at Cornell, means more a social constructivist approach to science and technology.
. . .
Okay. Anyhow, I went into political science because … Actually, I initially wanted to study AI in something, and I was going to look at labor implications of AI. Then, I became distracted as it were by a great power politics and great power peace and war. It touched on the existential risk dimensions that I didn’t have the word for it, but was sort of a driving interest of mine. It’s strategic, which is interesting. Anyhow, that’s what I did my PhD on, and topics related to that, and then my early career at Yale.
I should say during all this time, I was still fascinated by AI. At social events or having a chat with a friend, I would often turn to AI and the future of humanity and often conclude a conversation by saying, “But don’t worry, we still have time because machines are still worse than humans at Go.” Right? Here is a game that’s well defined. It’s perfect information, two players, zero-sum. The fact that a machine can’t beat us at Go means we have some time before they’re writing better poems than us, before they’re making better investments than us, before they’re leading countries.
Well, in 2016, DeepMind revealed AlphaGo, and it was almost this canary in the coal mine, that Go was to me, that was sort of deep in my subconscious keeled over and died. That sort of activated me. I realized that for a long time, I’d said post tenure I would start working on AI. Then, with that, I realized that we couldn’t wait. I actually reached out to Nick Bostrom at the Future of Humanity Institute and began conversations and collaboration with them. It’s been exciting and lots of work to do that we’ve been busy with ever since.
https://80000hours.org/podcast/episodes/allan-dafoe-politics-of-ai/
MichaelA @ 2021-06-11T17:58 (+2)
I think that quote makes it sound like Allan already had a similar worldview and cause prioritisation to EA, but wasn't aware of or engaged with the EA community (though he doesn't explicitly say that), and so he still seems like sort-of an example.
It also sounds like he wasn't actively and individually reached out to by a person from the EA community, but rather just found relevant resources himself and then reached out (to Bostrom). But that still seems like it fits the sort of thing Linch is talking about - in this case, maybe the "intervention (for improving the EA-aligned research pipeline)" was something like Bostrom's public writing and talks, which gave Allan a window into this community, which he then joined. And that seems like a good example of a field building intervention?
(But that's just going from that quote and my vague knowledge of Allan.)
HowieL @ 2021-06-11T21:18 (+2)
Fair enough. I guess just depends on exactly how broad/narrow of a category Linch was gesturing at.
Linch @ 2021-06-11T21:23 (+4)
I think the crux to me is to what extent Allan's involvement in EAish AI governance is overdetermined. If, in a world with 75% less public writings on transformative AI of Bostrom's calibre, Allan would still be involved in EAish AI governance, then this would point against the usefulness of this step in the pipeline (at least with the Allan anecdote).
MichaelA @ 2021-06-12T05:57 (+2)
I roughly agree, though would also note that the step could be useful by merely speeding up an overdetermined career move, e.g. if Allan would've ended up doing similar stuff anyway but only 5 years later.
Linch @ 2021-06-12T11:06 (+2)
Yes, I agree that speeding up career moves is useful.
MichaelA @ 2021-05-28T14:34 (+14)
Some quick notes on how my own career, project, and donation decisions have been influenced by thinking about the value of and methods for improving the EA-aligned research pipeline
(Note that most of these decisions were made before I drafted this sequence of posts, and thus weren’t based on my latest thinking. Also, I am likely missing some relevant things and will fail to explain some things well. Finally, as usual, this comment expresses my personal views only.)
Career decisions:
- Thinking about the EA-aligned research pipeline was a key factor in me choosing to work for Rethink Priorities
- I got other appealing job offers at the same time as the RP offer
- A key selling point for RP for me was that, as far as I could tell before joining RP, RP had done well at scaling, being strategic, and assessing its impact, and seemed set to continue to do so
- And it seemed like I could be a good fit for helping scale the longtermism team, e.g. through later taking on management responsibilities and helping develop RP's longtermist research agendas/priorities
- I am now more confident that those guesses were correct, and that it made sense to accept the RP offer partly for these reasons
- I’m currently focusing mostly on testing and improving my fit for research management roles/activities
- I’ve also taken some steps to test my fit for grantmaking, and am likely to take more such steps soon
Project decisions:
- I’ve spent a substantial amount of time helping with aspects of RP's first research internship program
- I’ve spent a substantial amount of time supporting other research training programs
- E.g., sharing resources, creating a Slack workspace for people involved in these programs, reviewing and giving advice on strategic plans, acting as a mentor
- I’ve spent a substantial amount of time having calls with EAs who are aspiring/junior researchers, to help with things like career planning, topic selection, connecting them to relevant people
- I’ve written some relevant posts or docs, such as:
- Notes on EA-related research, writing, testing fit, learning, and the Forum
- A central directory for open research questions
- Reasons for and against posting on the EA Forum
- Suggestion: EAs should post more summaries and collections
- Readings and notes on how to do high-impact research
- Potential benefits and downsides of making and/or sharing a research agenda [this will be posted soon]
- This sequence itself, of course!
- I’ve made some relevant EA Forum Wiki tags, particularly scalably using labour and research training programs
- I’m hoping to in some way help set up the sort of research questions database I'll describe later in this sequence
Donation decisions:
- A desire to improve the EA-aligned research pipeline was a notable factor in me donating to ALLFED and GCRI in 2020
- Though not the single largest factor
- I explained those donation decisions here
- I’m considering donating this year to Effective Thesis and/or to someone who’s excited about working on the database idea I’ll describe in a later post
Jamie_Harris @ 2021-07-16T07:43 (+10)
Just came here to comment something that's been on my mind that I didn't recall being suggested in the post, though it partly overlaps with your suggestions 1, 2, 4, 11, and 19.
Suggestion: Paid literature reviews with some (relatively low level) supervision.
Context: Since working at Sentience Institute, I've done quite a few literature reviews. (I've also done some more "rough and ready" ones at Animal Advocacy Careers.) I think that these have given me a much better understanding of how social sciences academia works, what sort of information is most helpful etc. A lot of the knowledge comes in handy in places that I wouldn't necessarily have predicted, too. This makes me feel like the benefits might be comparable to the sorts of benefits that I expect lots of people get from PhDs -- some methodological training / familiarity, and some useful knowledge. It wouldn't give you some benefits of PhDs like signalling value, familiarity with the peer review process, or close mentorship relationships, but if you tried to get the literature reviews published in peer-reviewed journals, then that would add some of those benefits back in (and maybe help to improve the end product too).
Lit reviews can be quite time-consuming, but don't necessarily require any very special skills -- just willingness to spend time on it and look things up (e.g. methodological aspects) when you don't know or understand them, rather than plowing on regardless. Obviously some methodological background in the topic would be helpful, but doesn't always seem necessary; I'm a history grad and have done literature reviews on subjects from psychology to ethics to management.
It might be quite easy to explicitly offer (1) funding and (2) facilitation for independent researchers to be connected to potential reviewers of the end product. It could be up to the individual to suggest topics, or to some centralised body (as in your suggestion 7).
I'm not sure whose responsibility this should be. It could be EA Funds, Effective Thesis, or individual research orgs.
Caveats
- I have found review + comments from colleagues helpful, so some supervision may be necessary, but these have tended to cluster at the start and end of projects with the vast majority of the work being independent.
- To do rigorous systematic reviews, you generally want more than one person actually checking through the data, coding decisions etc, which would require more coordination. But this is not always necessary. Indeed, one of my lit reviews is currently going through the peer review process (and looks likely to be accepted) and didn't use multiple author checks on these decisions. And less formal/systematic literature reviews can still be valuable, I think, both for the researcher and the readers.
MichaelA @ 2021-07-16T10:23 (+4)
Thanks! Yeah, this seems like a handy idea.
I was recently reminded of the "Take action" / "Get involved" page on effectivealtruism.org, and I now see that that actually includes a page on Write a literature review or meta-analysis. That Take action page seems useful, and should maybe be highlighted more often. In retrospect, I probably should've linked to various bits of it from this post.
Jamie_Harris @ 2021-07-17T07:13 (+4)
True! I'd forgotten about that page. I think some sort of fairly minimal infrastructure might notably increase the number of people actually doing it though.
MichaelA @ 2021-07-17T08:05 (+2)
(Yeah, I didn't mean that this meant your comment wasn't useful or that it wouldn't be a good idea to set up some sort of intervention to support this idea. I do hope someone sets up such an intervention, and I may try to help that happen sometime in future if I get more time or think of a particularly easy and high-leverage way to do so.)
MichaelA @ 2021-06-06T15:02 (+5)
In the EA Infrastructure Fund's Ask Us Anything, I asked for their thoughts on the sorts of topics covered in this sequence, e.g. their thoughts on the intervention options mentioned in this post. I'll quote Buck's interesting reply in full. See here for precisely what I asked and for replies to Buck's reply (including me agreeing or pushing back on some things).
---
"Re your 19 interventions, here are my quick takes on all of them
Creating, scaling, and/or improving EA-aligned research orgs
Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.
Creating, scaling, and/or improving EA-aligned research training programs
I am in favor of this. I think one of the biggest bottlenecks here is finding people who are willing to mentor people in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research, eg by mentoring people for an hour or two a week on projects that the mentor finds inside-view interesting (and therefore will be actually bought in to helping with). I think that in situations like this, it's very helpful for the mentor to be judged as Andrew Grove suggests, by the output of their organization + the output of neighboring organizations under their influence. That is, they should think of one of their key goals with their research interns as having the research interns do things that they actually think are useful. I think that not having this goal makes it much more tempting for the mentors to kind of snooze on the job and not really try to make the experience useful.
Increasing grantmaking capacity and/or improving grantmaking processes
Yeah this seems good if you can do it, but I don't think this is that much of the bottleneck on research. It doesn't take very much time to evaluate a grant for someone to do research compared to how much time it takes to mentor them.
My current unconfident position is that I am very enthusiastic about funding people to do research if they have someone who wants to mentor them and be held somewhat accountable for whether they do anything useful. And so I'd love to get more grant applications from people describing their research proposal and saying who their mentor is; I can make that grant in like two hours (30 mins to talk to the grantee, 30 mins to talk to the mentor, 60 mins overhead). If the grants are for 4 months, then I can spend five hours a week and do all the grantmaking for 40 people. This feels pretty leveraged to me and I am happy to spend that time, and therefore I don't feel much need to scale this up more.
I think that grantmaking capacity is more of a bottleneck for things other than research output.
Scaling Effective Thesis, improving it, and/or creating new things sort-of like it
I don't immediately feel excited by this for longtermist research; I wouldn't be surprised if it's good for animal welfare stuff but I'm not qualified to judge. I think that most research areas relevant to longtermism require high context in order to contribute to, and I don't think that pushing people in the direction of good thesis topics is very likely to produce extremely useful research.
I'm not confident.
Increasing and/or improving EAs' use of non-EA options for research training, credentials, etc.
The post doesn't seem to exist yet so idk
Increasing and/or improving research by non-EAs on high-priority topics
I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can't think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I'm excited about work like The case for aligning narrowly superhuman models, because I think that this kind of work might make it easier to cause less aligned people to do useful stuff.
Creating a central, editable database to help people choose and do research projects
I feel pessimistic; I don't think that this is the bottleneck. I think that people doing research projects without mentors is much worse, and if we had solved that problem, then we wouldn't need this database as much. This database is mostly helpful in the very-little-supervision world, and so doesn't seem like the key thing to work on.
Using Elicit (an automated research assistant tool) or a similar tool
I feel pessimistic, but idk maybe elicit is really amazing. (It seems at least pretty cool to me, but idk how useful it is.) Seems like if it's amazing we should expect it to be extremely commercially successful; I think I'll wait to see if I'm hearing people rave about it and then try it if so.
Forecasting the impact projects will have
I think this is worth doing to some extent, obviously; I think that my guess is that EAs aren't as into forecasting as they should be (including me unfortunately.) I'd need to know your specific proposal in order to have more specific thoughts.
Adding to and/or improving options for collaborations, mentorship, feedback, etc. (including from peers)
I think that facilitating junior researchers to connect with each other is somewhat good but doesn't seem as good as having them connect more with senior researchers somehow.
Improving the vetting of (potential) researchers, and/or better “sharing” that vetting
I'm into this. I designed a noticeable fraction of the Triplebyte interview at one point (and delivered it hundreds of times); I wonder whether I should try making up an EA interview.
Increasing and/or improving career advice and/or support with network-building
Seems cool. I think a major bottleneck here is people who are extremely extroverted and have lots of background and are willing to spend a huge amount of time talking to a huge amount of people. I think that the job "spend many hours a day talking to EAs who aren't as well connected as would be ideal for 30 minutes each, in the hope of answering their questions and connecting them to people and encouraging them" is not as good as what I'm currently doing with my time, but it feels like a tempting alternative.
I am excited for people trying to organize retreats where they invite a mix of highly-connected senior researchers and junior researchers to one place to talk about things. I would be excited to receive grant applications for things like this.
Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers
I'm not sure that this is better than providing funding to people, though it's worth considering. I'm worried that it has some bad selection effects, where the most promising people are more likely to have money that they can spend living in closer proximity to EA hubs (and are more likely to have other sources of funding) and so the cheapo EA accommodations end up filtering for people who aren't as promising.
Another way of putting this is that I think it's kind of unhealthy to have a bunch of people floating around trying unsuccessfully to get into EA research; I'd rather they tried to get funding to try it really hard for a while, and if it doesn't go well, they have a clean break from the attempt and then try to do one of the many other useful things they could do with their lives, rather than slowly giving up over the course of years and infecting everyone else with despair.
Creating and/or improving relevant educational materials
I'm not sure; seems worth people making some materials, but I'd think that we should mostly be relying on materials not produced by EAs
Creating, improving, and/or scaling market-like mechanisms for altruism
I am a total sucker for this stuff, and would love to make it happen; I don't think it's a very leveraged way of working on increasing the EA-aligned research pipeline though.
Increasing and/or improving the use of relevant online forums
Yeah I'm into this; I think that strong web developers should consider reaching out to LessWrong and saying "hey do you want to hire me to make your site better".
Increasing the number of EA-aligned aspiring/junior researchers
I think Ben Todd is wrong here. I think that the number of extremely promising junior researchers is totally a bottleneck and we totally have mentorship capacity for them. For example, I have twice run across undergrads at EA Global who I was immediately extremely impressed by and wanted to hire (they both did MIRI internships and have IMO very impactful roles (not at MIRI) now). I think that I would happily spend ten hours a week managing three more of these people, and the bottleneck here is just that I don't know many new people who are that talented (and to a lesser extent, who want to grow in the ways that align with my interests).
I think that increasing the number of people who are eg top 25% of research ability among Stanford undergrads is less helpful, because more of the bottleneck for these people is mentorship capacity. Though I'd still love to have more of these people. I think that I want people who are between 25th and 90th percentile intellectual promisingness among top schools to try first to acquire some specific and useful skill (like programming really well, or doing machine learning, or doing biology literature reviews, or clearly synthesizing disparate and confusing arguments), because they can learn these skills without needing as much mentorship from senior researchers and then they have more of a value proposition to those senior researchers later.
Increasing the amount of funding available for EA-aligned research(ers)
This seems almost entirely useless; I don't think this would help at all.
Discovering, writing, and/or promoting positive case studies
Seems like a good use of someone's time.
---------------
This was a pretty good list of suggestions. I guess my takeaways from this are:
- I care a lot about access to mentorship
- I think that people who are willing to talk to lots of new people are a scarce and valuable resource
- I think that most of the good that can be done in this space looks a lot more like "do a long schlep" than "implement this one relatively cheap thing, like making a website for a database of projects"."
Ben_Snodin @ 2021-06-01T09:58 (+3)
Thanks, I think this is a great topic and this seems like a useful list (although I do find reading through 19 different types of options without much structure a bit overwhelming!).
I'll just ~repost a private comment I made before.
Encouraging and facilitating aspiring/junior researchers and more experienced researchers to connect in similar ways
This feels like an especially promising area to me. I'd guess there are lots of cases where this would be very beneficial for the junior researcher and at least a bit beneficial for the experienced researcher. It just needs facilitation (or something else, e.g. a culture change where people try harder to make this happen themselves, some strong public encouragement to juniors to make this happen, ...).
This isn't based on really strong evidence, maybe mostly my own (limited) experience + assuming at least some experienced researchers are similar to me. And that there are lots of excellent junior researcher candidates out there (again from first hand impressions).
Improving the vetting of (potential) researchers, and/or better “sharing” that vetting
This also seems like a big deal and an area where maybe you could improve things significantly with a relatively small amount of effort. I don't have great context here though.
MichaelA @ 2021-06-01T19:00 (+3)
Thanks for these thoughts!
although I do find reading through 19 different types of options without much structure a bit overwhelming!
Interesting. I received similar feedback on the previous post in the sequence, and re-organised it into "clusters" in response to that. And I've received similar feedback on a separate, upcoming draft of mine that also has a big list of things, and due to that feedback I plan to organise that list into clusters before publishing the post. Maybe this is a recurring issue with my writing that I should be on the lookout for. So thanks for that feedback :)
I guess this also relates to my caveat that "There are various other ways to carve up the space of options, various complementary framings that can be useful, etc.", and to me trying to produce these posts relatively quickly and to be relatively thorough. I expect with more time, I could come up with better ways to organise the space of options - e.g. via creating diagrams representing various different pathways to getting more EA-aligned research or researchers, showing how each intervention could connect to one or more steps on those pathways, and then somehow using that to organise the interventions into broad types and then subtypes. (And if someone else did that, I'd be interested to read what they come up with!)
Ben_Snodin @ 2021-06-02T09:44 (+8)
One (maybe?) low-effort thing that could be nice would be saying "these are my top 5" or "these are listed in order of how promising I think they are" or something (you may well have done that already and I missed it).
MichaelA @ 2021-06-02T13:01 (+3)
Ah, yes, this is probably useful and definitely low-effort (I've now done it in 1 minute, due to your comment).
The list was actually already in order of how promising I think they are, and I mentioned that in footnote 1. But I shouldn't expect people to read footnotes, and your feedback plus that other feedback I got on other posts suggests that readers want that sort of thing enough / find it useful enough that that should be said in the main text. So I've now moved that info to the main text (in the summary, before I list the 19 interventions).
I think the main reason I originally put it in a footnote is that it's hard to know what my ranking really means (since each intervention could be done in many different ways, which would vary in their value) or how much to trust it. But my ranking is still probably better than the ranking a reader would form, or than an absence of ranking, given that I've spent more time thinking about this. Going forward, I'll be more inclined to just clearly tell readers things like my ranking, and less focused on avoiding "anchoring" them or things like that.
(So thanks again for the feedback!)
MichaelA @ 2021-05-28T14:29 (+2)
Additional intervention ideas
Here I’ll keep track of additional intervention ideas that have occurred to me since I finished drafting this post. Perhaps in future I’ll integrate some into the post itself.
- Creating and/or improving EA-relevant journals
- Could draw more people towards paying attention to important topics
- Could make it easier for EAs doing graduate programs (especially PhDs) or pursuing academic careers to focus on high-priority topics and pursue them in the most impactful ways
- That could in turn help with “Increasing and/or improving EAs’ use of non-EA options for research training, credentials, etc.”
- Making high-quality data that’s relevant to high-priority topics more easily available
- The idea here is that “a lot of researchers will follow good data wherever it comes from”
- (This was suggested by a commenter on a draft of this post)
MichaelA @ 2021-06-23T06:47 (+4)
Red teaming papers as an EA training exercise?
I think a plausibly good training exercise for EAs wanting to be better at empirical/conceptual research is to deep dive into seminal papers/blog posts and attempt to identify all the empirical and conceptual errors in past work, especially writings by either a) other respected EAs or b) other stuff that we otherwise think of as especially important.
I'm not sure how knowledgeable you have to be to do this well, but I suspect it's approachable for smart people who finish high school, and certainly by the time they finish undergrad with a decent science or social science degree.
I think this is good career building for various reasons:
- you can develop a healthy skepticism of the existing EA orthodoxy
- I mean skepticism that's grounded in specific beliefs about why things ought to be different, rather than just vague "weirdness heuristics" or feeling like the goals of EA conflict with other tribal goals.
- you actually deeply understand at least one topic well enough to point out errors
- creates legible career capital (at least within EA)
- requires relatively little training/guidance from external mentors, meaning
- our movement devotes less scarce mentorship resources into this
- people with worse social skills/network/geographical situation don't feel (as much) at a disadvantage for getting the relevant training
- you can start forming your own opinions/intuitions of both object-level and meta-level heuristics for what things are likely to be correct vs wrong.
- In some cases, the errors are actually quite big, and worth correcting (relevant parts of) the EA movement on.
Main "cons" I can think of:
- I'm not aware of anybody successfully doing a really good critique for the sake of doing a really good critique. The most exciting things I'm aware of (publicly: zdgroff's critique of Ng's original paper on wild animal suffering, and alexrjl's critique of Giving Green; I also have private examples) mostly come from people trying to deeply understand a thing for themselves, and then along the way spotting errors with existing work.
- It's possible that doing deliberate "red-teaming" would make one predisposed to spot trivial issues rather than serious ones, or falsely identify issues where there aren't any.
- Maybe critiques are a less important skill to develop than forming your own vision/research direction and executing on it, and telling people to train for this skill might actively hinder their ability to be bold & imaginative?
(See also the comments on the shortform.)
MichaelA @ 2021-06-23T06:48 (+2)
An idea from Buck (see also the comments on the linked shortform itself):
Here's a crazy idea. I haven't run it by any EAIF people yet.
I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)
Basic structure:
- Someone picks a book they want to review.
- Optionally, they email me asking how on-topic I think the book is (to reduce the probability of not getting the prize later).
- They write a review, and send it to me.
- If it’s the kind of review I want, I give them $500 in return for them posting the review to EA Forum or LW with a “This post sponsored by the EAIF” banner at the top. (I’d also love to set up an impact purchase thing but that’s probably too complicated).
- If I don’t want to give them the money, they can do whatever with the review.
What books are on topic: Anything of interest to people who want to have a massive altruistic impact on the world. More specifically:
- Things directly related to traditional EA topics
- Things about the world more generally. Eg macrohistory, how do governments work, The Doomsday Machine, history of science (eg Asimov’s “A Short History of Chemistry”)
- I think that books about self-help, productivity, or skill-building (eg management) are dubiously on topic.
Goals:
- I think that these book reviews might be directly useful. There are many topics where I’d love to know the basic EA-relevant takeaways, especially when combined with basic fact-checking.
- It might encourage people to practice useful skills, like writing, quickly learning about new topics, and thinking through what topics would be useful to know more about.
- I think it would be healthy for EA’s culture. I worry sometimes that EAs aren’t sufficiently interested in learning facts about the world that aren’t directly related to EA stuff. I think that this might be improved both by people writing these reviews and people reading them.
- Conversely, sometimes I worry that rationalists are too interested in thinking about the world by introspection or weird analogies relative to learning many facts about different aspects of the world; I think book reviews would maybe be a healthier way to direct energy towards intellectual development.
- It might surface some talented writers and thinkers who weren’t otherwise known to EA.
- It might produce good content on the EA Forum and LW that engages intellectually curious people.
Suggested elements of a book review:
- One paragraph summary of the book
- How compelling you found the book’s thesis, and why
- The main takeaways that relate to vastly improving the world, with emphasis on the surprising ones
- Optionally, epistemic spot checks
- Optionally, “book adversarial collaborations”, where you actually review two different books on the same topic.
MichaelA @ 2021-06-02T19:48 (+2)
Rough notes on another idea, following a call I just had:
- Setting up something in between a research training program and a system for collaborations in high schools, universities, or local EA groups
- Participants would undergo less vetting than research training program participants do, and would probably have lower average current knowledge, aptitude, etc.
- But this reduces the costs of vetting
- And it opens the opportunity up to an additional pool of people (who may not yet be able to pass that vetting)
- Plus, this could allow more people to test their fit for and get better at mentorship, by mentoring people in these "programs" or simply by collaborating with peers in these programs (since collaboration still has some mentorship-like elements)
- E.g., in some cases, someone who's just started a PhD, or who only recently learned about the cause area they're now focused on, may not be able to usefully serve as a mentor for a participant in a research training program like SERI, but they may be able to usefully serve as a mentor for a high school student or some undergrads
- (I'm just saying there'd be some cases in that space in between - there'd also be some e.g. PhD students who can usefully serve as mentors for SERI fellows, and some who can't usefully serve as mentors for high school students)
MichaelA @ 2021-05-28T14:29 (+2)
Complementary perspectives/framings that didn’t quite fit into this post
David Janku of Effective Thesis has written about interventions other than Effective Thesis that also aim to influence which research is generated. I recommend reading that section, but here’s the list of interventions with the explanations and commentary removed:
- influencing individuals by giving them information on what the potentially most impactful directions are and motivating them to pursue these directions
- providing funding for research directions that seem promising
- setting up research organisations producing research in a specific direction
- organising research workshops
- setting up prestigious prizes/awards
- providing mentorship and space for exploration
David adds that an additional approach which doesn't aim to influence which research is generated is “coordination - e.g. connecting students/researchers interested in the same topics”.
---
Meanwhile, Jonas Vollmer of EA Funds has written that, to achieve one possible vision for the EA Long-Term Future Fund:
we need 1) more grantmaking capacity (especially for active grantmaking), 2) more ideas that would be impactful if implemented well, and 3) more people capable of implementing these ideas. EA Funds can primarily improve the first factor, and I think this is the main limiting factor right now (though this could change within a few months).
I think that similar points could also be made for longtermist grantmaking by other actors (e.g., Open Philanthropy) and for grantmaking in some other areas (e.g., I’m guessing, wild animal welfare). And I think many of the interventions mentioned in this post might help address those needs.
MichaelA @ 2021-05-28T14:28 (+2)
Here are my thoughts on discovering, writing, and/or promoting positive case studies (moved to a comment since I tentatively think this intervention would be less valuable than the others):
- I know of some cases (in addition to my own) of people who are now doing impactful EA-aligned research and got to that point partly via something related to one of the interventions discussed elsewhere in this post or sequence
- E.g., via doing independent research/writing published on the EA Forum, choosing a thesis and getting mentored via Effective Thesis, or doing a research training program
- But I mostly know these cases because I’m now well-networked in EA, rather than because of easily findable public writeups. And I’d also guess that there are many more cases that I’m not aware of.
- This could cause people to underestimate how achievable this is, underestimate the value of these “interventions” (e.g., writing on the Forum), or simply have a harder time motivating themselves to try (since success doesn’t feel like a real possibility)
- So maybe it’d be valuable to simply:
- Find and collect a larger set of positive case studies
- Write many of them up (or record podcasts or videos or whatever)
- Promote those writeups (or whatever) in such a way that they’ll be found by the people who’d benefit from them
- E.g., so that the relevant people would stumble upon these case studies, or so that the people they’d reach out to (e.g., community-builders offering careers advice) would know to mention these case studies
- This process could also provide useful data on which methods of entering and progressing through the EA-aligned research pipeline have been used, how successful the methods have been, how they could be supported, etc. (Though I think the data collection that would be best for directly encouraging and guiding aspiring/junior researchers would differ from that which is best for guiding efforts to improve the pipeline.)
- I haven’t thought much about how best to do this, who would be best placed to do it, how valuable it’d be, or what the most similar existing things are
- Obviously there are already some case studies of successful-seeming EA-aligned careers, including research careers.
- Maybe WANBAM have done something similar specifically for women, trans people of any gender, and non-binary people?
- Obvious downside risk: Focusing solely on positive case studies could mislead people about how easy these pathways are and cause them to overly focus on pursuing research roles or roles at explicitly EA orgs
MichaelA @ 2021-05-28T14:27 (+2)
Readers of this post may also be interested in my rough collection of Readings and notes on how to do high-impact research.
Tristan Williams @ 2023-05-08T20:04 (+1)
You can update the EA CoLabs link (under "Adding to and/or improving options...") to point to their website (Impact Colabs), which I think is a more functional update.
MichaelA @ 2023-05-09T15:44 (+3)
Thanks - done :)