We’re Rethink Priorities. Ask us anything!

By Peter Wildeford @ 2021-11-15T16:25 (+102)

Hi all,

We're the staff at Rethink Priorities and we would like you to Ask Us Anything! We'll be answering all questions starting Friday, November 19.

About the Org

Rethink Priorities is an EA research organization focused on helping improve decisions among funders and key decision-makers within EA and EA-aligned organizations. You might know of our work on quantifying the number of farmed vertebrates and invertebrates, interspecies comparisons of moral weight, ballot initiatives as a tool for EAs, the risk of nuclear winter, or running the EA Survey, among other projects. You can see all of our work to date here.

Over the next few years, we’re expanding our farmed animal welfare and moral weight research programs, launching an AI governance and strategy research program, and continuing to grow our new global health and development wing (including evaluating climate change interventions).

Team

You can find bios of our team members here. Links on names below go to RP publications by the author (if any are publicly available at this point).

Leadership

Animal Welfare

Longtermism

Surveys and EA movement research

Global Health and Development

Operations

Ask Us Anything

Please ask us anything — about the org and how we operate, about the staff, about our research… anything!

You can read more about us in our 2021 Impact and 2022 Strategy update or visit our website: rethinkpriorities.org.

If you're interested in hearing more, please subscribe to our newsletter.

Also, we’re currently raising funds to continue growing in 2022. We consider ourselves funding constrained — we continue to get far more qualified applicants to our roles than we are able to hire, and have scalable infrastructure to support far more research. We accept and track restricted funds by cause area if that is of interest.

If you'd like to support our work, visit https://www.rethinkpriorities.org/donate, give on Giving Tuesday via Facebook to potentially secure matching funds, or email Janique Behman at janique@rethinkpriorities.org.

We'll be answering all questions starting Friday, November 19.


NunoSempere @ 2021-11-16T11:26 (+62)

In your yearly report you mention:

Rethink Priorities has been trusted by EA Funds and Open Philanthropy to start new projects (e.g., on capacity for welfare of different animal species) and open entire new departments (such as AI governance).

These and other large organizations often only fund 25–50% of our needs in any particular area because they trust our ability to find other sources of funding. Therefore we rely on a broad range of individual donors to continue our work.

This surprised me, because I fairly often hear "donate to EA Funds" recommended as the optimal thing to do, but it seems that if everybody did that, RP would not get funded. Do you have any thoughts on this?

Peter Wildeford @ 2021-11-19T23:06 (+17)

I think donating to the EA Funds is a very good thing to do, but I don't think every donor should do this. For donors who have the time and personal fit, it would be good to make some direct donations of your own and support organizations directly: this helps those organizations hedge against idiosyncratic risk from particular funders and gives them more individual support (which matters for showing proof of traction to other funders, and also matters for some IRS stuff).

I don't think any one funder likes to fund the entirety of an organization's budget, especially when that budget is large. But between the different institutional funders (EA Funds, Survival and Flourishing Fund, OpenPhil, etc.), I still think there is a strong (but not guaranteed) chance we will be funded (at least enough to meet somewhere between our "Low" and "High" budget amounts). Though if everyone assumed we were not funding constrained, then we definitely would be.

My other pitch is that I'd like RP, as an organization, to have some direct financial incentive and accountability to the EA community as a whole, above and beyond our institutional funders, who have their own desires and fund us for specific reasons that don't always match what the community as a whole wants or needs.

Lastly, if you trust us, we also value unrestricted funds highly (probably 1.5x-2x per dollar) because this allows us to start new research areas and programs that have less pre-existing proof/traction and get them to a point where they are ready to show bigger funders.

JoshYou @ 2021-11-15T17:28 (+43)

A couple of years ago it seemed like the conventional wisdom was that there were serious ops/management/something bottlenecks in converting money into direct work. But now you've hired a lot of people in a short time. How did you manage to bypass those bottlenecks, and have there been any downsides to hiring so quickly?

abrahamrowe @ 2021-11-19T13:45 (+22)

So there are a bunch of questions in this, but I can answer some of the ops-related ones:

  • We haven't had ops talent bottlenecks. We've had incredibly competitive operations hiring rounds (e.g. in our most recent hiring round, ~200 applications, of which ~150 were qualified at least on paper), and I'd guess that 80%+ of our finalists are at least familiar with EA (which I don't think is a necessary requirement, but it does suggest the explanation isn't that we are recruiting from a different pool).
    • Maybe there was a bigger bottleneck in ~2018, and EA has since grown a lot or reached people with more ops skills?
    • We spend a lot of time and resources on recruiting, and advertise our jobs really widely, so maybe we are reaching a lot more potential candidates than some other organizations were?
  • Management bottlenecks are probably our biggest current people-related constraint on growth (funding is a bigger constraint).
    • We've worked a lot on addressing this over the summer, partially by running a huge internship program, which gave a lot of current staff management experience (while also working with awesome interns on cool projects!), and by sending anyone who wants it through basic management training.
    • My impression is that we've gotten many more qualified applications in recent manager hiring pools.
  • Bypassing bottlenecks
    • In general, I think we haven't experienced these as much as other groups (at least so far).
    • We tend to hire ops staff prior to growth, as opposed to hiring them when we need them to take on work immediately (e.g. we hire ops staff when things are fine, but we plan to grow in a few months, so the infrastructure can be in place for expansion, as opposed to hiring ops staff when the current ops staff has too much on their plate, or something).
    • We do a ton of prep to ensure that we are careful while scaling, thinking about how processes would scale, etc.
    • The above-mentioned intern program really stress-tested a lot of processes (we doubled in size for 3 months), and has been really helpful for addressing issues that come with scaling.
  • Downsides to hiring quickly
    • I'd say that we've seen a mild amount of the downsides of growing in general, though it hasn't necessarily been related to speed of hiring - e.g. mildly more siloing of people, people not being sure what others are working on, etc. We've been taking a lot of steps to try to mitigate this, especially as we get larger.
MichaelA @ 2021-11-19T16:04 (+22)

Here are some parts of my personal take (which overlaps with what Abraham said):

I think we ourselves feel a bit unsure "why we're special", i.e. why it seems there aren't very many other EA-aligned orgs scaling this rapidly & gracefully.

But my guess is that some of the main factors are:

  • We want to scale rapidly & gracefully
    • Some orgs have a more niche purpose that doesn't really require scaling, or may be led by people who are more skilled and excited about their object-level work than about org strategy, scaling, management, etc.
  • RP thinks strategically about how to scale rapidly & gracefully, including thinking ahead about what RP will need later and what might break by default
    • Three of the examples I often give are ones Abraham mentioned:
      • Realising RP will be management capacity constrained, and that it would therefore be valuable to give our researchers management experience (so they can see how much they like it & get better at it), and that this pushes in favour of running a large internship with 1-1 management of the interns
        • (This definitely wasn't the only motivation for running the internship, but I think it was one of the main ones, though that's partly guessing/vague memory.)
      • Realising also that maybe RP should offer researchers management training
      • Expanding ops capacity before it's desperately urgently obviously needed
  • RP also just actually does the obvious things, including learning and implementing standard best practices for management, running an org, etc.

And that all seems to me pretty replicable! 

OTOH, I do think the people at RP are also great, and it's often the case that people who are good at something underestimate how hard it is, so maybe this is less replicable than I think. But I'd guess that smart, sensible, altruistic, ambitious people with access to good advisors could have a decent chance at making their org more like that or starting a new org like that, and that this could be quite valuable in expectation.

(If anyone feels like maybe they're such a person and maybe they should do that, please feel free to reach out for advice, feedback on plans, pointers to relevant resources & people! I and various other people at RP would be excited to help it be the case that there are more EA-aligned orgs scaling rapidly & gracefully. 

Some evidence of that is that I have in fact spent probably ~10 hours of my free time over the last few months helping someone work towards possibly setting up an RP-like org, and expect to continue helping them for at least several months. Though that was an unusual case, and I'd usually just quickly offer my highest-value input.) 

Charles He @ 2021-11-20T01:11 (+16)

I have private information (e.g. from senior people at Rethink Priorities and former colleagues) that suggests operations ability at RP is unusually high. They say that Abraham Rowe, COO, is unusually good.

The reasons this comment is useful:

  • This high operations ability might be hard to observe from the inside, if you are that person (Rowe) who is really good. Also, high-ability operations people may be attracted to a place where things run well and operations is respected. There may be other founder effects from Rowe. This might add nuance to Rowe's comment.
  • It seems possible operations talent was (is) limited or undervalued in EA. Maybe RP's success is related to operations ability (allows management to focus, increases org-wide happiness and confidence).
abrahamrowe @ 2021-11-20T14:06 (+15)

I appreciate it, but I want to emphasize that I think a lot of this boils down to careful planning and prep in advance, a really solid ops team all around, and a structure that lets operations operate a bit separately from research, so Peter and Marcus can really focus on scaling the research side of the organization / think about research impact a lot. I do agree that overall RP has been largely operationally successful, and that's probably helped us maintain a high quality of output as we grow.

I also think a huge part of RP's success has been Peter, Marcus, and other folks on the team being highly skilled at identifying low-hanging fruit in the EA research space, and just going out and doing that research.

MichaelDickens @ 2021-11-27T00:01 (+7)

To the extent that you think good operations can emerge out of replicable processes rather than singularly talented ops managers, do you think it would be useful to write a longer article about how RP does operations? (Or perhaps you've already written this and I missed it)

abrahamrowe @ 2021-11-29T14:40 (+2)

This potentially sounds useful, and I can definitely write about it at some point (though no promises on when just due to time constraints right now).

Peter Wildeford @ 2021-11-20T02:10 (+15)

I definitely think that we are very lucky to have Abraham working with us. I think another thing is that there are at least three people here (Abraham, Marcus, and me - and probably others too, if given the chance) who are each capable of founding and running an organization, but who are all focused instead on making just one organization really great and big.

I definitely think having Abraham be able to fully handle operations allows Marcus and me to focus nearly entirely on driving our research quality, which is a good thing. Marcus and I also have clear subfocuses (Marcus does animals and global health / development, whereas I focus on longtermism, surveys, and EA movement building) which allow us to further focus our time specifically on making things great.

MichaelA @ 2021-11-19T13:33 (+6)

This comment sounds like it's partly implying "RP seems to have recently overcome these bottlenecks. How? Does that imply the bottlenecks are in general smaller now than they were then?" I think the situation is more like "The bottlenecks were there back then and still are now. RP was doing unusually well at overcoming the bottlenecks then and still is now."

The rest of this comment says a bit more on that front, but doesn't really directly answer your question. I do have some thoughts that are more like direct answers, but other people at RP are better placed to comment so I'll wait till they do so and then maybe add a couple things. 

(Note that I focus mostly on longtermism and EA meta; maybe I'd say different things if I focused more on other cause areas.)


In late 2020, I was given three quite exciting job offers, and ultimately chose to go with a combo of the offer from RP and the offer from FHI, with Plan A being to then leave FHI after ~1 year to be a full-time RP employee. (I was upfront with everyone about this plan. I can explain the reasoning more if people are interested.)

The single biggest reason I prioritised RP was that I believe the following three things:

  1. "EA indeed seems most constrained by things like 'management capacity' and 'org capacity' (see e.g. the various things linked to from scalably using labor).
  2. I seem well-suited to eventually helping address that via things like doing research management.
  3. RP seems unusually good at bypassing these bottlenecks and scaling fairly rapidly while maintaining high quality standards, and I could help it continue to do so."

I continue to think that those things were true then and still are now (and so still have the same Plan A & turn down other exciting opportunities). 

That said, the picture regarding the bottlenecks is a bit complicated. In brief, I think that: 

  • The EA community overall has made more progress than I expected at increasing things like management capacity, org capacity, available mentorship, ability to scalably use labor, etc. E.g., various research training programs have sprung up, RP has grown substantially, and some other orgs/teams have been created or grown.
  • But the community also gained a lot more "seriously interested" people and a lot more funding.
  • So overall the bottlenecks are still strong in that it still seems quite high-leverage to find better ways of scalably using labor (especially "junior" labor) and money. But it also feels worth recognising that substantial progress has been made and so a bunch more good stuff is being done; there being a given bottleneck is not in itself exactly a bad thing (since it'll basically always be true that something is the main bottleneck), but more a clue about what kind of activities will tend to be most impactful on the current margin.
James Ozden @ 2021-11-16T09:59 (+37)

To what extent do you think a greater number of organisations conducting similar research to RP would be useful for promoting healthy dialogue, compared to having one specialist organisation in a field that is the go-to for certain questions?

Linch @ 2021-11-19T12:36 (+16)

I'll let Peter/Marcus/others give the organizational answer, but speaking for myself I'm pretty bullish about having more RP-like organizations. I think there are a number of good reasons for having more orgs like RP (or somewhat different from us), and these reasons are stronger at first glance than reasons for consolidation (eg reduced communication overhead, PR). 

  1. The EA movement has a strong appetite for research consultancy work, and RP is far from sufficient for meeting all the needs of the movement. 
  2. RP clones situated slightly differently can be helpful in allowing the EA movement to unlock more talent than RP will be able to.
    1. For example, we are a remote-first/remote-only organization, which in theory means we can hire talent from anywhere. But in practice, many people may prefer working in an in-person org, so an RP clone with a physical location may unlock talent that RP is unable to productively use.
  3. We have a particular hiring bar. It's plausible to me that having a noticeably higher or lower hiring bar can result in a more cost-effective organization than us. 
    1. For example, having a higher hiring bar may allow you to create a small tight-knit group of supergeniuses pursuing ambitious research agendas.
    2. Having a lower hiring bar may allow you to take larger chances on untapped EA talent, is maybe better for scalability, and also I have a strong suspicion that a lot of needed research work in EA "just isn't that hard" and if it's done by less competent people, this frees up other EA researchers to do more important work.
  4. More generally, RP has explicitly or implicitly made a number of organizational decisions for how a research org can be set up, and it's plausible/likely to me that greater experimentation at the movement level will allow different orgs to learn from each other.
  5. Having RP competitors can help keep us on our toes, and improve quality via the normal good things that come from healthy competition. 
  6. Having an RP competitor can help spot-check us and point out our blindspots.
    1. I'm pretty excited about an EA red-teaming institute, and maybe a good home for it is at RP. But even if it is situated at RP, who watches the watchmen? I think it'd be really good for there to be external checks/red-teaming/evaluation of RP research outputs.
      1. Right now, the only org I trust to do this well is Open Phil. But Open Phil people are very busy, so I'd be really excited to see a different org spring up to red-team and evaluate us.
  7. AFAICT, when doing very rough BOTECs on the expected impact of RP's research, RP's work looks massively cost-effective (flag: bias). If true, I think there's a very simple economics argument that marginal cost (including opportunity cost) should equal marginal revenue (expected impact) - see the sketch just after this list - so in theory we should be excited to see many competitors to RP until marginal cost-effectiveness becomes much lower.
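A minimal sketch of that marginal-cost argument, in my own notation (the impact function I(x), its diminishing returns, and the cost bar c are illustrative assumptions, not estimates from this thread):

```latex
% I(x): expected impact of x dollars spent on RP-like research,
% assumed to have diminishing returns (I'(x) > 0, I''(x) < 0).
% c: opportunity cost of a marginal dollar elsewhere in EA.
\[
  \text{fund more RP-like work while } I'(x) > c ,
  \qquad
  \text{stop at the scale } x^{*} \text{ where } I'(x^{*}) = c .
\]
% If rough BOTECs suggest I'(x) >> c at the current scale, the sector
% is undersized: adding RP competitors raises total impact until
% marginal cost-effectiveness falls toward c.
```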
MichaelA @ 2021-11-19T13:49 (+13)

also I have a strong suspicion that a lot of needed research work in EA "just isn't that hard" and if it's done by less competent people, this frees up other EA researchers to do more important work.

I agree with that suspicion, especially if we include things like "Just collect a bunch of stuff in one place" or "Just summarise some stuff" as "research". I think a substantial portion of my impact to date has probably come from that sort of thing (examples in this sentence from a post I made earlier today: "I’m addicted to creating collections"). It basically always feels like (a) a lot of other people could've done what I'm doing and (b) it's kinda crazy no one had yet. I also sometimes don't have time to execute on some of my seemingly-very-executable and actually-not-that-time-consuming ideas, and the time I do spend on such things slows down my progress on other work that does seem to require more specialised skills. I also think this would apply to at least some things that are more classically "research" outputs than collections or summaries are.

But I want to push back on "this frees up other EA researchers to do more important work". I think you probably mean "this frees up other EA researchers to do work that they're more uniquely suited for"? I think (and your comment seems to imply you agree?) that there's not a very strong correlation between importance and difficulty/uniqueness-of-skillset-required - i.e., many low-hanging fruit remain unplucked despite being rather juicy.

Richenda @ 2021-11-21T18:47 (+11)

Strongly agree with this. While I was working on LEAN and the EA Hub I felt that there were a lot of very necessary and valuable things to do that nobody wanted to do (or fund) because they seemed too easy. But a lot of value is lost, and important things are undermined, if everyone turns their noses up at simple tasks. I'm really glad that since then CEA has significantly built up their local group support. But it's a perennial pitfall to watch out for.

Linch @ 2021-11-22T16:25 (+6)

But I want to push back on "this frees up other EA researchers to do more important work". I think you probably mean "this frees up other EA researchers to do work that they're more uniquely suited for"? I think (and your comment seems to imply you agree?) that there's not a very strong correlation between importance and difficulty/uniqueness-of-skillset-required - i.e., many low-hanging fruit remain unplucked despite being rather juicy.

I think this is probably true. One thing to flag here is that people's counterfactuals are not necessarily in research. I think one belief that I recently updated towards but haven't fully incorporated in my decision-making is that for a non-trivial subset of EAs in prominent org positions (particularly STEM-trained, risk-neutral Americans with elite networks), counterfactuals might be more like expected E2G earnings in the mid-7 figures or so* than the low- to mid-6 figures I was previously assuming.

*to be clear, almost all of this EV is in the high upside things; very few people make 7 figures working jobby jobs.
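A toy expected-value calculation to illustrate that footnote (the 99/1 split and the dollar amounts are invented for illustration, not estimates of anyone's actual prospects):

```latex
% Hypothetical numbers: $1M total career earnings from a "jobby job"
% vs. a 1-in-100 founder-style outcome worth $400M.
\[
  \mathbb{E}[\text{earnings}]
  = 0.99 \times \$1\text{M} + 0.01 \times \$400\text{M}
  = \$4.99\text{M}
\]
% i.e. mid-7 figures in expectation, with ~80% of the EV (4.0/4.99)
% coming from the rare high-upside outcome.
```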

MichaelA @ 2021-11-19T16:52 (+2)

I agree on all points (except the nit-pick in my other comment).

A couple things I'd add:

  • I think this thread could be misread as "Should RP grow a bunch but no similar orgs be set up, or should RP grow less but other similar orgs be set up?" 
    • If that was the question, I wouldn't actually be sure what the best answer would be - I think it'd be necessary to look at the specifics, e.g. what are the other org's specific plans, who are their founders, etc.? 
    • Another tricky question would be something like "Should [specific person] join RP with an eye to helping it scale further, join some org that's not on as much of a growth trajectory and try to get it onto one, or start a new org aiming to be somewhat RP-like?" Any of those three options could be best depending on the person and on other specifics.
    • But what I'm more confident of is that, in addition to RP growing a bunch, there should also be various new things that are very/somewhat/mildly RP-like.
  • Somewhat relatedly, I'd guess that "reduced communication" and "PR" aren't the main arguments in favour of prioritising growing existing good orgs over creating new ones or growing small potentially good ones. (I'm guessing you (Linch) would agree; I'm just aiming to counter a possible inference.)
    • Other stronger arguments (in my view) include that past performance is a pretty good indicator of future performance (despite the protestation of a legion of disclaimers) and that there's substantial fixed costs to creating each new org.
    • See also this interesting comment thread.
    • But again, ultimately I do think there should be more new RP-like orgs being started (if started by fitting people with access to good advisors etc.)
MichaelA @ 2021-11-19T16:54 (+5)

One other thing I'd add to Linch's comments, adapting something I wrote in another comment in this AMA:

If anyone feels like maybe they're the right sort of person to (co-)found a new RP-like org, please feel free to reach out for advice, feedback on plans, pointers to relevant resources & people! I and various other people at RP would be excited to help it be the case that there are more EA-aligned orgs scaling rapidly & gracefully. 

Some evidence that I really am keen on this is that I've spent probably ~10 hours of my free time over the last few months helping a particular person work towards possibly setting up an RP-like org, and expect to continue helping them for at least several months. (Though that was an unusual case and I'd usually just quickly offer my highest-value input.)

Linch @ 2021-11-18T03:28 (+4)

Quick clarifying question: is 

one specialist organisation in a field

referring to RP, or more field-specific organizations like e.g. CSET or an (AFAIK, hypothetical) organization focused on answering questions on medical approaches to existential biosecurity?

Put another way, is your question asking about larger RP vs RP + several RP clones, or RP + several RP clones vs. RP + several specialist organizations? 

James Ozden @ 2021-11-18T15:27 (+3)

Thanks for the clarifying question. I meant larger RP vs RP+ several RP clones (basically new EA research orgs that do cause/intervention/strategy prioritisation). 

The case of larger RP vs RP + several specialist organisations is also interesting though - slightly analogous to the scenario of 80K and Animal Advocacy Careers. I wonder, in a hypothetical world where 80K was more focused on animal welfare, would/should they refer all animal-interested people to AAC, given AAC's greater domain expertise, or should they advise some animal people themselves, as they bring a slightly different lens to the issue? The relevant comparison might be RP and Wild Animal Initiative, for example.

Ben_West @ 2021-11-16T01:24 (+35)

Do you also feel funding constrained in the longtermist portion of your work? (Conventional wisdom is that neartermist causes are more funding constrained than longtermist ones.)

Peter Wildeford @ 2021-11-19T14:15 (+16)

Mostly yes. It definitely is the case that, given more cash, we could meaningfully accelerate our longtermism team in a way that we cannot with the cash we currently have. Thus funding is still an important constraint on scaling our work, in addition to some other important constraints.

However, I am moderately confident that, between the existing institutional funders (OpenPhil, Survival and Flourishing Fund, Long-Term Future Fund, Longview, and others), we could meet a lot of our funding request - we just haven't asked yet. But (1) it's not a guarantee that this would go well so we'd still appreciate money from other sources, (2) it would be good to add some diversity from these sources, (3) money from other sources could help us spend less time fundraising and more time accelerating our longtermism plans, (4) more funding sooner could help us expand sooner and with more certainty, and (5) it's likely we could still spend more money than these sources would give.

MichaelA @ 2021-11-19T16:30 (+14)

This comment matches my view (perhaps unsurprisingly!). 

One thing I'd add: I think Peter is basically talking about our "Longtermism Department". We also have a "Surveys and EA Movement Research Department". And I feel confident they could do a bunch of additional high-value longtermist work if given more funding. And donors could provide funding restricted to just longtermist survey projects or even just specific longtermist survey projects (either commissioning a specific project or funding a specific idea we already have).

(I feel like I should add a conflict of interest statement that I work at RP, but I guess that should be obvious enough from context! And conversely I should mention that I don't work in the survey department, haven't met them in-person, and decided of my own volition to write this comment because I really do think this seems like probably a good donation target.)

Here are some claims that feed into my conclusion:

  • Funding constraints: My impression is that that department is more funding constrained than the longtermism department
    • (To be clear, I'm not saying the longtermism department isn't at all funding constrained, nor that that single factor guarantees that it's better to fund RP's survey and EA movement research department than RP's longtermism department.)
  • Skills and comparative advantage:
    • They seem very good at designing, running, and analysing surveys
    • And I think that that work gains more from specialisation/experience/training than one might expect
    • And there aren't many people specialising for being damn good at designing, running, and/or analysing longtermism-relevant surveys
      • I think the only things I'm aware of are RP, GovAI, and maybe a few individuals (e.g., Lucius Caviola, Stefan Schubert, Vael Gates)
        • And I'd guess GovAI wouldn't scale that line of work as rapidly as RP could with funding (though I haven't asked them), and individual people are notably harder to scale...
  • There's good work to be done:
    • We have a bunch of ideas for longtermism-relevant surveys and I think some would be very valuable
      • (I say "some" because some are like rough ideas and I haven't thought in depth about all of them yet)
      • I/we could probably expand on this for potential donors if they were interested
    • I think I could come up with a bunch more exciting longtermism-relevant surveys if I spent more time doing so
    • I expect a bunch of other orgs/stakeholders could as well, at least if we gave them examples, ideas, helped them brainstorm, etc.
PeterSlattery @ 2021-11-17T20:34 (+34)

Assume you had uncapped funding to hire staff at RP from now on. In such a scenario, how many more staff would you expect RP to have in 5 years from now? How much more funding would you expect to attract? Would you sustain your level of impact per dollar? 

For instance, is it the case that you think that RP could be 2x as large in five years and do 3x as much funded work at a 1.5x current impact per dollar? Or a very different trajectory?

I ask as an attempt to gauge your perception of the potential growth of RP and this sector of EA more generally.  

Peter Wildeford @ 2021-11-19T21:52 (+13)

It’s been hard for me to make five year plans, given that we’re currently only a little less than four years old and the growth between 2018 when we started and now has already been very hard to anticipate in advance!

I do think that RP could be 2x as large in five years. I’m actually optimistic that we could double in 2-3 years!

I’m less sure about how much funded work we’d do - actually I’m not sure what you mean by funded work, do you mean work directly commissioned by stakeholders as opposed to us doing work we proactively identify?

I’m also less sure about impact per dollar. We’ve found this to be very difficult to track and quantify precisely. Perhaps as 80,000 Hours talks about “impact-adjusted career changes”, we might want to talk about “impact-adjusted decision changes” - and I’d be keen to generate more of those, even after adjusting for our growth in staff and funding. I think we’ve learned a lot more about how to unlock impact from our work and I think also there will have been more time for our past work to bear fruit.

Linch @ 2021-11-20T02:27 (+8)

One additional point I'll note is that most (though not all) of our impact comes from having a multiplier effect on the EA movement. Unlike, say, a charity distributing bednets, or an academic trying to answer ML questions in AI safety, our impact is inherently tied to the impact of EA overall. So an important way we'll have a greater impact per dollar (without making many changes ourselves) is via the movement growing a lot in quantity, quality, or both. 

Put another way, RP is trying to have a multiplier effect on the EA movement, but multiplication is less valuable than addition if the base is low.

A third way in which we rely on the EA movement (the second one is money) is that almost all of our hires come from EA, so if EA outreach to research talent dries up (or decreases in quality), we'd have a harder time finding competent hires. 

PeterSlattery @ 2021-11-20T20:09 (+1)

Thanks, that's exciting to hear! 

For funded work, I wanted to know how much funding you expect to receive to do work for stakeholders.  

abrahamrowe @ 2021-11-21T17:23 (+16)

This is a little hard to tell, because often we receive a grant to do research, and the outcomes of that research might be relevant to the funder, but also broadly relevant to the EA community when published, etc.

But in terms of just pure contracted work, in 2021 so far, we've received around $1.06M of contracted work (compared to $4.667M in donations and grants, including multi-year grants), though much of the spending of that $1.06M will be in 2022.
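For context (my arithmetic, and assuming the two figures cover the same period), that puts contracted work at a bit under a fifth of 2021 revenue to date:

```latex
\[
  \frac{\$1.06\text{M}}{\$1.06\text{M} + \$4.667\text{M}}
  \;\approx\; 18.5\%
\]
```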

In terms of expectations, I think that contracted work will likely grow as a percentage of our total revenue, but ideally we'd see growth in donations and grants too.

Zach Stein-Perlman @ 2021-11-15T22:00 (+31)

How valuable do you think your research to date has been? Which few pieces of your research to date have been highest-impact? What has surprised you or been noteworthy about the impact of your research?

Peter Wildeford @ 2021-11-19T22:03 (+3)

I think we cover this in our 2021 Impact and 2022 Strategy update!

Charles He @ 2021-11-17T02:04 (+30)

By its reputation, output, and the quality and character of management and staff, Rethink Priorities seems like an extraordinarily good EA org.

Do you have any insights that explain your success and quality, especially that might inform other organizations or founders?

Alternatively, is your success due to intrinsically high founder quality, which is harder to explain?

Linch @ 2021-11-19T11:26 (+33)

By its reputation, output, and the quality and character of management and staff, Rethink Priorities seems like an extraordinarily good EA org.

Thanks Charles for your unprompted, sincere, honest, and level-headed assessment. 

Your check will be in the mail in 3-7 business days. 

Charles He @ 2021-11-19T21:57 (+2)

Yes, thank you, kind sir.

Marcus_A_Davis @ 2021-11-19T22:15 (+16)

Thanks for the question and the kind words. However, I don’t think I can answer this without falling back somewhat on some rather generic advice. We do a lot of things that I think have contributed to where we are now, but I don’t think any of them are particularly novel:

  • We try to identify really high quality hires, bring them on, train them up and trust them to execute their jobs.
  • We seek feedback from our staff, and proactively seek to improve any processes that aren’t working.
  • We try to follow research and management best practices, and gather ideas on these fronts from organizations and leaders that have previously been successful.
  • We try to make RP a genuinely pleasant place to work for everyone on our staff.

As to your ideas about the possibility of RP’s success being high founder quality, I think Peter and I try very hard to do the best we can, but in part due to survivorship bias it’s difficult for me to say that we have any extraordinary skills others don’t possess. I’ve met many talented, intelligent, and driven people in my life, some of whom have started ventures that have been successful and others who have struggled. Ultimately, I think it’s some combination of these traits, luck, and good timing that has led us to be where we are today.

MichaelA @ 2021-11-19T17:21 (+8)

(This other comment of mine is also relevant here, i.e. if answering these questions quickly I'd say roughly what I said there. Also keen to see what other RP people say - I think these are good questions.)

Madhav Malhotra @ 2021-11-17T20:44 (+28)

What are the top 2-3 issues Rethink Priorities is facing that prevent you from achieving your goals? What are you currently doing to work on these issues?

Peter Wildeford @ 2021-11-19T22:08 (+12)

I think that to better achieve our goals, we need Rethink Priorities to be bigger and more efficient.

I think the relevant constraints for "why aren't we bigger?" are:

(1): sufficient number of talented researchers that we can hire

(2): sufficient number of useful research questions we can tackle

(3): ability to ensure each employee has a positive and productive experience (basically, people management constraints and project management constraints)

(4): ops capacity - ensuring our ops team is large enough to support the team

(5): Ops and culture throughput - giving the ops team enough time to onboard people (regardless of ops team size) and giving people enough time to adapt to the org's growth. That is, even if we were otherwise unconstrained, I still think we can't just 10x in one year because that would just feel too ludicrous

(6): proof/traction (to both ourselves and to our external stakeholders/funders) that we are on the right path and "deserve" to scale (this also just takes time)

(7): money to pay for all of the above

~

It doesn't look like (1) or (2) will constrain us anytime soon.

My guess is that (3) is our current most important constraint, but we are working on it by experimenting with directly hiring managers and by promoting people into management internally. We rolled out management training this summer and also used our internship program, in part, to train management capacity. From a project management perspective, we recently hired a manager and have rolled out Asana across the team, and we will continue to focus on the Asana processes we’ve built and make sure they are working before scaling more.

For (4), this will become a constraint from time to time, but we solve it by proactively identifying ops bottlenecks and hiring for them well in advance. So far this has gone well.

For (5), I think this will be our next biggest constraint once we solve (3). I think this is best solved just with time to let the current level of growth become normal as well as listening to staff and their concerns. We just launched our biannual staff survey and we are awaiting important staff feedback before hiring more.

For (6), I think this also comes with time and probably can be seen in combination with (5).

For (7), I do think we are funding constrained right now - we have room for more funding and definitely need to get money from somewhere in order to continue our work. I’m optimistic that we can get money from our current institutional sources because we haven’t tried too recently to ask them for money and I think they still like us and want us to continue to succeed. But, as I’ve mentioned elsewhere, we’d still like other people to support our work to enable us to diversify our funding sources, give us more flexible unrestricted funding that is 1.5x-2x as valuable per dollar to us, and build in more sustainability/flexibility in the face of idiosyncratic risk.

Sorry that was seven things instead of 2-3, but I think it helps to communicate the full picture.

Madhav Malhotra @ 2021-11-21T23:09 (+1)

This is very well-communicated! Thank you for taking the time to type all that out and label the responses :-)

Regarding (3) - making each employee happy and productive

Are there any examples of organisations that you aspire to model RP's practices after? Ie. Exemplars of how to "be bigger and more efficient" while making each employee happy and productive?

*I ask because I'd love to learn about real-life management cultures/tools to grow my skillset :-)

Janique @ 2021-12-08T14:13 (+1)

I've seen Peter, our Co-CEO, highlight Netflix culture as something that inspired him: https://jobs.netflix.com/culture
 

Peter Wildeford @ 2021-12-09T03:10 (+5)

I'd clarify that I was inspired by that particular document - especially the large employee ownership - but I'm much less inspired by the culture at Netflix as it is actually practiced, based on what I hear from some employees.

Nathan Young @ 2021-11-16T14:45 (+28)

What lessons would you pass onto other EA orgs from running an internship program?

Domi_Krupocin @ 2021-11-19T19:20 (+11)

Thanks so much for this question!

We have learned a lot during our Fellowship/Internship Program. Several main considerations come to mind when thinking about running a fellowship/internship program.

  • Managers’ capacity and preparedness – hosting a fellow/intern may be a rewarding experience. However, working with fellows/interns is also time-consuming. It seems to be important to keep in mind that managers may need to have a dedicated portion of time to:
    • Prepare for their fellows/interns’ arrival, which may include drafting a work plan, thinking about goals for their supervisees, and establishing a plan B, in case something unexpected comes up (for example, data is delayed, and the analysis cannot take place)
    • Explain tasks/projects, help set goals, and brainstorm ideas on how to achieve these goals
    • Regularly meet with their fellows/interns to check in, monitor progress, as well as provide feedback and overall support/guidance throughout the program
    • Help fellows/interns socialize and interact with others to make them feel included, welcomed, and a part of the team/organization.
  • Operations team capacity and preparedness – there are many different tasks associated with each stage of the fellowship/internship program. It’s crucial to ensure that the Operations Team has enough capacity and time to hire, onboard, support, and offboard fellows/interns, especially when the program is open to candidates worldwide. For example, we work with an international employment organization that acts as a proxy employer in each of the countries our staff and fellows/interns are based in. It's important to take into account the amount of coordination needed between the international employment organization, internal staff, and fellows/interns (the amount will vary significantly between adding 2-3 vs. 10 fellows/interns to the team).
  • Internal processes – capacity is one thing, but having strong internal processes developed beforehand appears to be equally vital. This refers to hiring and candidate selection procedures, establishing reasonable timelines, setting up check-in structures with both fellows/interns and managers, as well as organizing relevant professional development and social opportunities.
  • Hiring internationally and remotely – it may be worth considering where most of the team members are located. If most of the staff are in US time zones, then it may make sense to think about how that could affect candidates from completely different time zones (e.g., Australia and Oceania). Will they be able to communicate with their managers easily? Will they have enough opportunities to interact with other fellows/interns and colleagues?

In summary, any fellowship and internship program may be truly beneficial to the organization running it. Most importantly, however, the questions are how to make the program beneficial to fellows/interns, and how it will impact their future education paths and careers.


 

MichaelA @ 2021-11-20T09:26 (+10)

Two things I'd add to the above answer (which I agree with):

  • RP surveyed both interns and their managers at the end of the program, which provided a bunch of useful takeaways for future internships. (Many of which are detailed or idiosyncratic and so will be useful to us but aren't in the above reply.) I'd say other internship programs should do the same.
    • I'd personally also suggest surveying the interns and maybe managers at the start of the internship to get a "baseline" measure of things like interns' clarity on their career plans and managers' perceived management skills, then asking similar questions at the end, so that you can later see how much the internship program benefitted those things. Of course this should be tailored to the goals of a particular program.
  • What lessons we should pass on to other orgs / research training programs will vary based on the type of org, type of program, cause area focus, and various other details. If someone is actually running or seriously considering running a relevant program and would be interested in lessons from RP's experience, I'd suggest they reach out! I'd be happy to chat, and I imagine other RP people might too.
MichaelA @ 2021-11-19T17:18 (+3)

Good question! Please enjoy me not answering it and instead lightly adapting an email I sent to someone who was interested in running an EA-aligned research training program, since you or people interested in your question might find this a bit useful. (Hopefully someone else from RP will more directly answer the question.)

"Cool that you're interested in doing this kind of project :)

I'd encourage you to join the EA Research Training Program Slack workspace and share your plans and key uncertainties there to get input from other people who are organizing or hoping to organize research training programs. [This is open only to people organizing or seriously considering organizing such programs; readers should message me if they'd like a link.]

You could also perhaps look for people who've introduced themselves there and who it might be especially useful to talk to.

Resources from one of the pinned posts in that Slack:

  • See here for a brief discussion of why I use the term "research training programs" and roughly what I see as being in-scope for that term. (But I'm open to alternative terms or scopes.)
  • See here for a collection of EA Forum posts relevant to research training programs.
  • See here for a spreadsheet listing all EA-aligned research training programs I'm aware of.

You might also find these things useful: 

I'd also encourage you to seriously consider applying for funding, doing so sooner than you might by default, and maybe even applying for a small amount of funding to pay for your time further planning this stuff (if that'd be helpful). Basically, I think people underestimate the extent to which EA Funds are ok with unpolished applications, with discussing and advising on ideas with applicants after the application is submitted, and with providing "planning grants". (I haven't read anything about your plans and so am not saying I'm confident you'll get funding, but applying is very often worthwhile in expectation.) More info here:

[...] ...caveat to all of that is that I know very little about your specific plans - this is basically all just the stuff I think it's generically worth mentioning to people in EA interested in running research training programs.

Best of luck with the planning, and feel free to send through specific questions where I could perhaps be useful :) 

Best,

Michael"

Zach Stein-Perlman @ 2021-11-15T22:00 (+28)

Why do you have the distribution of focus on health/development vs animals vs longtermism vs meta-stuff that you do? How do you feel about it? What might make you change this distribution, or add or remove priority areas?

Marcus_A_Davis @ 2021-11-19T22:13 (+8)

Thanks for the question! I think describing the current state will hint at a lot about what might make us change the distribution, so I’m primarily going to focus on that.

I think the current distribution of what we work on is dependent on a number of factors, including but not limited to:

  1. What we think about research opportunities in each space
  2. What we think about the opportunity to exert meaningful influence in the space
  3. Funding opportunities
  4. Our ability to hire people

In a sense, I think we’re cause neutral in that we’d be happy to work on any cause provided good opportunities arise to do so. We do have opinions on high-level cause prioritization (though I know there’s some disagreement inside RP about this topic), but given the changing marginal value of additional work in any given area, the above considerations, and others, we meld our work (and staff) to where we think we can have the highest impact.

In general, though this is fairly generic and high level, were we to come to think our work in a given area wasn’t useful, or the opportunity cost were too high to continue to work on it, we would decide to pursue other things. Similarly, if the reverse was true for some particular possible projects we weren’t working on, we would take them on.

Zach Stein-Perlman @ 2021-11-20T00:00 (+4)

Thanks for your reply. I think (1) and (2) are doing a ton of work — they largely determine whether expected marginal research is astronomically important or not. So I'll ask a more pointed follow-up:

Why does RP think it has reason to spend significant resources on both shorttermist and longtermist issues (or is this misleading; e.g., do all of your unrestricted funds go to just one)? What are your "opinions on high level cause prioritization" and the "disagreement inside RP about this topic"? What would make RP focus more exclusively on either short-term or long-term issues?

MichaelA @ 2021-11-20T09:48 (+8)

[This is not at all an organizational view; just some thoughts from me]

tl;dr: I think mostly RP is able to grow in multiple areas at once without there being strong tradeoffs between them (for reasons including that RP is good at scaling & that the pools of funding and talent for each cause area are somewhat different). And I'm glad it's done so, since I'd guess that may have contributed to RP starting and scaling up the longtermism department (even though naively I'd now prefer RP be more longtermist).

I think RP is unusually good at scaling, at being a modular collection of somewhat disconnected departments focusing on quite different things and each growing and doing great stuff, and at meeting the specific needs of actors making big decisions (especially EA funders; note that RP also does well at other kinds of work, but this type of work is where RP seems most unusual in EA). 

Given that, it could well make sense for RP to be somewhat agnostic between the major EA causes, since it can meet major needs in each, and adding each department doesn't very strongly trade off against expanding other departments. 

(I'd guess there's at least some tradeoff, but it's possible there's none or that it's on-net complementary; e.g. there are some cases where people liking our work in one area helped us get funding or hires for another area, and having lots of staff with many areas of expertise in the same org can be useful for getting feedback etc. One thing to bear in mind here is that, as noted elsewhere in this AMA, there's a lot of funding and "junior talent" theoretically available in EA and RP seems unusually good at combining these things to produce solid outputs.)

I would personally like RP to focus much more exclusively on longtermism. And sometimes I feel a vague pull to advocate for that. But RP's more cause-neutral, partly demand-driven approach has worked out very well from my perspective so far, in that it may have contributed to RP moving into longtermism and then scaling up that team substantially.[1] (I mean that from my perspective this is very good for the world, not just that it let me get a cool job.) So I think I should endorse that overall decision procedure.

This feels kind-of related to moral trade and maybe kind-of to the veil of ignorance.

That's not to say that I think we shouldn't think at all about what areas are really most important in general, what's most important on the current margin within EA, where our comparative advantage is, etc. I know we think at least somewhat about those things (though I'm mostly involved in decisions about the longtermism department rather than broader org strategy so I don't bother trying to learn the details). But I think maybe the tradeoffs between growing each area are smaller than one might guess from the outside, such that that sort of high-level internal cause area priority-setting is somewhat less important than one might've guessed.

This doesn't really directly answer your question, since I think Peter and Marcus are better placed to do so and since I've already written a lot on this semi-tangent...

[1] My understanding (I only joined in late 2020) is that for a brief period at its very beginning, RP had no longtermist work (I think it was just global health & dev and animals?). Later, it had longtermism as just a small fraction of its work (1 researcher). RP only made multiple hires in this area in late 2020, after already having had substantial successes in other areas. At that point, it would've been unsurprising if people at the org thought they should just go all-in on their existing areas rather than branching out into longtermism. But they instead kept adding additional areas, including longtermism. And now the longtermism team is likely to expand quite substantially, which again might not have happened if the org had focused more exclusively on its initial main focus areas. 

kyle_fish @ 2021-11-15T17:13 (+27)

What is your process for identifying and prioritizing new research questions? And what percentage of your work is going toward internal top priorities vs. commissioned projects?

MichaelA @ 2021-11-19T17:30 (+15)

[This is like commentary on your second question, not a direct answer; I'll let someone else at RP provide that.]

Small point: I personally find it useful to make the following three-part distinction, rather than your two-part distinction:

  • Academia-like: Projects that we think would be valuable although we don't have a very explicit theory of change tied to specific (types of) decisions by specific (types of) actors; more like "This question/topic seems probably important somehow, and more clarity on it would probably somehow inform various important decisions."
    • E.g., the sort of work Nick Bostrom does
  • Think-tank-like: Projects that we think would be valuable based on pretty explicit theories of change, ideally informed by actually talking to a bunch of relevant decision-makers to get a sense of what their needs and confusions are.
  • Consultancy-like: Projects that one specific stakeholder (or I guess maybe one group of coordinated stakeholders) have explicitly requested we do (usually but not necessarily also paying the researchers to do it).

I think RP, the EA community, and the world at large should very obviously have substantial amounts of each of those three types of projects / theory of change.

I think RP specialises mostly for the latter two models, whereas (for example) FHI specialises more for the first model and sometimes the second. (But again, I'll let someone else at RP say more about specific percentages and rationales.)

(See also my slides on Theory of Change in Research, esp. slide 17.)

James Smith @ 2021-11-15T21:54 (+26)

Is there any particular reason why biosecurity isn't a major focus? As far as I can see from the list, no staff work on it, which surprises me a little. 

Linch @ 2021-11-19T22:38 (+16)

The short answer is that a) none of our past hires in longtermism (including management) had substantive biosecurity experience or biosecurity interest and b) no major stakeholder has asked us to look into biosecurity issues.

The extended answer is pretty complicated. I will first go into why generalist EA orgs or generalist independent researchers may find it hard to go into biosecurity, explain why I think those reasons aren't as applicable to RP, and then why we haven't gone into biosecurity anyway.

Why generalist EA orgs or generalist independent researchers may find it hard to go into biosecurity

My personal impression is that EA/existential biosecurity experts currently believe that it's very easy for newcomers in the field to do more harm than good, especially if they do not have senior supervision from someone in the field. This is because existential biosecurity in particular is rife with information hazards, and individual unilateral actions can invoke the unilateralist's curse.

Further, all the senior biosecurity people are very busy, and are not really willing to take the chance with someone new unless they a) have experience (usually academic) in adjacent fields or b) are credibly committed to do biosecurity work for a long period of time if they're a good fit. 

Since most promising candidates are understandably not excited to commit to doing biosecurity work for a long period of time without doing some work on it first, this creates a chicken-and-egg problem.

(Note again this is my own impression. Feel free to correct me, any biosecurity experts reading this!)

Why RP in particular may be a good place to start a biosecurity career anyway

I think the major groups trust RP, institutionally, to be careful if we were to wade into biosecurity. In particular, we would be careful not to publish things we think are potentially dangerous without running them by a few more experienced people first, and we are credibly willing to take things down quickly if we get a "cease and desist" from more experienced parties (and then carefully reassess offline whether that was the correct move).

On an individual level, I have a number of contacts among the key biosecurity people in EA, both through covid forecasting before joining RP and socially. In addition, I believe I can credibly pull off "non-expert making useful and not-dangerous contributions to biosecurity," as my covid forecasting and cultured meat analysis experiences have at least somewhat demonstrated an ability to provide value by reading, disseminating, and evaluating fairly technical work in adjacent domains (as a non-expert).

So I'd maybe be excited to do biosecurity projects within my range of capabilities if stakeholders reached out to us with sufficiently important/interesting projects, or (more plausibly) to advise colleagues/interns/contractors who can provide the technical expertise while I provide the less technical guidance.

Why we haven't gone into biosecurity anyway

As you may have already inferred, the biggest reason* is that none of our hires have had biosecurity experience or even strong interest. This is another chicken-and-egg problem: we haven't done biosecurity work because we don't have strong biosecurity hires, but we don't have strong (enough) biosecurity candidates applying because they don't see us doing biosecurity work.

One of my planned ways around this was trying to get a biosecurity intern last summer, in the hopes that having public outputs in biosecurity by an intern would be a smooth way for us both to scale up our institutional biosecurity knowledge and to demonstrate our interest in this arena. The idea is that interns with the relevant backgrounds (e.g., math bio or epidemiology) provide the technical background while RP complements their skillsets with the relevant EA contacts, discretion, and analytical ability.

I did try nontrivially hard to make this happen smoothly. I asked some promising biosecurity people to apply. I got verbal agreement from some FHI bio people to co-advise our biosecurity-interested interns if we had any. And some of the questions in our (blinded) intern assessment process should have been differentially easier for people with bio backgrounds.

But ultimately our strongest intern candidates last round neither had the relevant academic backgrounds nor were particularly interested in biosecurity. 

Next steps

RP's longtermism team is currently going through a hiring round. It seems plausible we might have a strong biosecurity hire this round, in which case they'd lead our future biosecurity efforts in 2022 and this discussion would be moot.

It also seems plausible to me, if unlikely (~20% in the next 6 months?), that we end up prioritizing biosecurity even without a strong biosecurity hire, whether due to internal cause prioritization or external stakeholder requests.

At any rate, if you or others reading this want to support future RP biosecurity efforts, the best way to do so is to encourage strong biosecurity people you know to apply in future rounds! Funder interest is also helpful, but substantially less so.

*We also have internal disagreements about whether it makes sense for us to be more proactive about doing biosecurity work, given that a) we're already spread pretty thin across many projects, b) focus is often good, and c) we internally disagree about how important marginal biosecurity work by people without technical expertise is anyway. I'm just presenting my own view.

MichaelA @ 2021-11-20T10:35 (+4)

That all sounds basically right to me, except that my impression is that the cruxes in our internal (mild) disagreements about this are just "a) we're already spread pretty thin across many projects" and "b) focus is often good", and not "c) we internally disagree about how important marginal biosecurity work by people without technical expertise is anyway".

Or at least, I personally see (a) and (b) as some of the strongest arguments against us doing biosecurity work. I'm roughly agnostic on (c), but I'd guess there are some high-value things RP could do even if we lack technical backgrounds, and if a more senior biosecurity person said they really wanted us to do some project, I'd probably guess they're right that we could be very useful on it.

(And to be clear, my bottom line would still be pretty similar to Linch's: if we hire someone who seems a strong fit for biosecurity work and especially interested in it, and some senior people in the area seem excited about us doing something there, I'd be very open to that.)

Nathan Young @ 2021-11-16T14:45 (+24)

What is your comparative advantage?

Linch @ 2021-11-20T03:27 (+9)

As much as I like to imagine it's my own work (in longtermism), I think the clearest institutional comparative advantage of RP relative to the rest of the EA movement is the quality of our animal-welfare-focused research. To the best of my knowledge, if you want to do research that directly improves the welfare of many animals, and you don't have a long-chain theory/plan of impact (e.g., by shifting norms in academia or holding an influential governmental position), RP's the best place to do it. This is just my impression, but my guess is that it's broadly shared among animal-focused EAs.

The main exception I can think of is Open Phil, but they're not hiring.

I also get the impression that our survey team is very good, probably the best in EA, but I have less of an inside view here than for the animal welfare research. 

Our longtermism and global health work are comparatively more junior and less proven, in addition to having fairly stiff competition.

Peter Wildeford @ 2021-11-19T22:54 (+5)

Research, especially EA-aligned research based on an explicit theory of change.

MichaelA @ 2021-11-20T09:50 (+2)

I'd also note things about scaling (as mentioned elsewhere in the AMA).

NunoSempere @ 2021-11-17T19:38 (+4)

Asked differently, why are you so cool, both at the RP level and personally?

Nathan Young @ 2021-11-18T00:18 (+2)

That's very kind of you to say Nuno.

NunoSempere @ 2021-11-18T10:58 (+4)

Surprising, I know

Madhav Malhotra @ 2021-11-17T20:50 (+23)

What have you been intentional about prioritising in the workplace culture at Rethink Priorities? If you focus on making it a great place for people to work, how do you do that? 

Domi_Krupocin @ 2021-11-19T21:24 (+13)

This is a great question! Thank you so much!

At Rethink Priorities we take an employee-focused approach. We do our best to ensure that our staff have the relevant tools and resources to do their best work, while also having enough flexibility to maintain their work-life balance. Staff happiness is a high priority for us and one of our strategic goals.

Some aspects of our employee-centered approach include:

  • Competitive benefits and perks – we offer unlimited time off, a flexible work schedule, professional development opportunities, stipends, etc., which are available to full- and part-time staff, as well as our fellows/interns.
  • Opportunities to socialize, make decisions, and take on new projects – for example, we have monthly social meetings, we run random polls to solicit opinions/ideas from staff, and create opportunities for employees to participate in various initiatives, like leading a workshop.
  • Biannual all-staff surveys – we collect feedback from our staff twice a year. The survey asks a series of questions about leadership, management, organizational culture, benefits and compensation, and psychological safety, among other topics. The results are thoroughly analyzed and guide our decisions about how to improve our culture going forward.
  • Positive environment – we foster an inclusive and welcoming environment in which we encourage individuals to pose questions, provide feedback, share thoughts, and raise concerns; additionally, we practice transparency at RP with regard to all aspects of our operations (e.g., decision-making, salary).
  • Internal processes – we continuously revise and/or develop internal processes and practices to ensure equity across the entire organization (e.g. we have recently audited our hiring procedures to increase equity and reduce bias when selecting candidates). 
  • Reflection – we reflect on how we do our work, how we interact with one another, and what culture we aspire to develop, and we implement necessary changes.

Madhav Malhotra @ 2021-11-21T23:22 (+1)

I really appreciate your structured response :-) Would you happen to have any documents about the actionable steps behind each of these? Like this handbook at Valve? :D

*I ask because I'd be curious to learn about the actionable tips that others can replicate from your experience :-)

Peter Wildeford @ 2021-11-19T22:55 (+12)

We’re working right now on a values- and culture-setting exercise where we are intentionally figuring out what we like about our culture and what we specifically want to keep. I appreciate Dominika's comment, but I want to add a bit more of what is coming out of this exercise (though it isn't finished yet).

Four things I think are important about our culture that I like and try to intentionally cultivate:

Work-life balance and sustainability in our work. Lots of our problems are important and very pressing, and it is easy to burn yourself out working hard on them. We have deliberately tried to design our culture for sustainability. Sure, you might get some more hours of work this year if you work harder, but it isn’t worth burning out just a few years later. We want our researchers here for the long haul. We’re invested in their long-term productivity.

Rigor and calibration. It’s very easy to do research poorly, and unfortunately bad research can mislead people precisely because it is hard to see how the research is bad. Our researchers therefore put a lot of work into ensuring that our output is accurate and useful.

Ownership. In a lot of organizations, managers want their employees to do exactly what they are told and follow processes to the letter. At Rethink Priorities, we think the ideal employee instead seeks to understand the motivation behind the assignment and how it fits into our goals and notices if there is a better way to achieve the same goals or even if the project shouldn’t be done.

Working on the right things. There are a lot of problems that we need to solve, so we must prioritize them. Selecting the right research question can often be more impactful than answering it.

We'll have something more finished at a later date!

Madhav Malhotra @ 2021-11-21T23:25 (+2)

Your work-life balance and ownership points remind me of the culture at Valve!

Here are some notes I took on their culture if you'd be interested in ideas to implement. The points highlighted in orange are the actionable items :-)

James Smith @ 2021-11-15T21:56 (+23)

What kinds of research questions do you think are better answered in an organisation like RP vs. in academia, and vice versa? 

David_Moss @ 2021-11-19T18:53 (+16)

One major factor that makes some research questions more suited to academia is that they require technical or logistical resources that would be hard to access or deploy in a generalist EA org like RP (some specialist expertise also sometimes falls into this category). Much WAW research is like this, in that I don't think it makes sense for RP to be trying to run large-scale ecological field studies.

Another major factor is if you want to promote wider field-building or you want the research to be persuasive as advocacy to certain audiences in the way that sometimes only academic research can. This also applies to much WAW research.

Personally, I think in most other cases academia is typically not the best venue for EA research, although the considerations above about field-building and the prestige/persuasiveness of academic research come up often enough that the question of whether a given project is worth publishing academically arises fairly regularly even within RP.

James Smith @ 2021-11-23T10:59 (+1)

Thanks a lot for the response - can I just ask what WAW stands for? Google is only showing me writing about writing, which doesn't seem likely to be it...

And how often does RP decide to go ahead with publishing in academia?

David_Moss @ 2021-11-23T13:29 (+6)

can I just ask what WAW stands for? Google is only showing me writing about writing, which doesn't seem likely to be it...

"WAW" = Wild Animal Welfare (previously often referred to as "WAS", for Wild Animal Suffering).

And how often does RP decide to go ahead with publishing in academia?

I'd say a small minority of our projects (<10%).

PeterSlattery @ 2021-11-17T20:39 (+22)

Are there any ways that the EA community can help RP that we might not be aware of? Or any that we do already that you would like more of?  

Linch @ 2021-11-20T01:07 (+12)

Commenting on our public output, particularly if you have specialized technical expertise, can be anywhere from mildly to really helpful. RP has a lot of knowledge, but so do the rest of the EA community and the extended EA network, so if you can route our reports to the relevant connections, this can be really valuable in improving the quality of our reasoning and epistemics.

Janique @ 2021-11-19T15:07 (+10)

One thing the EA community can help us with is encouraging suitable candidates to apply to our jobs. (New ones will be posted here and announced in our newsletter.) Some of our most recent hires have transitioned from fields which, at first sight, would seem unlikely to produce typical applicants. But we're open to anyone proving to us they can do the job during the application process (we do blinded skills assessments). I think we're really not credentialist (i.e., we don't care much about formal degrees if people have gained the skills that we're looking for). So whenever you read a job ad and think "Oh, this friend could actually do that job!", do tell them to apply if they're interested.

More importantly, I think EA community builders in all geographies and fields can greatly help us by training people to become good at the type of reasoning that's important in EA jobs. I particularly think of reasoning transparency: expressing degrees of (un)certainty and clarifying the epistemic status of what you write. Probabilistic thinking and Bayesian updating matter too, as does learning to build models and getting familiar with tools like Guesstimate and Causal. Forecasting also seems to be a valuable skill to train (e.g., on Metaculus). I think EAs anywhere in the world can set up groups where people train such skills together.
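
To make "Bayesian updating" concrete, here is a minimal sketch in Python of the kind of exercise such a group might practice. The scenario and all the numbers are invented for illustration:

```python
# A minimal Bayesian update: how much should a belief move on new evidence?
# The prior and likelihoods below are made-up illustrations.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Suppose you give a project a 20% chance of succeeding, and you then see a
# pilot result that is 3x more likely if the project is on track (60% vs 20%).
posterior = bayes_update(prior=0.20, p_evidence_if_true=0.60, p_evidence_if_false=0.20)
print(f"Updated probability: {posterior:.0%}")  # -> 43%
```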

MichaelA @ 2021-11-19T17:52 (+8)

I like this answer. 

Some additional possible ideas:

  • Letting us know about or connecting us to stakeholders who could use our work to make better decisions
    • E.g., philanthropists, policy makers, policy advisers, or think tanks who could make better funding, policy, or research decisions if guided by our published work, by conversations with our researchers, or by future work we might do (partly in light of learning that it could have this additional path to impact)
  • Letting us know if you have areas of expertise that are relevant to our work and you'd be willing to review draft reports and/or have conversations with us
  • Letting us know about or connecting us to actors who could likewise provide us with feedback, advice, etc. 
  • Letting us know if there are projects you think it might be very valuable for us to do
    • We (at least the longtermism department) are already drowning in good project ideas and lacking capacity to do them all, but I think it costs little to hear an additional idea, and it's plausible some would be better than our existing ideas or could be nicely merged with them. 
  • Testing & building fit for research management
  • Testing & building fit for ops roles
  • Donating

(In all cases, I mean either doing this thing yourself or encouraging other people to do so.)

Madhav Malhotra @ 2021-11-17T20:49 (+20)

To any staff brave enough to answer :D 

You're fired tomorrow and replaced by someone more effective than you. What do they do that you're not doing?

MichaelA @ 2021-11-19T18:07 (+18)

I recently spent ~2 hours reflecting on RP's longtermism department's wins, mistakes, and lessons learned from our first year[1] and possible visions for 2022. I'll lightly adapt the "lessons learned for Michael specifically" part of that into a comment here, since it seems relevant to what you're trying to get at here; I guess a more effective person in my role would match my current strengths but also already be nailing all the following things. (I guess hopefully within a year I'll ~match that description myself.)

(Bear in mind that this wasn't originally written for public consumption, and it skips over my "wins", etc.)

  • "Focus more
    • Concrete implications:
      • Probably leave FHI (or effectively scale down to 0-0.1 FTE) and turn down EA Infrastructure Fund guest manager extension (if offered it)
      • Say no to side things more often
      • Start fewer posts, or abandon more posts faster so I can get other ones done
      • Do 80/20 versions of stuff more often
      • Work on getting more efficient at e.g. reviewing docs
    • Reasons:
      • To more consistently finish things and to higher standards (rather than having a higher number of unfinished or lower quality things)
      • And to mitigate possible stress on my part, [personal thing], and to leave more room for things like exercise
      • And to be more robust against personal life stuff or whatever
        • (I mean something like: My current slow-ish progress on my main tasks is even with working parts of each weekend, so if e.g. I had to suddenly fly back to Australia because a family member was seriously ill, I’d end up dropping various balls I’ve somewhat committed to not dropping.)
  • Maybe trust my initial excitement less regarding what projects/posts to pour time into and what ideas to promote, and relatedly put more effort into learning from the views and thinking of more senior people with good judgement and domain expertise 
    • E.g., focus decently hard on making the AI gov stuff go well, since that involves doing stuff Luke thinks is useful and learning from Luke
    • E.g., it was good that I didn't bother to finish and post my research question database proposal
  • Maybe pay more attention to scale and relatedly to whether an important decision-maker is likely to actually act on this
    • Some people really do have a good chance of acting in very big ways on some stuff I could do
    • But by default I might not factor that into my decisions enough, instead just being helpful to whoever is in front of me or pursuing whatever ideas seem good to me and maybe would get karma
  • Implement standard productivity advice more, or at least try it out
    • I’ll break this down more in the habits part of my template for meetings with Peter
    • [I'm also now trying productivity coaching]
  • Spend less time planning projects in detail, and be more aware things will change in unexpected ways
  • Be more realistic when making plans, predictions, and timelines
    • (No, really)
    • Including assuming management will take more time than expected, at least given how I currently do it
  • Spend more time, and get better at, forming and expressing hot takes
  • Spend less time/words comprehensively listing ideas/considerations/whatever
  • More often organise posts/docs conceptually or at least by importance rather than alphabetically or not at all
  • Be more strict with myself regarding exercise and bedtime
  • Indeed optimise a fair bit for research management careers rather than pure research careers
    • This was already my guess when I joined, but I’ve become more confident about it"

[1] I mean the first year of the current version of RP's longtermism department; Luisa Rodriguez previously did (very cool!) longtermism work at RP, but then there was a gap between her leaving (as a staff member; she's now on the board) and the current staff joining. 

Madhav Malhotra @ 2021-11-21T23:32 (+1)

Thank you for being vulnerable enough to share this! 

It sounds like you're focusing a lot on working on the right things (and by extension, fewer things)? And then becoming more efficient at the underlying skills (e.g., explaining, writing, etc.) involved?

MichaelA @ 2021-11-22T08:51 (+3)

Yeah, though I’m also aiming to work on fewer things as “a goal in itself”, not just as a byproduct of slicing off the things that are less important or less my comparative advantage. This is because more focus seems useful in order to become really excellent at a set of things, ensure I more regularly actually finish things, and reduce the inefficiencies caused by frequent task/context-switching.

Linch @ 2021-11-20T00:36 (+13)

Some ways someone can be more effective than me:

  • I'm not as aggressive at problem/question/cause prioritization as I could be. I can see improvements of 50-500% for someone who's (humanly) better at this than me.
  • I'm not great at day-to-day time management either. I can see ~100% improvement in that regard if somebody is very good at this.
  • I find it psychologically very hard to do real work for >30h/week, so somebody with my exact skillset but who could productively work for >40h/week without diminishing returns would be >33% more valuable.
  • I pride myself on the speed and quantity of my writing, but I'm slower than e.g. MichaelA, and I think it's very plausible that a lot of my outputs are still bottlenecked by writing speed. A 10-50% effectiveness improvement seems about right.
  • I don't have perfect mental health and I'm sometimes emotional. (I do think I'm above average at both). I can see improvements of 5-25% for people who don't have these issues.
  • I'm good at math* but not stellar at it. I can imagine someone who's e.g. a Putnam Fellow being 3-25% more effective than me if they chose to work on the same problems I work on (though plausibly they'd be more effective because they'd gravitate towards much mathier problems; on the other hand, of course, not all or even most mathy problems are very important).
  • Relatedly, obviously I'm not the smartest person in the world. I don't have a good sense of how much e.g. being half a standard deviation smarter than me would make someone a better researcher; anything from "not a lot" to "very high" seems plausible to me. ??? for quantitatively how much effectiveness this adds.

*Concretely, I did a math major in a non-elite liberal arts college, which wasn't too hard for me. I perceived both my interns last summer as probably noticeably better at math than me (one was a math major at Columbia and the other at MIT). Certainly they know way more math.

Madhav Malhotra @ 2021-11-21T23:37 (+1)

Thank you for the specific estimates and the wide variety of factors you considered :-) It may be that @MichaelA is also working primarily on improving cause prioritisation. I guess maybe you've both discussed that :D

Jason Schukraft @ 2021-11-19T16:22 (+13)

The person who replaces me has all my same skills but in addition has many connections to policymakers, more management experience, and stronger quantitative abilities than I do.

Holly_Elmore @ 2021-11-19T18:33 (+11)

I've adjusted imperfectly to working from home, so anyone who has that strength in addition to my strengths would be better. I wish I knew more about forecasting and modeling, too.

Linch @ 2021-11-19T11:38 (+7)

(less helpful answer, will think of a better one later) 

Hmm, Rethink follows pretty reasonable management practices, and is maybe on the conservative side for things like firing unproductive employees.

You're fired tomorrow...

So I can't really imagine being fired for ineffectiveness without warning on a Saturday. The only way this really happens is if I'm credibly accused of committing a pretty large crime, sexually harassing an RP colleague, or maybe faking data or something like that.

To the best of my knowledge I have not done these things.

...and replaced by someone more effective than you. What do they do that you're not doing?

Hmm, since I haven't done these things, I must be set up to be falsely accused of a crime in a credible way. So the most likely way someone can replace me and be more effective on this dimension is by not making any enemies motivated enough to want to set them up for murder or something.

Linch @ 2021-11-18T03:24 (+2)

Quick clarifying question: 

Is the most important part of your question the "fired" part or the "more effective" part? Like, would you rather I a) answer by generating stories of how I might be fired and how somebody can avoid that, or b) answer what people can do to be more effective than me?

Madhav Malhotra @ 2021-11-18T12:01 (+1)

Part b) is more important. Part a) is just to make the question more real to the person answering.

PeterSlattery @ 2021-11-17T20:09 (+18)

Are there any skills and/or content expertise that you expect to particularly want from future hires? Put differently, is there anything that you think aspiring hires might want to start working on to be better suited to join/support RP over the next few years?

Linch @ 2021-11-19T12:55 (+5)

Put differently, is there anything that you think aspiring hires might want to start working on to be better suited to join/support RP over the next few years?

I'll let my colleagues answer the object-level question (I might answer it myself if I come up with better ideas later), but broadly I would somewhat caution against having a multi-year plan to be employed at Rethink Priorities specifically (or at any specific organization). RP hiring is pretty competitive now and has gotten more competitive over time[1], and our hiring processes are far from perfect, so even very good researchers (by our lights) may well be missed.

That said, some of the answers to James Ozden's question might be relevant here as well. 

[1] We're also scaling pretty quickly to hire more people, but EA community building/recruitment at top universities has also really scaled up since 2020, and it's unclear how these things will shake out in terms of how competitive our hiring rounds will be in a few years.

MichaelA @ 2021-11-19T18:12 (+11)

I agree, but would want to clarify that many people should still apply and very many people should at least consider applying. It's just that people shouldn't optimise very strongly for getting hired by one specific institution that's smaller than, say, "the US government" (which, for now, we are 😭).

Linch @ 2021-11-19T21:11 (+5)

Thanks for the clarification! Definitely encourage people to apply.

We've also moved paid work trials earlier and earlier in the process, so hopefully applying is not a financial hardship for people.

PeterSlattery @ 2021-11-17T19:59 (+18)

What percentage of your work/funding comes from non-EA aligned sources? 

Linch @ 2021-11-20T01:43 (+13)

I once told people in a programmer group chat what I was doing when I got my new job at RP. One of them looked at the website and gave like a $10 donation.

To the best of my limited knowledge, this might well be our largest non-EA aligned donation in longtermism. 

abrahamrowe @ 2021-11-19T13:26 (+10)

It's a little hard to say because we don't necessarily know the background/interests of all donors, but my current guess is around 2%-5% in 2021 so far. It's varied by year (we've received big grants from non-EA sources in the past). That funding is almost always given to support animal welfare research (or unrestricted, but from a group motivated to support us due to our animal welfare research).

One tricky part of separating this out: there are a lot of people in the animal welfare community who are interested in impact (in an EA sense) but maybe not interested in non-animal EA things.

Linch @ 2021-11-18T02:25 (+15)

Minor nit:

You can see all of our work to date here.

should be 

You can see all of our completed public work to date here.

As discussed in this comment thread (by you :P), an increasingly high percentage of our work is targeted towards specific decision-makers, and whether we choose to publish depends on a combination of researcher interest, decision-maker priorities, and the object-level content of the research.

David_Moss @ 2021-11-19T18:32 (+15)

I'm particularly glad you note this, since the survey team's research is almost exclusively non-public (basically the EA Survey and EA Groups Survey are the only projects we publish on the Forum), so people understandably get a very skewed impression of what we do.

James Ozden @ 2021-11-20T00:00 (+4)

If you can share, what are some other projects or research that the survey team works on? If you can't give specifics, it would be useful to know broadly what they relate to. I'm intrigued by the mystery!

David_Moss @ 2021-11-20T14:44 (+14)

Thanks for asking. We've run around 30 survey projects since we were founded. When I calculated this in June, we'd run a distinct survey project (each containing between 1 and 7 surveys), on average, every 6 weeks.

Most of the projects aren't exactly top secret, but I err on the side of not mentioning the details or who we've worked with unless I'm certain the orgs in question are OK with it. Some of the projects, though, have been mentioned publicly but not published: for example, CEA mentioned in their Q1 update that we ran some surveys for them to estimate how many US college students have heard of EA.

An illustrative example of the kind of project a lot of these are would be an org approaching us, saying they are considering doing some outreach (this could be for any cause area), and wanting us to run a study (or studies) to assess what kind of message would be most appropriate. Another common type of project is polling support for different policies of interest and testing the robustness of these results with different approaches. These two kinds of projects are the most common but generally take up proportionately less time.

There are definitely a lot of other things that we can do and have done. For example, the 'survey' team has also used focus groups before and would be interested in doing so again (we think they would be useful for a lot of EA purposes), and much of David Reinstein's work is better described as behavioural experiments (usually field experiments) rather than surveys.

Another aspect of our work that has increased a lot recently, to a degree that was slightly surprising, is what Peter refers to here as "ad hoc analysis requests" and consulting (e.g., on analysis and survey design), without us actually running a full project ourselves. I'd say we've provided services like this to 8-9 different orgs/researchers (sometimes taking no more than a couple of hours, sometimes taking multiple days) in the last few weeks alone. As Peter mentions in that post, these can be challenging from a fundraising perspective, although I strongly encourage people not to let that stop them from reaching out to us.

The projects we did used to lean more FAW (farmed animal welfare), but over time the composition has changed a bit and, perhaps unsurprisingly, now contains more longtermist projects. Because the things we work on are pretty responsive to requests coming from other orgs, the cause composition can change unexpectedly in a short space of time. Right now the projects we're working on are roughly evenly split between animals, movement building, and meta, but it wouldn't be that surprising if the mix became majority longtermist over the next 6 months.

Peter Wildeford @ 2021-11-19T19:27 (+4)

Thanks! We'll make sure to get this changed going forward.

Madhav Malhotra @ 2021-11-17T20:42 (+15)

In your past experience, what are the biggest barriers to getting your research in front of governmental organisations (e.g., official development aid grantmakers or policy-makers)?

And the biggest barriers to getting them to act on it?

Neil_Dullaghan @ 2021-11-19T15:19 (+18)

I would break this down into a) the methods for getting research in front of government orgs and b) the types of research that gets put in front of them.

In general, I think we (me for sure) haven’t been optimising for this enough to even know the barriers (unknown unknowns). I think historically we’ve been mostly focused on foundations and direct work groups, and less on government and academia. This is changing, so I expect us to learn a lot more going forward.

As for known unknowns in the methods, I still don’t know who to actually send my research to in various government agencies, what contact method they respond best to (email, personal contact, public consultations, cold calling, constituency office hours?), or what format they respond best to (a 1-page PDF with graphs, a video, bullet points, an in-person meeting? - though this public guide Emily Grundy made on UK submissions while at RP has helped me). Anecdotally it seems remarkably easy to get in front of some: I know of one small animal advocacy organization that managed to get a meeting with the Prime Minister of their country, and I myself have had 1-1 meetings with more than two dozen members of the UK and Irish parliaments and United Nations & European Union bureaucrats (non-RP work) with relative ease (e.g., via an email with a prestigious-sounding letterhead).

My assumption is that government orgs are swamped with requests and petitions from NGOs, industry, peers, and constituents. So we need some way to stand out from the crowd, such as representing a core constituency of theirs, being recommended by someone they deem credible (like an already established NGO), being affiliated with a credible institution like a prestigious university, or proving to them that we can provide policy expertise and legislative intelligence better than most others can.

On b), I think I have a better sense of what content would be more likely to get in front of them. Niel Bowerman had some good insights on this in 2014, and the “legislative subsidy” approach Matthew Yglesias favours in the US context seems useful. There was an interesting study from Nakajima (2021) (twitter thread) which looked at what kinds of research evidence policymakers prefer (bigger samples, external validity that extends to the populations in their jurisdictions, no preference between observational and experimental designs), so I think we can explore whether the topics on our research agenda fit within those designs.

Update: wanted to add in this post from Zach Groff:

  1. Happily, evidence does seem to affect policy, but in a diffuse and indirect way. The aforementioned researcher Carol Weiss finds that large majorities (65%-89%) of policymakers report being influenced by research in their work, and roughly half of them strongly (Weiss 1980; Weiss 1977). It's rare that policymakers pick up a study and implement an intervention directly. Instead, officials gradually work evidence into their worldviews as part of a gradual process of what Weiss calls "enlightenment" (Weiss 1995). Evidence also influences policy in more political but potentially still benign ways by justifying existing policies, warning of problems, suggesting new policies or making policymakers appear self-critical (Weiss 1995; Weiss 1979; Weiss 1977).
  2. There are a few methods that seem to successfully promote evidence-based policy in health care, education, and government settings where they have been tested. The top interventions are:

2a) Education—Workshops, courses, mentorship, and review processes change decision makers' behavior with regard to science in a few studies (Coburn et al. 2009; Matias 2017; Forman-Hoffman et al. 2017; Chinman et al. 2017; Hodder et al. 2017).

2b) Organizational structural changes—If an organization has evidence built into its structure, such as having a research division and hotline, encouraging and reviewing employees based on their engagement with research, and providing funding based on explicit evidence, this seems to improve the use of evidence in the organization (Coburn and Turner 2011; Coburn 2003; Coburn et al. 2009; Weiss 1980; Weiss 1995; Wilson et al. 2017; Salbach et al. 2017; Forman-Hoffman et al. 2017; Chinman et al. 2017; Hodder et al. 2017). A few other methods for promoting research-backed policies seem promising based on a bit less evidence:

2c) Increasing awareness of evidence-based policy—Sending employees reminders or newsletters seems to increase research-based medicine, based on two high-quality review papers (Murthy et al. 2012; Grimshaw et al. 2012). Similarly, all-around advocacy campaigns to promote evidence-based practices among practitioners achieve substantial changes in one randomized controlled trial (Schneider et al. 2017).

2d) Access—Merely giving people evidence on effectiveness does not generally affect behavior, but when combined with efforts to motivate use of the evidence, providing access to research does improve evidence-based practice (Chinman et al. 2017; Wilson et al. 2017).

2e) External motivation and professional identities— Two recent RCTs and a number of reviews and qualitative research find that rewarding people for using evidence and building professional standards around using research are helpful (Chinman et al. 2017; Schneider et al. 2017;Hodder et al. 2017; Forman-Hoffman et al. 2017; Weiss et al. 2005; Weiss 1995; Wilson et al. 2017; Weiss 1980; Weiss 1977; Matias 2017; Coburn 2005; Coburn 2003).

  3. Interestingly, a few methods to promote evidence-based practices that policymakers and researchers often promote do not have much support in the literature. The first is building collaboration between policymakers and researchers, and the second is creating more research in line with policymakers' needs. One of the highest-quality write-ups on evidence-based policy, Langer et al. 2016, finds that collaboration only works if it is deliberately structured to build policymakers' and researchers' skills. When it comes to making research more practical for policymakers, it seems that when policymakers and researchers work together to come up with research that is more relevant to policy, it has little impact. This may be because, as noted in point (1), research seems to influence policy in important but indirect ways, so making it more direct may not help much.
  4. There is surprisingly and disappointingly little research on policymakers' cognition and judgment in general. The best research is familiar to the effective altruism community from Philip Tetlock (1985; 1994; 2005; 2010; 2014; 2016) and Barbara Mellers (2015), and it gives little information on how decision-makers respond to scientific evidence, but suggests that they are not very accurate at making predictions in general. Other research indicates that extremists are particularly prone to overconfidence and oversimplification, and conservatives somewhat more prone to these errors than liberals (Ortoleva and Snowberg 2015; Blomberg and Harrington 2000; Kahan 2017; Tetlock 1984; Tetlock 2000). Otherwise, a little research suggests that policymakers in general are susceptible to the same cognitive biases that affect everyone, particularly loss aversion, which may make policymakers irrationally unwilling to end ineffective programs or start proven but novel ones (Levy 2003; McDermott 2004). On the whole, little psychological research studies how policymakers react to new information.

If anyone reading this works at a governmental organization, we’d love to chat!

Richenda @ 2021-11-21T19:22 (+3)

@Neil_Dullaghan we should chat.

Madhav Malhotra @ 2021-11-23T16:45 (+1)

Thank you for the well-researched response :-) Excited to maybe ask again in a year and see any changes in your practical lessons!

davija10. @ 2021-11-16T14:22 (+15)

In your yearly review you mention that Rethink may significantly expand its longtermism research group in the future, including potentially into new focus areas and topics. Do you have any ideas of what these might be (beyond the mentioned AI governance), and how you might choose (e.g., looking for a niche where Rethink can play a major role, following the demand of stakeholders, etc.)?

If in 5 and/or 10 years' time you look back on RP and feel it's been a major success, what would that look like? What kind(s) of impact would you consider important, and by what bar would you measure your attainment/progress towards that?

Peter Wildeford @ 2021-11-19T23:01 (+5)

The first part I answered here.

I think a major success for us would look like having built a large and sustainably productive research organization tackling research in a variety of disciplines and cause areas. I think we will have made a major contribution to unlocking funding in effective altruism by figuring out what to fund with more confidence, as well as increasing our influence across a larger variety of stakeholders, including important stakeholders outside of the effective altruism movement.

Nathan Young @ 2021-11-16T00:15 (+15)

How have you experimented, or how would you like to experiment, with your organisational structure or internal decision-making to improve your outputs?

Peter Wildeford @ 2021-11-19T22:59 (+7)

One recent experiment has been trying to get better at project management, especially at a larger scale. We’ve rolled out Asana for the entire organization and have hired a project manager.

Another recent experiment has been whether we can directly hire for “Senior Research Managers” (SRMs), instead of having to develop all our senior research talent in-house. We’ve hired two external SRMs and it has been going well so far, but it is too early to tell. We may try to hire another external SRM in our current hiring process.

If both of these experiments go well, they will unlock a lot of future scalability for our organization and for other organizations that can follow suit.

Our next experiment will likely involve hiring research and/or executive assistants to see if they can help our existing researchers achieve more productivity in a more sustainable way.

James Ozden @ 2021-11-16T10:04 (+13)

Any advice for researchers who want to conduct research similar to Rethink Priorities'? Or useful resources that you point your researchers towards when they join?

Neil_Dullaghan @ 2021-11-19T11:26 (+11)

It has been said before elsewhere by Peter, but it's worth stating again: read and practice Reasoning Transparency. Michael Aird compiled some great resources recently here.

I'd also refer people to Michael and Saulius' replies to arushigupta's similar subquestion in last year's RP AMA.

MichaelA @ 2021-11-19T18:20 (+8)

One thing I'd add is that I think several people at RP and elsewhere would be very excited if someone could:

  1. Find existing resources that work as good training for improving one's reasoning transparency, and/or
  2. Create such a resource

As far as I'm aware, currently the state of the art is "Suggest people read the post Reasoning Transparency, maybe point them to a couple somewhat related other things (e.g., the compilation I made that Neil links to, or this other compilation I made), hope they absorb it, give them a bunch of feedback when they don't really (since it's hard!), hope they absorb that, repeat." I.e., the state of the art is kinda crappy. (I think Luke's post is excellent, but just reading it is not generally sufficient for going from not doing the skill well to doing the skill well.) 

I don't know exactly what sort of resources would be best, but I imagine we could do better than what we have now. 

MichaelA @ 2021-11-19T18:41 (+7)

Oh, and some other resources I'd often point people towards after they join are:

Linch @ 2021-11-19T13:27 (+10)

For longtermist work, I often point people to Holden Karnofsky's impressions on career choice, particularly the section on building aptitudes for conceptual and empirical research on core longtermist topics.

I've also personally gained a lot from arguing with People Wrong on the Internet, but poor application of this principle may be generally bad for epistemic rigor. In particular, I think it probably helps to have a research blog and be able to do things like spot potential holes in arguments (on EA social media, the EA Forum, research blogs, papers, etc.). That said, I think most EA researchers (including my colleagues) are much less Online than I am, so you definitely don't need to develop an internet argument habit to be a good researcher.

Making lots of falsifiable forecasts about short-term implications of your beliefs may be helpful. Calibration training is probably less helpful, but lower cost.
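
For what this kind of forecasting practice can look like, here's a minimal Python sketch that scores a batch of forecasts with the Brier score; the forecasts and outcomes are invented for illustration:

```python
# A minimal sketch of scoring a batch of forecasts for calibration practice.
# The forecasts and outcomes below are made up for illustration.

forecasts = [0.9, 0.7, 0.3, 0.8, 0.2]  # stated probabilities that each event happens
outcomes = [1, 1, 0, 0, 1]             # 1 = event happened, 0 = it didn't

# Brier score: mean squared error between forecasts and outcomes.
# Lower is better; always answering 0.5 scores 0.25.
brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # -> 0.294
```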

Trying to identify important and tractable (sub)questions is often even more important than the ability to answer them well. In particular, very early on in a research project, try to track "what if I answered this question perfectly? Does it even matter? Will this meaningfully impact anyone's decisions, including my own? Will this research build towards something else that will meaningfully impact decisions later?"

"Politely disagreeable" seems like a pretty important disposition. You benefit epistemically from being nice and open enough to other people's ideas that you a) deliberately seek out contrarian opinions and b) don't reject them outright, but also you need to be disagreeable enough that you in general shouldn't update on beliefs just because other (smart, respected, experienced, etc) people confidently believe it. 

Being very aggressively truth-seeking is a really important disposition. My belief is that most people are by default bad at this, including people who may otherwise make great EA researchers.

I also endorse Neil's comment.

Madhav Malhotra @ 2021-11-17T20:48 (+12)

Let's say your research directly determined the allocation of $X of funding in 2021. 

Let's say you have to grow that amount by 10 times in 2022, but keep the same number of staff, funding, and other resources.  

What would you change first in your current campaigns, internal operations, etc.?

Peter Wildeford @ 2021-11-19T22:59 (+7)

I don’t think it is actually possible to 10x our impact with the same staff, funding, and other resources - hence our desire to hire and fundraise more. If it were possible, we’d certainly try to do that!

The best answer I can think of is Goodharting - we certainly could influence more total dollars if we cared less about the quality of our influence and the quality of those dollars. We could also exaggerate our claims about what “influence” means, taking credit for decisions that likely would’ve been made the same way anyway.

Nathan Young @ 2021-11-16T00:15 (+11)

What are the bottlenecks to using forecasting better in your research?

MichaelA @ 2021-11-19T18:31 (+12)

Lazy semi-tangential reply: I recently gave a presentation that was partly about how I've used forecasting in my nuclear risk research and how I think forecasting could be better used in research. Here are the slides and here's the video. Slides 12-15 / minutes 20-30 are most relevant. 

I also plan to, in ~1 or 2 months, write and publish a post with meta-level takeaways from the sprawling series of projects I ended up doing in collaboration with Metaculus, which will have further thoughts relevant to your question.

(Also keen to see answers from other people at RP.)

Peter Wildeford @ 2021-11-19T23:00 (+7)

We at Rethink Priorities have definitely made an increasingly large effort to include forecasting in our work. In particular, we have just recently been running a large Nuclear Risks Tournament on Metaculus. My guess is that the reason we don’t use even more forecasting is that not all of our researchers are experienced forecasters, and it hasn’t been a sufficient priority to generate useful, decision-relevant forecasting questions for every research piece.

GrueEmerald @ 2021-11-16T20:14 (+9)

Will you have some kind of internship/fellowship opportunities next summer?

Peter Wildeford @ 2021-11-19T15:00 (+7)

We have not yet decided whether we will have internships/fellowships this summer - assuming you are referring to the Northern Hemisphere here. If we launch these internships, I imagine they will open in March 2022. We are continuing to consider launching internships/fellowships for summer in each hemisphere (as we launched an AI Governance and Strategy Fellowship for Jan-March 2022, for summer in the Southern Hemisphere).

Another thing we are considering in addition to, or in replacement of, internships this year is Research/Executive Assistant positions that focus more on supporting and learning the work of a particular researcher on the RP team. These roles would likely be permanent/indefinite in length rather than a few months like our internships have been.

Oscar Delaney @ 2021-11-18T06:02 (+5)

I am also interested in future internship plans. Specifically, how flexible are the dates and time commitments?

As someone based in Australia, I find seasonal descriptors (presumably from the Northern Hemisphere) aren't ideal, though I can convert them - specific months would be preferable :) Also, our university holiday periods are different, so I will need to work around that too.

James Ozden @ 2021-11-16T09:56 (+9)

What are some key research directions/topics that are not currently being looked into enough by the EA movement (either at all or in sufficient depth)?

Holly_Elmore @ 2021-11-19T18:41 (+8)

Longtermism in its nascent form relies on a lot of guesstimates and abstractions that I think could be made more empirical and solid. Personally, I am very interested in asking whether people at a given point in the past had the information they needed to avoid disasters that occurred later. What kinds of catastrophes have humans been able to foresee, and when we were able to but didn't, what obstacles were in the way? History is the only evidence available in a lot of longtermist domains, and I don't see EA exploiting it enough.

MichaelA @ 2021-11-19T18:34 (+8)

As is probably the case with many researchers, I have a bunch of thoughts on this, most of which aren't written up in nice, clear, detailed ways. But I do have a draft post with nuclear risk research project ideas and a doc of rough notes on AI governance survey ideas, so if someone is interested in executing projects like those, please message me and I can probably send you links.

(I'm not saying those are the two areas I think are most impactful to do research on at the current margin; I just happen to have docs on those things. I also have other ideas that are less easily shareable right now.)

People might also find my central directory for open research questions useful, but that's not filtered for my own beliefs about how important-on-the-margin these questions are.

James Ozden @ 2021-11-16T10:01 (+8)

Interesting that you've got climate change in your global health and development work rather than in longtermism. What are the research plans for the climate change work at RP?

Peter Wildeford @ 2021-11-19T23:00 (+9)

A note on why climate change is currently in our global health and development work rather than longtermism: the main reason is that, while we could consider longtermist work on climate change, we do not think marginal longtermist climate change work makes sense for us relative to the importance and tractability of other longtermist work we could do. However, global health and development funders and actors are also interested in climate change in a way that does not funge much against longtermist money or talent, and the burden of climate change falls heavily on lower- and middle-income countries. Therefore we think climate change work makes sense to explore relative to other global health and development opportunities.

Jason Schukraft @ 2021-11-19T16:21 (+9)

Hi James, thanks for your question. The climate change work currently on our research calendar includes:

  1. A look at how climate damages are accounted for in various integrated assessment models
  2. A cost effectiveness analysis of anti-deforestation interventions
  3. A review of the landscape of climate change philanthropy
  4. An analysis of how scalable different carbon offsetting programs are

Tom Hird @ 2021-11-19T11:45 (+6)

I'm interested in your current and future work on longtermism. 

One of your plans for 2022 is to:

Have you decided on the possible additional research directions you are hoping to explore? When you're figuring this out, are you more interested in spotting gaps, or do you feel the field is young enough that investigating areas others are working on/have touched is still likely to be beneficial? Perhaps both!

Peter Wildeford @ 2021-11-19T23:01 (+7)

One thing we know for certain is that we are definitely doing AI Governance and Strategy work. We have not decided on these other avenues yet - I think we will decide them in large part based on who we hire for our roles, and by consulting with those hires once they join and coming to agreements as a team. I definitely think that there is a lot to contribute in every field, but we will weigh neglectedness and our comparative advantage in figuring out what to work on.

MichaelA @ 2021-11-20T10:44 (+4)

I expect we'll also talk a lot to various people outside of RP who have important decisions to make and could potentially be influenced by us and/or who just have strong expertise and judgement in one or more relevant domains (e.g., major EA funders, EA-aligned policy advisors, strong senior researchers) to get their thoughts on what it'd be most useful to do and the pros and cons of various avenues we might pursue. 

(We sort-of passively do this in an ongoing way, and I've been doing a bit more recently regarding nuclear risk and AI governance & strategy, but I think we'd probably ramp it up when choosing directions for next year. I'm saying "I think" because the longtermism department haven't yet done our major end-of-year reflection and next-year planning.)

Jack Cunningham @ 2021-11-18T15:50 (+4)

What should one do now if one wants to be hired by Rethink Priorities in the next couple years? Especially in entry-level or more junior roles.

I realize this is a general question; you can answer in general terms, or specify per role.

MichaelA @ 2021-11-19T18:42 (+2)

James Ozden's question above might be sufficiently similar to yours that the answers there address your question?

ImmaSix @ 2021-11-19T11:48 (+3)

From a talk at EAG in 2019, I remember that your approach could be summarized as empirical research in neglected areas (please correct me if I'm wrong here). Is this still the case? Do you still have a focus on empirical research (over, say, philosophy)?

Peter Wildeford @ 2021-11-19T23:01 (+7)

Yes, it is still our approach, broadly speaking, to focus on empirical research, though certainly not to the exclusion of philosophy research. And we’ve now done a lot of research that combines both, such as our published work on invertebrate sentience and our forthcoming work on the relative moral weight of different animals.

ImmaSix @ 2021-11-19T11:39 (+2)

About funding overhang:

Peter wrote a comment on a recent post:

I'm optimistic we will unlock new sources of needed funding (Rethink Priorities is working a ton on this) so we should expect the current funding overhang to be temporary, thus making it important to still have future donors ready / have large amounts of money saved up ready to deploy.

You also wrote in your plans for 2022:

Help solve the funding overhang in EA and unlock tons of impact by identifying interventions across cause areas that can take lots of money while still meeting a high bar for cost-effectiveness.

In which cause areas do you expect to identify the most funding opportunities? Will the funding gaps be big enough to resolve a significant part of the funding overhang?

Peter Wildeford @ 2021-11-19T14:48 (+12)

We'd expect to find new funding opportunities in each cause area we work in. Our work is aspirational and inherently about exploring the unknown, though, so it's very difficult to know in advance how large the funding gaps we uncover will be. But hopefully our work will contribute to a broader body of work that shifts EA from having a funding overhang to instead having substantial room for more funding in all cause areas. This will be a multi-year journey.

Lovkush @ 2021-11-19T11:05 (+1)

Sorry if the answer to this is readily available elsewhere, but are there recommended times of the year to donate if you are based in the UK, e.g., to make use of matching opportunities? My understanding is that the Giving Tuesday Facebook matching is only for US donors.

Thanks!

Janique @ 2021-11-19T14:35 (+3)

Thanks for considering supporting us!

Basically anyone can donate to the Giving Tuesday fundraiser and participate, but it's only tax-deductible for US donors.

From the EA Giving Tuesday FAQ:
>Donors from a large number of countries are eligible to donate through Facebook and get matched. However, in both 2019 and 2020 most non-U.S. donors faced significantly lower donation limits. We expect the same to be true in 2021. [This year, the donation limit for US donors is USD 20,000.] Additionally, please be aware that donors outside the United States will likely lose out on any tax benefits they’d receive from donating to a nonprofit registered in their own country.

International donors can give to RP through the EA Funds. As a UK donor, your gift is eligible for Gift Aid and would typically be tax-deductible. We explain all of this on our donation page: https://rethinkpriorities.org/donate

Regarding other matching opportunities: Check out https://www.every.org/rethink-priorities. They still seem to have some funds available from their FallGivingChallenge for a 100% match!

We don't regularly run matching campaigns ourselves, but it's possible we'll set one up in the course of the next year.
The best way to stay informed about upcoming opportunities is our newsletter.
Your gift is welcome at any time of the year!