Rethink Priorities’ 2022 Impact, 2023 Strategy, and Funding Gaps

By kierangreig🔸 @ 2022-11-25T05:37 (+108)

Key Takeaways and Introduction

Rethink Priorities’ mission is: 

We address global priorities by researching solutions and strategies, mobilizing resources, and empowering our team and others.

Our vision is: 

All humans and nonhumans can flourish to their full potential, and we achieve existential security.

Over the course of 2022, Rethink Priorities (RP) will have spent ~$7.5M USD[1] and worked on ~60 different research pieces, with ~33% completed under a consultancy model.[2] All of our published research can be found here.[3] During 2022 we also hired 32 new team members for a total of 58 permanent staff,[4] corresponding to 55 full-time equivalents.[5] The distribution of our time across research areas this year[6] was 36% on research relevant to animal welfare, 36% on longtermism, 17% on global health and development, and 10% on surveys and EA movement research.[7]

The remainder of this post:

2022 Impact

Some Background Context and Our Theory of Change

For most of RP’s history,[11] we’ve aimed to achieve impact mainly by helping grantmakers and on-the-ground organizations improve their decision-making and, in turn, significantly increase the impact of their work. After doing so for a few years, we have come to further recognize two things:

  1. Although research-related bottlenecks are often significant, in a number of cases impact is most limited by a different factor. For some of the most promising opportunities, there simply don’t exist adequate options to absorb aligned resources.
  2. RP has the organizational capacity, knowledge, and ability to secure funding to initiate promising projects in these areas. 

So, to continue driving progress in global priorities, RP has become increasingly interested in identifying new promising projects that could absorb a significant amount of resources, and in incubating or otherwise supporting these initiatives to launch and develop.[12] 

In addition to supporting these special projects, we have also expanded both our personnel—particularly in recent times—and our research agenda. Within this past year, we’ve expanded our work addressing global health and development, AI governance and strategy, and other longtermist issues. We’ve also further differentiated our approach according to the context of each global priority in order to act on the specific outstanding impact opportunities in a given area. As we now work across a number of quite different areas, that differentiation in practice means we perform a wide mix of both analysis and action. Given this approach, and our relatively recent scaling in some areas, RP did much more consultancy-like research this year. Much of our commissioned work in 2022—particularly in Global Health and Development and AI Governance and Strategy—was not published. Furthermore, we think much of our longtermist work this year could be impactful via informing policy, and perhaps also through informing others’ research and career decisions.

As a result of all the above, across the global priority areas that we work on, RP is a combination of all the following:

To put it simply, our general theory of change is as follows:

2022 Impact Summary

In 2022, RP has achieved many milestones and accomplishments across the global priority areas in which we work. We summarize them across departments in this section (and provide a fuller list in the appendix). Afterwards, we will discuss uncertainties regarding the impact of our work, and note how we are working to better evaluate this and incorporate our findings into RP’s strategy going forward.

Animal Welfare 

Notable accomplishments from the Animal Welfare department include:

Global Health and Development 

In terms of research, the Global Health and Development department completed 15 projects on topics such as mental health, climate change solutions, medicine in the developing world, and more (please see the appendix for a fuller list). Twelve of these projects were commissioned by, and submitted to, Open Phil, and the two remaining reports were produced in response to requests from other major funders. We are aiming to publish at least five of the reports prepared for Open Phil on the EA Forum by the end of the year. The Global Health and Development team also expanded significantly in 2022, adding six staff (three research assistants, two managers, and one researcher).

Longtermism

The Longtermism department gradually grew from a headcount of three at the start of 2022 to a total of 19 (including research assistants, as well as temporary fellows, and temporary contractors) at the time of writing. This growth was intended not only to expand our capacity to create and share impactful research products/insights but also to improve the expected future impact of the people hired (e.g. helping fellows gain knowledge, skills, and connections that increase their ability to secure and excel in future roles). We intend to further evaluate next year how well we’ve served as a talent pipeline.

Longtermism staff are also working on many outputs expected to become public in the coming months (e.g. a survey on intermediate goals for AI governance which we co-developed with Luke Muehlhauser of Open Phil, have distributed, and are producing a writeup about). Finally, the team has produced or is working on some outputs expected to remain nonpublic, for reasons such as information hazards or the projects having been quickly undertaken for a specific purpose (such that adapting them for publication isn’t worthwhile).

Beyond contributing to talent pipelines and producing research, accomplishments include:

Surveys and EA Movement Research 

In terms of public reports, this department published a post on the percentage of United States residents who had heard about EA, and whether their impressions were favorable. As in past years, RP once again led and launched the EA Survey. Most of the Survey Team’s work in 2022, however, involved using its skills to respond to requests for help from other EA organizations. Unfortunately, much of this work tends to be confidential. In the last 12 months we have completed more than 15 substantial paid commissions from core EA and longtermist organizations, including:

In addition, we’ve supported various orgs, EA decision-makers and the broader community with a large number of pro bono requests, including:

Some Initial Reflections on Our 2022 Impact

In this section we[15] briefly:

Over 2022, we worked on approximately 60 different research pieces. Of those, ~33% were completed under a consultancy model,[16] with Open Phil as our main client,[17] and a number of other projects also sought impact importantly via influencing Open Phil. That means that, across the organization this year, influencing Open Phil was the most frequently pursued impact channel. We think the current percentage of consultancy-type projects at RP is acceptable.[18] But as we further develop within certain areas (e.g. global health and development), pending sufficient interest, we will likely want to continue that consultancy work while also venturing further into other impact channels, such as progressing an independent research agenda. We would also like to identify more key stakeholders we can work closely with, and further build our relationships with them.

In some areas we are more established than in others. In particular, our Animal Welfare department has a greater volume and variety of outputs. Some of our most important reflections related to its impact include:

The Animal Welfare department’s greater volume and variety of outputs seems largely due to the longer time since we initially scaled in that area,[21] compared to, say, the Longtermism and Global Health and Development departments, which we only really began scaling this past year. In these newer areas, we tend to be more dependent on impact through certain channels than others,[22] and are generally keen to further develop some new impact pathways.[23] Still, we feel we’ve already had some promising traction in these newer areas.[24] And despite being newer to them, we think that in the years to come we could play quite an important role in their evolution.

We are relatively satisfied to have worked or consulted with more than 20 organizations on high-impact projects, and are excited to continue to scale that collaborative work.[25] We encourage anyone interested in having RP consult on, work on, or host a high-impact project to contact us.

We are also generally satisfied with our other newer efforts to further help build out the EA ecosystem.[26] Notably, and for the first time this year, we hosted or substantially helped host four different external-facing events or retreats, including what we view as key forums for effective animal advocacy and for AI governance and strategy. We feel that these events went well, and will likely pursue further iterations (possibly in different areas, too).[27] In addition to the seven organizations or projects we provided fiscal sponsorship or incubation services to, we have received expressions of interest from tens of other projects, and we expect to support more new projects over the next year.

We continue to be driven internally by the belief that “good research is not enough,” and we try to update our strategy to ensure our analyses actually lead to action. As we are still mainly a research organization, though, we’re usually a step or two removed from direct work, and thus it can be challenging to determine what impact we’re having on the world. We’re very interested in ascertaining how those in a position to implement are acting on our work, if at all, and we’re committed to tracking our impact in multiple ways. We are particularly interested in conducting more ex ante and ex post analyses to estimate our impact and the return on investment of our work, and then taking the time to communicate these externally.[28] This analysis could include surveys and/or interviews of stakeholders[29] as well as case studies. We have now done some more internal work on these assessments, but we would like to put more time and effort into them before communicating views more publicly.[30]

Not captured in any of the above, but nevertheless quite important in laying the groundwork for future impact, is that we have made significant progress on a number of internal operational items this year. These include improving processes for onboarding new hires, submitting new funding proposals, keeping others up to date with our work, and publishing pieces. We also created further documentation related to our governance (and will be looking to expand our board over the next year) and established a committee focused on justice, equity, inclusion, and diversity within the organization. We have also done more work formalizing our strategy internally this year, and we’ll turn to that now.

2023 Strategy   

Our Rationale for Working on Various Cause Areas

Our work now addresses animal welfare, artificial intelligence, climate change, global health and development, investigations of worldviews, longtermism, and EA movement research. Next year, we will continue to work on all of these global priorities (to varying extents) because our leadership, on aggregate, believes:[31] 

Our North Stars 

Across all the global priority areas that we are active in, we also have the following overarching “north stars” that outline fundamental dimensions of our currently intended direction:[32]

Some Key Medium-Term Considerations

There are four related key considerations we have identified over the medium term for our organization:

Our Values

The values RP will attempt to epitomize are:

Hopefully that all gives a reasonably good sense of our core strategy for the coming year! However, as mentioned previously, we often try to differentiate our strategic approach across areas in order to best match specific impact opportunities within them. Given that, we’ll briefly expand on area-specific strategies now.

Area-Specific Strategies

Animal Welfare:

Global Health and Development:

Longtermism:

Surveys and EA Movement Research:

Worldview Investigations:

Funding Gaps Through Year-End 2022 and 2023

RP’s most urgent funding need is for further unrestricted donations,[36] which help ensure we have the greatest ability to direct funds to where they can be most effective and that we can react quickly to new opportunities that arise. We have sometimes had the greatest impact when we had this flexibility to readily explore new options that weren’t easy to find funders for at the time,[37] and feel most comfortable in pursuing those opportunities when we have significant funding that is not restricted to specific projects. Flexible funds also allow us to incubate new ideas to the point of proof of concept prior to attempting to justify them to funders, as well as explore ideas that don’t end up working out. We will allocate resources across (and within) global priorities based on stakeholder demand and preferences, RP leadership’s beliefs, our internal growth capacity, as well as the landscape of the respective areas.[38]

However, given that we’re often asked about our current funding needs in each of the areas in which we work, we have included estimates in the tables below. Please note these are rough, approximate point estimates which use a number of simplifying assumptions to estimate current funding gaps under three different growth scenarios: i) No-Growth, ii) Moderate-Growth, and iii) High-Growth.[39] We report funding gaps through both year-end 2022 and year-end 2023. To be clear, the revenue goals across the organization for these growth scenarios encompass maintaining 18 months of runway within each area at the end of those respective years. These models also don’t include funding towards, or hires for, any special project or incubated project. The main revenue model for our special projects is for those initiatives to fundraise independently and for RP’s Special Projects team’s costs to be covered as line items within the budgets of the special projects themselves.[40]
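To make the 18-months-of-runway assumption concrete, here is a minimal sketch of how a gap of this kind could be computed. All numbers and names (`funding_gap` and its parameters) are our own hypothetical illustrations, not RP's actual figures or model:

```python
# Hypothetical illustration of an 18-months-of-runway funding gap;
# none of these numbers are RP's actual figures.
def funding_gap(monthly_budget, current_assets, months_remaining, runway_months=18):
    """Funds needed to cover spending through year-end plus a
    runway reserve, net of assets already on hand."""
    spend_through_year_end = monthly_budget * months_remaining
    runway_reserve = monthly_budget * runway_months
    return max(spend_through_year_end + runway_reserve - current_assets, 0.0)

# e.g. a $100k/month area with $800k on hand and 2 months left in the year:
print(funding_gap(100_000, 800_000, 2))  # → 1200000
```

Under this toy model, a larger asset base shrinks the gap to zero rather than going negative, which matches treating gaps as amounts still to be raised.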

Further information regarding the growth scenario conditions (with the exception of Worldview Investigations[41]) and our confidence in meeting them follows:

Given the assumptions used to reach our funding gap estimates, we want to be clear that these estimates are imperfect. We still list them because we think they represent useful, albeit rough, approximations of the amounts of funding we are very highly confident, highly confident, and moderately confident we could absorb over the next year, and that they correspond to reasonable bounds on our core revenue goals through the end of this year and next.

(Recall that this is a rough estimate and is based on aiming to have 18 months of reserves at the end of 2022.)

| Funding Gaps | Animal Welfare | Longtermism | Global Health and Development | EA Movement Research / Surveys | Worldview Investigations | Total |
|---|---|---|---|---|---|---|
| No-Growth Scenario | $1.4M | $0.7M[44] | $1.8M | $1.2M | $0.21M | $5.3M |
| Moderate-Growth Scenario | $2.3M | $1.8M | $2.4M | $1.6M | $0.34M | $8.3M |
| High-Growth Scenario | $3.7M | $3.4M | $3.2M | $2.1M | $0.55M | $13.0M |

 

(Recall that this is a rough estimate and is based on aiming to have 18 months of reserves at the end of 2023.)

| Funding Gaps | Animal Welfare | Longtermism | Global Health and Development | EA Movement Research / Surveys | Worldview Investigations | Total |
|---|---|---|---|---|---|---|
| No-Growth Scenario | $3.7M | $3.4M | $3.2M | $2.1M | $0.34M | $12.7M |
| Moderate-Growth Scenario | $5.8M | $5.8M | $3.9M | $3.4M | $0.95M | $20.0M |
| High-Growth Scenario | $8.2M | $8.6M | $5.2M | $4.1M | $2.0M | $28.5M |

In actuality, our growth will probably look like some combination of these scenarios across the different areas, but much depends on how fundraising goes. We’d be happy to discuss with funders, upon request, how each of these budget levels would unfold, either across the organization or within a specific area.[45]

Factors Affecting Our Growth

First, we want to emphasize that as we grow, we plan to further address factors that could become growth constraints.[46] Still, it is worth expanding on why we are confident we can continue to grow effectively. To do so, in the following table we consider the eight factors that, in our view, could constrain our growth. Note that here we use “a constraint on growth” to mean something that would cause us to pause our growth plans until we had addressed that factor. For each factor, we report our subjective credences for some weighted-average chance across the entire organization[47] that it is a constraint in the various growth scenarios considered, and outline our reasoning. Afterwards, we offer a brief summary across the factors, and for those with the greatest likelihood, comment on the severity of their consequences if realized.
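As a rough illustration of what a "weighted-average chance across the entire organization" could mean, a headcount-weighted mean of per-department credences is sketched below. All headcounts and probabilities here are made up for illustration, not RP's actual numbers or necessarily its method:

```python
# Illustrative only: hypothetical headcounts and per-department credences.
# Tuples are (department name, headcount, P(factor is a constraint)).
departments = [
    ("Animal Welfare", 15, 0.02),
    ("Longtermism", 19, 0.05),
    ("Global Health and Development", 10, 0.01),
    ("Surveys and EA Movement Research", 8, 0.03),
]

total_staff = sum(headcount for _, headcount, _ in departments)
weighted_chance = sum(headcount * p for _, headcount, p in departments) / total_staff
print(f"{weighted_chance:.1%}")  # → 3.1%
```

Weighting by headcount means a constraint concentrated in a large department moves the organization-wide figure more than the same credence in a small one.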

Below, for each factor, we give the chance that it is a constraint in each of the considered growth scenarios,[48] followed by our reasoning.
1) There’s a sufficient amount of important work.

- Very unlikely to be a constraint in the no-growth scenario: ~1%
- Very unlikely to be a constraint in the moderate-growth scenario: ~2%
- Very unlikely to be a constraint in the high-growth scenario: ~3%[49]

Our current research agendas contain more promising work than we currently have the capacity to pursue. Moreover, we find that answering a question on our research agenda often produces more questions and further avenues of work. We expect this situation to continue for at least the period considered here.
2) We do sufficient prioritization to ensure that we're tackling important work.  

- Pretty unlikely to be a constraint in the no-growth scenario: ~10%
- Pretty unlikely to be a constraint in the moderate-growth scenario: ~12%
- Pretty unlikely to be a constraint in the high-growth scenario: ~15%

This year we have done more to clarify and formalize our strategies across the organization and its departments, and we have also brought on executive staff to focus significantly on this. Prioritization is difficult, and it will remain so, but we believe we have a good track record here so far and don’t see clear reasons for that assessment to change in the future.
3) We find sufficient numbers of highly skilled individuals for us to hire.

- Very unlikely to be a constraint in the no-growth scenario: ~1%
- Very unlikely to be a constraint in the moderate-growth scenario: ~7%
- Pretty unlikely to be a constraint in the high-growth scenario: ~25%

We previously predicted there might be a harder limit here, but after assessing the candidate pool thoroughly and going through a big hiring round this year, we are very confident that there’s further talent out there. By offering competitive remuneration and benefits, we think we will be able to continue to attract this talent.

We are less sure that there are currently adequate numbers of high performers within the candidate pool to satisfy the high-growth scenario. This depends somewhat on the department area.

4) We have sufficient people/project management capacity to get and keep our staff working effectively on important projects within important areas.

- Pretty unlikely to be a constraint in the no-growth scenario: ~10%
- Pretty unlikely to be a constraint in the moderate-growth scenario: ~20%
- Unlikely to be a constraint in the high-growth scenario: ~35%[50]

Project management has been sufficient for our purposes so far, and we project that at least continuing. We have also planned improvements on project management processes (e.g. improving categorizing and updating on projects through implementing certain organization-wide Asana practices).

We currently have a sufficient number of people managers on staff. For moderate growth, we wouldn’t expect a really large increase in management staff, though we would still expect some increase. For high growth, we would need to seek further managers through external hires (of which we’ve already had some examples that seem successful so far, though it’s still somewhat early to say) and/or internal promotions (perhaps identifying and upskilling people who’d be good at this by having potential managers manage fellows). We’d also actively look for ways to offer further support to new managers in order to maintain operational efficiency.

5) We have sufficient operations capacity. 

- Very unlikely to be a constraint in the no-growth scenario: ~2%
- Very unlikely to be a constraint in the moderate-growth scenario: ~4%
- Very unlikely to be a constraint in the high-growth scenario: ~8%

We are focused on having strong operations, and will continue to proactively prioritize hiring operations staff. Even within the high-growth scenario, we are quite confident we could frontload a number of operations hires to manage the organization’s overall growth.  
6) We have sufficient funds to pay for all of the above. 0% chance: funding sufficiency is assumed in all of these growth scenarios.
7) We have sufficient throughput[51] for new staff.

- Very unlikely to be a constraint in the no-growth scenario: ~1%
- Very unlikely to be a constraint in the moderate-growth scenario: ~4%
- Pretty unlikely to be a constraint in the high-growth scenario: ~12%

In the no- or moderate-growth scenarios we very likely wouldn’t be hiring enough personnel for this to emerge. Under the high-growth scenario, hiring ~30-40 people in each of the next two years would be challenging, but we did roughly that this year. Appropriately staggering hires and frontloading operations hires should largely mitigate this factor even in the high-growth scenario. Throughput is the main reason the high-growth scenario cannot be even higher (conditional on receiving funds).
8) We have sufficient organizational culture and morale so that existing staff feel comfortable with the growth.

- Very unlikely to be a constraint in the no-growth scenario: ~6%
- Pretty unlikely to be a constraint in the moderate-growth scenario: ~11%
- Pretty unlikely to be a constraint in the high-growth scenario: ~20%

We believe we have a good grasp of organizational culture and morale, that we work on it as needed, and, crucially, that no critical issues have been reported. We also have communication pathways that let us detect issues at relatively early stages, and we are agile enough to then address them quickly, yet positively, while they are still small.[52] Scaling so far has produced only minor issues, and we’ve tripled in size over the past two years. Again, placing an emphasis on hiring a large number of operations staff should also help mitigate the risk here.

To briefly summarize, of the eight factors we have identified that could affect our ability to accommodate different rates of growth:

Some Reasons to Consider Funding Rethink Priorities

How to Give

We believe we are entering 2023 prepared to do more important research than ever before, and with the ability to continue scaling. We are excited about where we could go with your support.

If you’d like to help fund our work, you can donate directly to us here. We accept major credit cards, PayPal, Venmo, bank transfers, cryptocurrency donations, and stock transfers. If you have questions about donation opportunities, please email or book a meeting with our Director of Development, Janique Behman.

Appendix

For interested readers, here’s a fuller list of milestones and accomplishments across the global priority areas that we were active in this year:[58]

Animal Welfare 

Global Health and Development 

Longtermism 

Note that both of our Longtermism teams (i.e. the General Longtermism team and the AI Governance and Strategy team) essentially came into existence this year. As such, much of their work is in progress and/or will remain non-public due to confidentiality or information-hazard considerations, and hence, in many cases, isn’t mentioned here. This section also mentions some work that staff members completed that’s relevant to our work even if it wasn’t necessarily an RP project or done using RP hours.

AI Governance and Strategy (AIGS) team

We set up a team to study AI governance and strategy (starting around Q4 2021) and grew it to 12 people (including Fellows, contractors, and a Research Assistant).

Our team members’ public or easily explainable outputs so far include the following:

Ongoing projects include the following: (Note: this list isn’t comprehensive and some of these will soon result in public outputs.)

In collaboration with the Special Projects team, the AI Governance and Strategy team also supported and advised Epoch, a promising new organization forecasting the development of advanced AI.

General Longtermism team

We set up a team that was initially dedicated primarily to doing the research and other work necessary to facilitate faster and better creation of longtermist megaprojects—projects that we believe have a decent shot of reducing existential risk at scale (spending hundreds of millions of dollars per year). This was done in large part to work with the FTX Future Fund. Now that the fund no longer exists, we are looking for other funders who would be interested in this work. We are considering continuing our work facilitating such projects, as the most promising projects we identified so far do not have exceptionally large funding needs (i.e., they can be done with millions of dollars rather than hundreds of millions). We are also considering other possible research directions. Starting around Q4 2021, we grew the team from one to seven people (including Fellows, Researchers, and a Research Assistant).

Concerning our public output, this year our team has:

Besides continuing several of the initiatives mentioned above, we also worked on:

Surveys and EA Movement Research 

Special Projects

Credits

This post was written by Kieran Greig. With contributions from Michael Aird, Janique Behman, Marcus A. Davis, Laura Duffy, Carolyn Footitt, David Moss, Rachel Norman, Abraham Rowe, Daniela Waldhorn, and Peter Wildeford. If you like our work, please consider subscribing to our newsletter. You can see more of our work here.
 

  1. ^

All dollar amounts in this post are in USD. This amount and those in the next sentence of this footnote include expenditures and revenue for our special projects. The predicted revenue for 2022 is ~$12M with assets of ~$10.3M by year-end (excluding pledges that we have not yet received).

  2. ^

     This post further outlines the consultancy model:

    “At the request of their clients, these consultancies (1) produce decision-relevant analyses, (2) run projects (including building new things), (3) provide ongoing services, and (4) temporarily "loan" their staff to their clients to help with a specific project, provide temporary surge capacity, provide specialized expertise that it doesn't make sense for the client to hire themselves, or fill the ranks of a new administration.”

  3. ^

     Please subscribe to our newsletter if you want to hear about job openings, events, and research.

  4. ^

     Note that we also worked, to differing extents, with close to 30 contractors throughout the year.

  5. ^

     These 55 full-time equivalents (FTE) include 40.5 FTE focused on research, 11.5 FTE on operations and communications, and 3 FTE on Special Projects: focused on fiscal sponsorship and new project incubation.

  6. ^

     Our team will have actually worked close to 80,000 hours this year!

  7. ^

     The financial allocation across departments fairly closely matches the time distributions.

  8. ^

This team will examine crucial questions that may have a huge bearing on how philanthropic resources will be allocated across humans and non-human animals in present and future generations. One such project, already underway, is our work on interspecific comparisons of moral weight.

  9. ^

     Note that we have proportionately split our operations teams costs across all these areas.

  10. ^

     One assumption used in these estimates is that they do not include any likely future revenue. That could make this particular funding gap estimate hard to interpret, as we are currently undergoing the grant renewal process with the funder who has previously totally covered the costs of this department. The results of that conversation could shift this estimate heavily.

  11. ^

     Some further brief context on the organization's history is that Peter Wildeford and Marcus Davis (our Co-CEOs) had considered starting an organization for a number of years before starting Rethink Priorities, but their initial funding request was rejected. From prior work, Peter and Marcus knew that having a clear impact through EA-related research was possible. They launched RP in 2018 with $12,500 of self-funding and as a six-month experiment. Initially, RP was fiscally sponsored by Rethink Charity.

    In 2019, the RP team significantly expanded. At the time, they mainly focused on animal welfare because it was an area in which high-quality research seemed neglected, there was low-hanging fruit, and many organizations on-the-ground were bottlenecked on strategic insight into what interventions and programs work best. That year was defined by our newly expanded team conducting some high-quality work that impressed others. Based on this reputation, we were able to attract significant funding from Open Phil, EA Funds, and several other donors.

In 2020, Rethink Priorities amassed significant funding and started spending ~$750,000 a year, with a staff of 16, and also spun out of Rethink Charity as an independent organization. In 2021, we had 28 staff and a $2.1M budget. Upon the request of key stakeholders in the EA community, in 2021 we started to launch teams to address AI governance and strategy as well as global health and development, and more significantly expanded our operations team.

  12. ^

     To be clear, this year RP started a new Special Projects team (housed within the Operations Department) to support initiatives, which will include incubating projects. Given our strong operations, RP envisions acting as a full-service fiscal sponsor for select promising EA groups. This structure could enable strong teams to focus on their core work rather than the day-to-day operations of their organization.

  13. ^

     Around 35 leading researchers and practitioners attended this retreat. We prepared for the retreat by surveying attendees on their views, organizing seminar discussions to build common context, and doing research work to clarify key strategic ideas. According to a survey at the end of the retreat, on average, the attendees found the retreat many times more valuable than the counterfactual use of their time, were very satisfied with the experience, and were excited for a sequel retreat in 6-12 months. Other outcomes include:

    - Exit survey results summarized attendees’ views on ~30 important strategic topics after LAISR discussions.

    - Attendees have formed discussion groups and reached out to non-participants to discuss new project ideas.

    - We’re starting to share anonymized copies of discussion/talk notes from LAISR with some non-attendees who would benefit from them.

    - We’re doing further research on some questions highlighted as important at the retreat.

  14. ^

     Condor Camp was a 10-day retreat for 13 talented Brazilian university students to learn about EA and longtermism. They were selected for accomplishments not related to EA, such as international STEM olympiad medals. According to our pre/post surveys, participants’ familiarity with and interest in topics like EA, AI safety, and longtermism increased considerably. Three months after the camp, at least seven of them have engaged in impactful activities, such as founding the first EA university group in Brazil, at the University of São Paulo, and reaching the final stages of other high impact programs, like the Open Philanthropy Undergraduate Scholarship. Our team has also been working with other movement builders in Colombia and elsewhere on local efforts and regional strategy in Latin America. This project is interested in further funding to continue with its next stage.

  15. ^

     Note that the reflections in this section are mainly those of leadership. Staff of the organization may have independent views.

  16. ^

     By a consultancy, we mean doing commissioned work in response to demands from EA-aligned organizations.

  17. ^

     In those cases, the deliverable was directly shared with that one particular stakeholder upon completion.

  18. ^

     That said, using the consultancy model this year did contribute to a backlog of reports that we would like to publish publicly, which we are now working through. We are also thinking about how to further decrease the lag between when a project is ready for a stakeholder and when we can publish it.

  19. ^

     We will use this stakeholder input to inform whether we pursue further projects with relatively long impact timelines, and potentially ones involving relatively large teams of contractors, too.

  20. ^

    We are one of the few groups that has worked on this seriously, and we think we have produced foundational pieces of work here. We think this positive track record contributed to our recent graduation from receiving grants from the EA Animal Welfare Fund to cover this work to receiving a larger restricted grant from Open Phil to further scale it.

  21. ^

     We initially scaled in that area back in 2018-19.

  22. ^

     For instance, as previously mentioned, adopting a consultancy model with Open Phil within the Global Health and Development department.

  23. ^

     For instance, informing and incubating potential priority projects. We expand on some further reasons why we want to diversify in Our North Stars and Some Key Medium-Term Considerations.

  24. ^

     For instance, within Global Health and Development, we have at least contributed to the following:

    - Open Phil using RP’s work in their medium-depth climate research.

    - Open Phil recommending that GiveWell add weather forecasting to their study of digital extensions for agriculture, an addition for which Open Phil is willing to pay.

    We are also excited about all the items mentioned in the above summary section with regard to Longtermism. For instance, we think the AI Governance and Strategy team’s survey on immediate goals for AI governance could (when published) also be quite influential for Open Phil’s grantmaking.

  25. ^

     For instance, a rather large majority of the Survey and EA Movement Research Department’s projects are private requests (e.g., surveys, experiments, polling, and focus groups) from core EA organizations, and the rate of requests has increased substantially in recent months. However, we presently have to turn down some large commissions due to a lack of staff capacity and a lack of adequate restricted funds to expand our team (or even to maintain it at its current size). Because the vast majority of projects requested of us are highly time-sensitive (i.e., organizations want them completed on a very fast timeline), we need to have staff already in place to take them on; it is not possible to hire in time to complete them even when the funding on offer is more than sufficient. All that said, we would still encourage organizations to approach us to see whether we have capacity for any particular project.

  26. ^

     We do feel there are improvements we can make, but it is somewhat beyond the scope of this post to go over them here.

  27. ^

     We believe that by hosting events we can make significant progress on coordination within some global priority areas, and chip away at key strategic uncertainties within them. The details are still to be defined, but other events we may want to lead include a biosecurity-focused event or a series of events examining AI strategy conditional on particular transformative AI timelines.

  28. ^

     Although we don’t report on outcomes of our work more formally here, we hope we still convey insightful summary data and offer some useful initial reflections and interpretations.

  29. ^

     Over the past month, we’ve conducted structured interviews with some key decision-makers and leaders at EA organizations that use our work, or that we would like to use it. As in previous years, we sought interviewees’ feedback on the general importance of our work for them and for the community, what they have and have not found helpful in what we’ve done, what we can do in the future that would be useful for them, and ways we can improve. To encourage frankness, interviewees were promised that the details of these conversations would not be made public.

  30. ^

     These assessments so far often look at how much money we spent to complete a project, and then attempt to evaluate the relatively direct amount of funding we think it influenced, drawing upon various reports from the funders in question over the years.

  31. ^

     It could be worth noting that within the staff at RP, views differ significantly as to how much we should prioritize different areas.  

  32. ^

     Note that these apply to differing extents across departments and even differing extents within the teams in a department. There are also some long-term goals that apply to a lesser degree than those mentioned in the text, but are still ones we are quite interested in. Such long-term goals that are still of significant interest to RP include:

    - Helping to grow the EA Community,

    - Enhancing the talent pipeline for addressing global priorities,  

    - Raising standards within relevant fields, and

    - Helping discover new global priorities.

  33. ^

     For example, potentially further looking at often-used moral weights within the sector, and/or the impact of using some subjective well-being measures.

  34. ^

     Our non-AI work was previously heavily focused on working with FTX Future Fund. Now that that team has disbanded, we’re currently discussing this and other ideas with many relevant stakeholders about what to do next, and would strongly consider their views when making such decisions.

  35. ^

     One such project, already underway, is our work on interspecific comparisons of moral weight.

  36. ^

     Or, failing totally unrestricted funds, donations restricted to a cause area (but not to a specific project or subfield within that area) would be especially useful.

  37. ^

     One example of this is some of our earliest work on invertebrate welfare. That was an area we could, at first, only work on because we had enough unrestricted runway to self-fund the work. We think that through this work we became an early mover within that important area and are now perhaps the single group that has most advanced that sub-field.

  38. ^

     There are cases where we would not continue to allocate internal funding resources to an area if it seemed like there wasn’t “market” demand for a service. 

  39. ^

     The simplifying assumptions that we use to arrive at these rough and approximate point estimates were:

    - We split overhead costs (including administrative expenses, communications and fundraising costs) across the different areas proportionately to the non-overhead budget across the different areas.

    - Similarly, we split unrestricted reserves proportionately across the different departments, to add to their already amassed restricted runway.  

    - We aim to reach 18 months of operating reserves for every single area.

    - We are including a 15% increase in existing costs, mostly because cost-of-living raises and inflation are high right now.

    - Within scenarios, for simplicity, we also assume uniform growth both across departments and time.

    - We are not including funding towards, or hires for, any special project or incubated project.

    - And we don’t account for revenue that we think we are likely to receive in the future.

    Note this last point contributes to these estimates being tricky to interpret. For instance, in some areas we are currently undergoing the renewal process for major grants. Pending the outcome there, some area-specific gaps may decrease by a million dollars or more.
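The assumptions above amount to a simple per-area calculation. As a rough sketch (with entirely hypothetical placeholder figures, not RP’s actual budget numbers), it might look like this:

```python
# Sketch of the funding-gap arithmetic described by the simplifying
# assumptions above. All dollar figures are hypothetical placeholders.

def funding_gap(area_budget, total_budget, overhead, unrestricted_reserves,
                restricted_runway, growth=1.0):
    """Estimate one area's funding gap under the stated assumptions."""
    share = area_budget / total_budget
    # Split overhead (and, below, unrestricted reserves) across areas
    # proportionately to their non-overhead budgets.
    full_cost = (area_budget + overhead * share) * 1.15 * growth  # +15% costs
    target = full_cost * 1.5  # 18 months (1.5 years) of operating reserves
    existing = restricted_runway + unrestricted_reserves * share
    return max(target - existing, 0.0)

# Hypothetical example: a $2M area within a $7.5M total budget.
gap = funding_gap(area_budget=2_000_000, total_budget=7_500_000,
                  overhead=1_000_000, unrestricted_reserves=1_500_000,
                  restricted_runway=1_200_000)
```

As the last bullet notes, expected future revenue is deliberately excluded, so a pending grant renewal could shrink any such gap substantially.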

  40. ^

     That said, we are still more than happy to engage in discussion if a funder is interested in restricting to the Special Projects team specifically.

  41. ^

     Worldview Investigations will have a different growth trajectory to other departments. Next year will be its year of establishment, with only ~1 FTE initially. If we were to moderately expand the team from there, it would be ~1 FTE more hired in 2023, but starting 3 months into the year so ~0.75 FTE expenditure over the year. To achieve the high-growth scenario from there, next year we would also do significant work with contractors, equivalent to another ~1 FTE in expenditure. Combining that with existing staff, that's ~2.75 FTE in 2023 under the high-growth scenario. The moderate growth scenario through 2024 would be to reach that same level that year. The high-growth scenario into 2024 would then be for both of those staff managing ~3-4 FTE contractors, for a total of ~8-10 FTE in 2024.
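The 2023 high-growth figure above can be checked with a short sketch (figures as stated in the footnote):

```python
# Worked check of the ~2.75 FTE high-growth figure for Worldview
# Investigations in 2023, using the footnote's stated components.
existing_staff = 1.0        # ~1 FTE at the department's establishment
new_hire = 1.0 * (9 / 12)   # one hire starting 3 months into the year
contractors = 1.0           # contractor work equivalent to ~1 FTE
fte_2023_high_growth = existing_staff + new_hire + contractors
# fte_2023_high_growth == 2.75
```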

  42. ^

     This is the same total FTE including temporary staff and contractors across the year.

  43. ^

     Note this includes another hiring round for our Longtermism department. We expect to make roughly 3 FTE hires as a result of that hiring round. The Special Projects team is also currently conducting a hiring round.

  44. ^

     Although the expenditure this year for longtermism was similar to animal welfare, the amount required for longtermism under the no-growth scenario is less because we have amassed more restricted grants for this area. In general, the amounts in other areas may seem higher or lower relative to our expenditures on them, because it has been harder or easier to fundraise in those areas.

  45. ^

     If we were to receive a much larger amount of unrestricted funding relative to our overall operating expenses, then we would give a more detailed rationale for our likely future spending priorities.

  46. ^

     For example, as previously mentioned, growth is a key consideration for us in the medium-term and scaling sustainably is one of our north stars.

  47. ^

     Note that there is some significant heterogeneity here. That is, the respective factors that could be growth constraints do seem to significantly differ in their likelihood of applying across our departments.  

  48. ^

     Note the credences reported here aren’t independent of one another, so it doesn’t follow that our overall confidence in one or another growth scenario can be estimated by multiplying them through. There are situations where one factor could cause other factors to be constraints. For instance, lacking operations capacity could cause low morale.

  49. ^

     In particular, this has a different likelihood in different domains, but we don’t think it’s much higher than ~10% in any given area.

  50. ^

     Once again, we want to emphasize that there is some significant heterogeneity across our departments. That is, this factor seems to significantly differ in its likelihood across the departments of our organization. The subjective credences here are for a rough weighted average chance across the entire organization.

  51. ^

     For instance, even if we have 10 people we want to hire, and have the management and operations capacity to have them do good work, it will still take time for people to join, get onboarded, become productive, etc.

  52. ^

     This could include creating specific committees within the organization to help address any apparent needs. For instance, this year we had a committee lead the process to finalize our values. We also established a permanent committee focused on justice, equity, inclusion, and diversity.

  53. ^

     We currently have strong operations. We would front-load operations hires in growth scenarios, or even use contractors for some specific needed operations work.

  54. ^

     For example, we could extend hiring rounds, hire another recruitment specialist, or some further operations staff.

  55. ^

     As some further indication, for the 32 positions that we hired for this year, we received in total a few thousand applications, and we’ve received an average of >100 responses to each of three expression of interest forms (for our AI Governance and Strategy, General Longtermism, and Special Projects teams). We see those all as signals that there is strong demand to work with us in various capacities, even if only a relatively small fraction are acted upon.  

  56. ^

     This doesn’t necessarily apply to the same extent across our departments. For instance, the Longtermism department has likewise had many well-qualified applicants, but in 2022 has not been notably constrained by funding.

  57. ^

     However, to be clear, we do accept and track restricted funds by cause area if requested by donors.

  58. ^

     The following list contains most of our work, but it is not quite a fully comprehensive list of all of RP’s research this year (due to confidentiality or informational security concerns), especially for the Longtermism department. Also, due to the timing of when we do our year in review posts, it includes some work that happened since our last year in review post (November 2021) but technically didn’t occur in the 2022 calendar year.

  59. ^

     Throughout this list “we” is often used to denote when one or more members of RP were/are involved.


James Ozden @ 2022-12-08T20:02 (+5)

Thanks Kieran, this is very interesting! I would also be quite keen to hear about what RP has tried that didn't go so well. For example, I know RP has been trying to launch the Insect Welfare Project (or whatever it'll ultimately be called) since early 2021 or so. I know you had to re-hire for this recently as the last executive director left, so I was wondering if you could share some learnings from that? For example:

etc. This list is non-exhaustive so I would be keen to hear other useful lessons from this experience! Sorry if this comes across as critical, as it's not my intention, but I'm genuinely curious about what's been happening re the Insect Welfare Project as it's a much-needed endeavour.

Vasco Grilo @ 2022-12-06T15:06 (+4)

Thanks for the post!

Much of our commissioned work in 2022—particularly in Global Health and Development and AI Governance and Strategy—was not published.

Would it be possible to provide further details, maybe using fictitious examples?

kierangreig @ 2022-12-07T09:31 (+9)

Thanks for your engagement! 

Yes, for instance, as mentioned in the appendix, some non-fictitious examples for Global Health and Development are: 

We produced numerous research reports for Open Phil assessing the potential of global health and development interventions, looking for interventions that could be as or more cost-effective than the ones currently ranked top by GiveWell. This included full reports on the following:

  • The effectiveness of large cash prizes in spurring innovation (the report was also shared with FTX Future Fund, and another large foundation).
  • The badness of a year of life lost vs. a year of severe depression.  
  • Scientific research capacity in sub-Saharan Africa.
  • The landscape of climate change philanthropy.
  • Energy frontier growth (this report explores several of the key considerations for quantifying the potential economic growth benefits of clean energy R&D).
  • Funding gaps and bottlenecks to the deployment of carbon capture, utilization, and storage technologies.
  • A literature review on damage functions of integrated assessment models in climate change.
  • A confidential project that we won’t give further details on.
  • Detailing the World Health Organization’s prequalification process for medicines, vaccines, diagnostics, and vector control, as well as the potential impact of additional funding in this area.
  • Describing the World Health Organization’s Essential Medicines List and the potential impact of additional funding in this area.
  • Whether Open Phil should make a major set of grants to establish better weather forecasting data availability in low- and middle-income countries (LMICs).
  • Further examination of hypertension, including its scale and areas where a philanthropist could plausibly make a difference.

And for AI Governance and Strategy respectively, some examples could include the following: 

Ongoing projects include the following: (Note: this list isn’t comprehensive and some of these will soon result in public outputs.)

  • Developing what’s intended to be a comprehensive database of AI policy proposals that could be implemented by the US government in the near- or medium-term. This database is intended to capture information on these proposals’ expected impacts, their levels of consensus within longtermist circles, and how they could be implemented.
  • Planning another Long-term AI Strategy Retreat for 2023, and potentially some smaller AI strategy events.
  • Thinking about what the leadup to transformative AI will look like, and how to generate economic and policy implications from technical people’s expectations of AI capabilities growth.
  • Mentoring AI strategy projects by promising people outside of our team who are interested in testing and building their fit for AI governance and strategy work.
  • Preparing a report on the character of AI diffusion: how fast and by what mechanisms AI technologies spread, what strategic implications that has (e.g. for AI race dynamics), and what interventions could be pursued to influence diffusion.
  • Surveying experts on intermediate goals for AI governance.
  • Investigating the tractability of bringing about international agreements to promote AI safety and the best means of doing so, focusing particularly on agreements that include both the US and China.
  • Investigating possible mechanisms for monitoring and restricting possession or use of AI-relevant chips.
  • Assessing the potential value of an AI safety bounty program, which would reward people who identify safety issues in a specified AI system.
  • Writing a report on “Defense in Depth against Catastrophic AI Incidents,” which makes a case for mainstream corporate and policy actors to care about safety/security-related AI risks, and lays out a “toolkit” of 15-20 interventions that they can use to improve the design, security, and governance of high-stakes AI systems.
  • Experimenting with using expert networks for EA-aligned research.
  • Trying to create/improve pipelines for causing mainstream think tanks to do valuable longtermism-aligned research projects, e.g. via identifying and scoping fitting research projects.
Vasco Grilo @ 2022-12-07T12:44 (+8)

Thanks, and sorry for not having checked the appendix! 

It looks like it would be quite valuable to publish that research, even if just as posts containing the summary and a link to the relevant report, to save time. This would not be possible for the ones containing information hazards, but I hope not many fall under such conditions.

James Ozden @ 2022-11-25T08:32 (+4)

Formatting note: your footnotes seem to link to an external private Google doc that I can’t view. Might be better to unlink them and leave them as normal footnotes!

kierangreig @ 2022-11-25T09:36 (+5)

Thanks for flagging! I will fix that now :)

EdoArad @ 2022-11-25T16:15 (+2)

(also the inner-doc links inside footnote 46 point to the doc)

kierangreig @ 2022-11-25T16:55 (+1)

Ah, thanks. Fixing that now :) 

Gina_Stuessy @ 2022-12-01T04:08 (+1)

Formatting thing: you may have meant to indent some bullets under "Work on any single area can gain from our working on multiple areas:"

I think this b/c it ends with a ":"

kierangreig @ 2022-12-01T05:45 (+2)

You're right! Just updated :)