Rethink Priorities’ 2022 Impact, 2023 Strategy, and Funding Gaps
By kierangreig🔸 @ 2022-11-25T05:37 (+108)
Key Takeaways and Introduction
Rethink Priorities’ mission is:
We address global priorities by researching solutions and strategies, mobilizing resources, and empowering our team and others.
Our vision is:
All humans and nonhumans can flourish to their full potential, and we achieve existential security.
Over the course of 2022, Rethink Priorities (RP) will have spent ~$7.5M USD[1] and worked on ~60 different research pieces, with ~33% completed under a consultancy model.[2] All of our published research can be found here.[3] During 2022 we also hired 32 new team members for a total of 58 permanent staff,[4] corresponding to 55 full-time equivalents.[5] Our researchers’ time this year was distributed approximately as follows:[6] 36% on research relevant to animal welfare, 36% on longtermism, 17% on global health and development, and 10% on surveys and EA movement research.[7]
- Key changes for RP this year include:
- The organization has significantly expanded to work on global health and development and AI governance and strategy.
- We’ve established a Special Projects team to help support promising EA initiatives, enabling strong teams to further focus on their core work rather than on operations.
- We plan to launch a Worldview Investigations team next year.[8]
- RP functions as:
- A consultancy doing commissioned work in response to demands from EA-aligned organizations.
- A research institute driven by research agendas we set according to our own priorities.
- A think tank aiming to inform public policy to improve the world.
- An accelerator, incubator, and base for new priority projects.
- We encourage external stakeholders to approach us to see whether we have the capacity to take on particular projects of interest. If you would like RP to consult on, work on, or host some high-impact project, please do contact us.
- Interested readers may also like to see Rethink Priorities’ Leadership Statement on the FTX situation.
The remainder of this post:
- Firstly:
- Addresses some background context and our theory of change.
- Summarizes our 2022 impact items across global priority areas (with a fuller list of milestones and accomplishments in the appendix).
- Conveys some of our initial reflections on our 2022 impact.
- Secondly, it sketches our strategy for next year, including:
- Why we’ll continue working on multiple global priority areas.
- Some “north stars” of our work within global priority areas, specifically producing/disseminating crucial insights and otherwise driving progress (e.g. by accelerating priority projects).
- Key considerations of our organization for the medium-term, such as focusing on key stakeholders, addressing growth bottlenecks, and appropriately capitalizing on growth opportunities without being overly reliant on specific funders.
- The values that our work should epitomize, which are centered on striving for impact, seeking truth, empowering our team, aiming for excellence, and nurturing innovation.
- A brief expansion on the varying area-specific strategies.
- Lastly, we use a number of simplifying assumptions to indicate rough and approximate funding gaps through year-end 2022 and 2023 under three different growth scenarios for the organization. These scenarios are i) no-growth, ii) moderate-growth (25% growth in 2023), and iii) high-growth (somewhat more than doubling over 2023-2024).
- The respective total funding gaps through year-end 2022 under the growth scenarios were $5.3M, $8.3M, and $13M (note these amounts are set to give us 18 months of operating reserves as of the beginning of 2023 for our programs).
- In the no-growth scenario, area-specific funding gaps through year-end 2022 consist of:[9]
- $1.4M for Animal Welfare.
- $0.7M for Longtermism.
- $1.8M for Global Health and Development.[10]
- $1.2M for EA Movement Research and Surveys.
- $0.21M for Worldview Investigations.
- We then list and provide some analysis of factors affecting our growth and reasoning regarding the extent of their consequences.
- Finally, this post notes some reasons to consider funding RP, as well as how to give.
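As a quick arithmetic check on the figures above, the area-specific no-growth gaps sum to the $5.3M organization-wide total. A minimal Python sketch (using only the figures reported in this post) makes the addition explicit:

```python
# Area-specific no-growth funding gaps through year-end 2022, in USD millions
# (figures as reported in this post).
no_growth_gaps = {
    "Animal Welfare": 1.4,
    "Longtermism": 0.7,
    "Global Health and Development": 1.8,
    "EA Movement Research and Surveys": 1.2,
    "Worldview Investigations": 0.21,
}

total = sum(no_growth_gaps.values())
print(f"${total:.1f}M")  # $5.3M, matching the no-growth total reported above
```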
2022 Impact
Some Background Context and Our Theory of Change
For most of RP’s history,[11] we’ve aimed to achieve impact mainly by helping grantmakers and on-the-ground organizations to improve their decision-making and, in turn, significantly increase the impact of their work. After doing so for a few years, we have come to further recognize both:
- Although research-related bottlenecks are often significant, in a number of cases a different factor most limits impact. For some of the most promising opportunities, there simply don’t exist adequate options to absorb aligned resources.
- RP has the organizational capacity, knowledge, and ability to secure funding to initiate promising projects in these areas.
So, to continue driving progress in global priorities, RP has become increasingly interested in identifying new promising projects that could absorb a significant amount of resources, and in incubating or otherwise supporting these initiatives to launch and develop.[12]
In addition to supporting these special projects, we have also expanded both our personnel—particularly in recent times—and our research agenda. Namely, within this past year, we’ve expanded our work addressing global health and development, AI governance and strategy, and other longtermist issues. We’ve also further differentiated our approach according to the context of each global priority in order to act on the specific outstanding impact opportunities in a given area. As we now work across a number of quite different areas, that differentiation in practice means we now perform a wide combination of both analysis and actions. Given this approach, and our relatively recent scaling in some areas, RP did much more consultancy-like research this year. Much of our commissioned work in 2022—particularly in Global Health and Development and AI Governance and Strategy—was not published. Furthermore, we think much of our longtermist work this year could be impactful via informing policy, and perhaps also through informing others’ research and career decisions.
As a result of all the above, across the global priority areas that we work on, RP is a combination of all the following:
- A consultancy doing commissioned work in response to demands from EA-aligned organizations.
- A research institute driven by research agendas we set according to our own priorities.
- A think tank aiming to inform public policy to improve the world.
- An accelerator, incubator, and base for new priority projects.
To put it simply, our general theory of change is as follows:
2022 Impact Summary
In 2022, RP has achieved many milestones and accomplishments across the global priority areas in which we work. We summarize them across departments in this section (and provide a fuller list in the appendix). Afterwards, we will discuss uncertainties regarding the impact of our work, and note how we are working to better evaluate this and incorporate our findings into RP’s strategy going forward.
Animal Welfare
Notable accomplishments from the Animal Welfare department include:
- Consulting with Open Philanthropy (Open Phil) in various capacities, including using quantitative estimates to assess the effectiveness of different farmed animal welfare interventions.
- Publishing seven articles on the EA Forum and RP’s website on topics as varied as cultivated meat, rodent birth control, the relative importance of the severity and duration of pain and whether the trajectory of pain matters, reducing aquatic noise, farmed fish and shrimp, and public support for bans on slaughterhouses.
- Publishing two articles on black soldier flies in peer-reviewed scientific journals, and issuing three public comments to administrative agencies in the United States and the United Kingdom regarding insect farming.
- Continuing to lead regranting opportunities for improving European Union (EU) farmed animal protection policies, with negotiations starting in late 2023.
- Hosting:
- An academic conference on interspecies comparisons of welfare (recordings available) together with the ASENT project at the London School of Economics.
- The inaugural Effective Animal Advocacy Coordination Forum with ~30 key movement members to further connect and discuss strategy issues for the movement.
- Continuing our ambitious Moral Weight Project to study different species’ capacities for welfare, which will inform prioritization of philanthropic spending between nonhuman animals, and potentially between humans and nonhuman animals as well. This project involved working with a team of 12 academic contractors who reviewed 95 welfare-relevant traits across 11 animal species and produced initial results on interspecific comparisons of moral weight. We plan to publish a number of reports, including:
- The Welfare Range Table (published Nov 7th)
- Theories of Welfare and Welfare Range Estimates (published on November 14th)
- Using Neuron Counts as Proxies for Animals’ Relative Moral Weights (to be published by November 28th)
- Capacity for Welfare and the Unity of Consciousness (to be published by December 5th)
- Phenomenal Unity (to be published by December 12th)
- Preliminary Welfare Range Estimates (to be published in January 2023)
- Probability of Sentience and Welfare Ranges across Insect Life Stages (to be published in January 2023)
Global Health and Development
In terms of research, the Global Health and Development department completed 15 projects on topics such as mental health, climate change solutions, medicine in the developing world, and more (please see the appendix for a fuller list). Twelve of these projects were commissioned by and submitted to Open Phil, and the two remaining reports were produced in response to requests from other major funders. We are aiming to publish at least five of the reports prepared for Open Phil on the EA Forum by the end of the year. The Global Health and Development team also expanded significantly in 2022, adding six staff (three research assistants, two managers, and one researcher).
Longtermism
The Longtermism department gradually grew from a headcount of three at the start of 2022 to a total of 19 (including research assistants, as well as temporary fellows, and temporary contractors) at the time of writing. This growth was intended not only to expand our capacity to create and share impactful research products/insights but also to improve the expected future impact of the people hired (e.g. helping fellows gain knowledge, skills, and connections that increase their ability to secure and excel in future roles). We intend to further evaluate next year how well we’ve served as a talent pipeline.
Longtermism staff are also working on many outputs expected to become public in the coming months (e.g. a survey on intermediate goals for AI governance which we co-developed with Luke Muehlhauser of Open Phil, have distributed, and are producing a writeup about). Finally, the team has produced or is working on some outputs expected to remain nonpublic, for reasons such as information hazards or the projects having been quickly undertaken for a specific purpose (such that adapting them for publication isn’t worthwhile).
Beyond contributing to talent pipelines and producing research, accomplishments include:
- Running the Long-term AI Strategy Retreat (LAISR).[13]
- Running Condor Camp.[14]
- Collaborating with the Special Projects team to advise on which fiscal sponsees to take on.
- In collaboration with the Special Projects team, the AI Governance and Strategy team also provided support and advice to Epoch, a promising new organization forecasting the development of advanced AI.
- Building relationships and strengthening our network with other longtermism-aligned groups and individuals.
Surveys and EA Movement Research
In terms of public reports, this department published a post on the percentage of United States residents who had heard about EA, and whether their impressions were favorable. As in past years, RP once again led on and launched the EA Survey. Most of the Survey Team’s work in 2022, however, involved using its skills to respond to requests to help other EA organizations. Unfortunately, much of this work tends to be confidential. In the last 12 months we have completed more than 15 substantial paid commissions from core EA and longtermist organizations, including:
- Conducting private data analysis for core organizations, including producing reports on their annual impact data, and analyzing core metrics over time.
- Several message testing projects to understand how best to engage in EA and longtermist outreach to the public.
- Snap ‘message testing’ (within 24-48 hours) to support particular events.
- Conducting surveys of niche target audiences relevant to EA to understand baseline awareness of EA ideas.
- Running experiments for a number of different orgs to assess the effectiveness of different advertisements.
- Working with 1 Day Sooner to understand public and expert attitudes towards human challenge trials.
In addition, we’ve supported various orgs, EA decision-makers and the broader community with a large number of pro bono requests, including:
- Providing custom private analyses of EA Survey data or data from our other surveys.
- Providing statistical consulting for EA projects or EA-related academic papers.
- Consulting on survey design.
Some Initial Reflections on Our 2022 Impact
In this section we[15] briefly:
- Give a further overview of the quantity of our work, and offer some reflections on the extent to which we rely on one main impact channel.
- Note that we are at least somewhat more established in animal welfare than in other areas, as we have a greater volume and variety of outputs there; that we view the longer time since scaling our work in that area as the main reason for this; and express some reasons for bullish sentiment regarding impact in that area.
- Communicate excitement about our initial scaled work in both global health and development and longtermism, while registering interest in diversifying impact channels within them.
- Express satisfaction with and interest in collaborating with other organizations, hosting even more important events, and expanding our fiscal sponsorship and incubation services.
- Indicate particular interest in conducting more ex ante and ex post analyses to estimate impact, and the return on investment of our work.
- And lastly signal that this year we made significant progress on a number of internal-facing operational items that should lay the groundwork for future impact.
Over 2022, we worked on approximately 60 different research pieces. Of those, ~33% were completed under a consultancy model,[16] with Open Phil as our main client,[17] and a number of other projects also sought impact primarily via influencing Open Phil. That means, across the organization this year, influencing Open Phil was the most frequently pursued impact channel. We think that the current percentage of consultancy-type projects at RP is acceptable.[18] But, as we further develop within certain areas (e.g. global health and development), pending sufficient interest, we will likely want to continue that consultancy work while also venturing further into other impact channels, such as through progressing an independent research agenda. We would also like to identify more key stakeholders that we can work closely with, and further build our relationships with them.
In some areas we are more established than in others. Particularly, our Animal Welfare department seems to have greater volume and variety of outputs. Some of our most important reflections related to its impact include:
- We believe we are significantly informing the animal welfare strategy at Open Phil and some other large funders.
- We think the Moral Weight Project could potentially shift huge amounts of resources. We look forward to seeing how some key stakeholders and the community at large react to it in coming years.[19]
- For years, we feel we have conducted some pioneering work on invertebrate sentience and welfare.[20]
The greater volume and variety of outputs from the Animal Welfare department seems importantly caused by the longer time since initially scaling in the area,[21] compared to say, the Longtermism and Global Health and Development departments, which we have only really initially scaled in this past year. In these newer areas, we tend to be more dependent on impact through certain channels than others,[22] and are generally keen to further work on some new impact pathways.[23] Still, we feel we’ve already had some promising traction in these newer areas.[24] And despite being newer to them, we think that in the years to come we could play a quite important role in their evolution.
We are relatively satisfied to have worked with or consulted for more than 20 organizations on high-impact projects, and are excited to continue to scale that collaborative work.[25] We encourage those interested in RP consulting on, working on, or hosting some high-impact project to contact us.
We are also generally satisfied with our other newer efforts to further help build out the EA ecosystem.[26] Importantly, this year for the first time we hosted, or significantly helped host, four different external-facing events or retreats, including (in our view) key forums for effective animal advocacy and for AI governance and strategy. We feel that these events went well, and will likely pursue further iterations (possibly in different areas, too).[27] In addition to the seven organizations or projects that we provided fiscal sponsorship or incubation services to, we have also had tens of expressions of interest from other projects, and we expect to support more new projects over the next year.
We continue to be internally driven by believing that “good research is not enough” and try to update our strategy to ensure our analyses actually lead to actions. As still mainly a research organization though, we’re usually a step or two removed from direct work, and, thus, it can be challenging to determine what impact we’re having on the world. We’re very interested in ascertaining how those in a position to implement are acting on our work, if at all, and we’re committed to tracking our impact in multiple ways. We are particularly interested in conducting more ex ante and ex post analyses to estimate our impact, and the return on investment of our work, and then taking the time to externally communicate these.[28] This analysis could include surveys and/or interviews of stakeholders[29] as well as case studies. We have now done some more internal work on these assessments, but we would like to put more time and effort into these before potentially communicating views more publicly.[30]
Not captured in any of the above, but nevertheless quite important in laying the groundwork for future impact, is that we have made significant progress on a number of internal operational items this year. Some items include improving processes for onboarding new hires, submitting new funding proposals, keeping others up-to-date with our work, and publishing pieces. We also created further documentation related to our governance (and will be looking to expand our board over the next year) and established a committee focused on justice, equity, inclusion, and diversity within the organization. We have also done more work in formalizing our strategy internally this year, and we’ll turn to that now.
2023 Strategy
Our Rationale for Working on Various Cause Areas
Our work now addresses animal welfare, artificial intelligence, climate change, global health and development, investigations of worldviews, longtermism, and EA movement research. Next year, we will continue to work on all of these global priorities (to varying extents) because our leadership, on aggregate, believes:[31]
- There is significant uncertainty about which priority area is most impactful.
- There may be diminishing returns to RP focusing on any one priority area.
- A large amount of resources is not fungible across these different areas.
- Non-fungibility, diminishing returns, and significant uncertainty mean it can actually be easier for us to do great research in multiple cause areas than to figure out which single cause area we ought to specialize in.
- Work on any single area can gain from our working on multiple areas:
- Teams have much greater access to centralized resources, staff, funding, and productive oversight than they would receive if each team existed independently and focused solely on that priority.
- Teams can somewhat more easily access the support of other researchers, who have a background in other disciplines, so they have the option to learn from work in other areas.
- Working across different priorities allows the organization to build capacity, reputation, and relations, and maintain option value for the future.
- Relationships within an area can be useful for work in another area.
- There is a lot of value in advancing the project of effective altruism, and the areas we operate in are areas that the community prioritizes.
- In combination, the above suggests that we can have the most impact (in expectation) by working across all these global priority areas.
Our North Stars
Across all the global priority areas that we are active in, we also have the following overarching “north stars” that outline fundamental dimensions of our currently intended direction:[32]
- Produce and/or disseminate key insights to increase the effectiveness of others’ efforts on global priorities: Billions of dollars and millions of person-hours are currently spent addressing global priorities, and even more will be spent in the future. Producing and/or disseminating key insights can dramatically improve their allocations and efforts. Therefore, we’ll attempt to uncover required key insights by having top researchers work on crucial questions. We’ll also try to garner connections with relevant decision-makers in order to deliver insights and augment their efforts on global priorities.
- Otherwise drive progress on promising yet neglected ways to address global priorities, for instance via accelerating priority projects: We want to be able to impactfully drive progress on global priorities, even when influencing others doesn’t seem like the most promising pathway to do so. Examples of this could include times when instead of influencing others we need to complete some specific fundamental research, help trial some new intervention, or found an organization. Global priorities are usually quite neglected. Within them, there can be numerous promising avenues or projects that are entirely neglected. Given adequate resources, and research and operations capacity, for a number of them, we may be well or even best positioned to be an early mover. We think our analysis can determine when this is the case, and when we should attempt to accelerate some neglected projects in order to help address a global priority.
- Maintain and strengthen our reputation and relations: We want to build a really good foundation for future additional work addressing the two previously mentioned broad longer-term directions. This “north star” is rooted in our belief that the amount of valuable work that could be completed is quite large. It follows that limiting factors to scaling are rather important. In fact, longer-term, our reputation and relations seem like some key determinants of the quantity and type of work we can do.
- Sustainably yet significantly scale to further increase our impact: Again, the amount of valuable work that could be completed is quite large. Provided we perform well, in the future, we should be in a position to even further scale. We want to correctly capitalize on that, and scale soundly (i.e. in such a way that maintains operational efficiency), but still significantly.
Some Key Medium-Term Considerations
There are four related key considerations we have identified over the medium term for our organization:
- Continue to focus on enhancing and maintaining strong relations and reputation with key stakeholders: As our relationships and reputation will be the foundation for a lot of possibilities in the future, we want them to be increasingly strong. To do so, we need to maintain professionalism in what we deliver, set high-quality standards for our work, and have an increasing number of research outputs.
- More actively address bottlenecks on further growth: We want to ensure that we have adequate management capacity, clear strategic direction, solid internal systems, the requisite organizational resources and policies, and the overall planning needed to grow in the right way. We are going to be more proactive in these general areas.
- Appropriately capitalize on further growth opportunities: The ceiling for our impact and scale is still really large, as only a small fraction of potentially high-impact research questions have actually been investigated, and an even smaller subset acted upon. There are various inputs that could cause us to grow, such as:
- A high-net-wealth individual who is new to EA, and wishes to support work RP would like to see done.
- Large increases in support from extant relationships.
- An increase in the number of funders, yielding a large net increase in support.
- Strive not to be overly reliant on just one or a few funders, for impact or funding purposes: We are currently often quite reliant on relatively few donors (or a single one for some projects). If a funder rescinded their support, we would need to rapidly find more funding. Also, if that funder did not change their thinking or decisions in light of our research, that could severely hamper our impact. To mitigate the financial risk to our programs, plus some risk to our impact from focusing heavily on one funder, we would like to further engage other philanthropists, foundations, and organizations, and explore some new funding models (e.g. some types of fiscal sponsorship for priority projects).
Our Values
The values RP will attempt to epitomize are:
- Striving for impact: We are determined to improve the world across humans and nonhumans, both in the present and future as much as we can.
- Seeking the truth: We believe that all ideas are subject to scrutiny and strive to deepen our understanding of the world by investigating complex problems and posing difficult questions (even if it makes us uncomfortable). We practice intellectual honesty in our work.
- Creating an empowering environment: We think our staff are brilliant and we are committed to helping them flourish and succeed. We strive to create a welcoming, trusting, safe, fair, and supportive culture that promotes collaboration, honest conversations, and respectful exchanges of ideas.
- Aiming for excellence: We aim for nothing less than the best quality of research and highest standards of operational excellence. We are rigorous, ambitious, and transparent about our strengths and limitations.
- Nurturing a culture of innovation: We cultivate a learning mindset and encourage creative, independent thinking.
Hopefully that all gives a reasonably good sense of our core strategy for the coming year! However, as mentioned previously, we often try to differentiate our strategic approach across areas in order to best match specific impact opportunities within them. Given that, we’ll briefly expand on area-specific strategies now.
Area-Specific Strategies
Animal Welfare:
- We will continue to directly support grantmakers and organizations in the farmed animals space by providing decision-relevant analyses of current interventions, working to identify new promising interventions, and providing missing theoretical inputs for their work.
- Outside of that, we will continue to focus on neglected but highly promising areas in both the invertebrate and wild animal welfare space.
- We will further reflect upon and then potentially pursue some higher-leverage impact opportunities (e.g. perhaps more directly tackling some movement-level strategic uncertainties, and partnering with others for some further coordination efforts).
Global Health and Development:
- We will continue to significantly consult with Open Phil.
- We will explore opportunities to conduct actionable research for other organizations, and work with other large funders in the space.
- We will do some more scoping of non-consulting work and likely tackle topics we think are high-value that other EA-aligned players in the space haven’t looked at, or may take a different approach to important areas that have already been examined.[33]
Longtermism:
- One main focus will continue to be on fully establishing a strong team of permanent researchers and rotating fellows tackling AI governance projects, with its priorities guided by a mix of Open Phil, other key stakeholders, and our own sense of what’s most important.
- Simultaneously, we will continue to work out possible additional research directions for the non-AI portion of our Longtermism department.[34]
- We will continue to work with the Special Projects department to help incubate promising longtermist projects.
- We will also continue to try to improve longtermism-relevant talent pipelines, including via providing roles for people relatively new to the space and actively helping our staff gain knowledge, skills, and connections.
Surveys and EA Movement Research:
- We will use surveys, message testing, and focus groups to explore how to best talk about longtermism and effective altruism with the general public and interested policymakers. We’d like to do more of our own independent work in this area in addition to the consulting work we currently do directly for existing orgs.
- We will analyze the latest iteration of the EA Survey, which is running now, and continue other significant efforts to understand the current status of the movement.
- We will continue to have capacity—especially after Q1 2023—to do more bespoke survey work on behalf of the EA community and EA-adjacent organizations.
Worldview Investigations:
- We aim to launch a new Worldview Investigations team. It will focus on examining crucial questions that may have a huge bearing on how philanthropic resources will be allocated across humans and non-human animals, present and future generations.[35]
- For now, this work will mainly aim to improve the allocation of EA resources across and within cause areas, perhaps by completing foundational work that a major funder specifically commissions; but secondly, by also helping lay the foundation for future research into Worldview Investigations.
- We will seek further collaborators and/or hires here, and start tackling some research questions in a promising way.
Funding Gaps Through Year-End 2022 and 2023
RP’s most urgent funding need is for further unrestricted donations,[36] which help ensure we have the greatest ability to direct funds to where they can be most effective and that we can react quickly to new opportunities that arise. We have sometimes had the greatest impact when we had this flexibility to readily explore new options that weren’t easy to find funders for at the time,[37] and feel most comfortable in pursuing those opportunities when we have significant funding that is not restricted to specific projects. Flexible funds also allow us to incubate new ideas to the point of proof of concept prior to attempting to justify them to funders, as well as explore ideas that don’t end up working out. We will allocate resources across (and within) global priorities based on stakeholder demand and preferences, RP leadership’s beliefs, our internal growth capacity, as well as the landscape of the respective areas.[38]
However, given that we’re often asked about our current funding needs in each of the areas in which we work, we have included estimates in the table below. Please note these are rough and approximate point estimates that use a number of simplifying assumptions to estimate current funding gaps under three different growth scenarios: i) No-Growth, ii) Moderate-Growth, and iii) High-Growth.[39] We report funding gaps through both year-end 2022 and year-end 2023. To be clear, the revenue goals for these growth scenarios encompass maintaining 18 months of runway within each area at the end of those respective years. These models also don’t include funding toward, or hires for, any special or incubated project. The main revenue model for our special projects is for those initiatives to fundraise independently and for the costs of RP’s Special Projects team to be covered as line items within the budgets of the special projects themselves.[40]
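To illustrate how an "18 months of runway" gap estimate might work, here is a minimal sketch. Both the formula and the numbers are our own illustrative assumptions for exposition, not RP's actual model (which uses further simplifying assumptions we don't have visibility into):

```python
# Hedged sketch of a funding-gap calculation under an "18 months of runway"
# target. The formula and all figures below are illustrative assumptions,
# not RP's actual budgeting model.

def funding_gap(spend_to_year_end, annual_budget, reserves, committed_revenue):
    """Funds needed so that, after covering spending through year-end,
    18 months (1.5 years) of the area's annual budget remain in reserve."""
    target_reserves = 1.5 * annual_budget
    gap = spend_to_year_end + target_reserves - reserves - committed_revenue
    return max(gap, 0.0)  # a well-funded area has no gap, not a negative one

# Illustrative numbers only (in $M): 0.5 + 3.0 - 1.5 - 0.6 = 1.4
print(funding_gap(spend_to_year_end=0.5, annual_budget=2.0,
                  reserves=1.5, committed_revenue=0.6))
```

The `max(..., 0.0)` clamp reflects that an area already holding more than its runway target simply reports no gap rather than a surplus owed back.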
Further information regarding the growth scenario conditions (with the exception of Worldview Investigations[41]) and our confidence in meeting them follows:
- No-Growth Scenario
- In each cause area, we continue to have the same number of staff[42] as currently budgeted,[43] and can hence tackle a similar quantity/size of projects as in 2022.
- This scenario would maintain RP at its current size and avoid any cutbacks.
- We have very high confidence that we could effectively deploy funding at this level in order to maintain our organization and achieve significant impact moving forward.
- Moderate-Growth Scenario
- We grow ~25% over 2023, meaning we hire another 10-20 full-time equivalents over the course of the year, split proportionally across the different areas in which we work. This scenario would enable us to deploy greater research capacity by hiring about a dozen additional researchers to address some of the most important questions we’ve identified.
- We are highly confident that we could effectively deploy funding at this level to build up our organization for sustainable impact moving forward.
- High-Growth Scenario
- We aim for a total of 125% growth over the next two years, which would entail hiring 30-40 staff in each of those years. This ambitious scenario would enable RP to scale at close to the maximum rate we think is possible. Doing so would involve hiring around 10 additional operations staff in 2023 to prepare for a much larger research team, then several dozen more researchers over the remainder of 2024.
- We are less confident about our ability to execute the high-growth plan, but it does feel moderately plausible. We think this figure is a good estimate of the maximum amount of money we could put to productive use, such that any money raised beyond it would very likely be deferred to cover spending in 2024 or later.
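As a quick sanity check on the arithmetic behind these scenarios (our back-of-the-envelope sketch, not an official RP model), starting from the roughly 55 full-time equivalents mentioned earlier in this post:

```python
# Back-of-envelope check of the headcounts implied by each growth scenario.
# Starting figure (~55 FTEs) comes from the post; the rest is our own
# illustrative arithmetic, ignoring attrition and part-time nuances.

current_ftes = 55

# Moderate growth: ~25% over 2023.
moderate_hires = current_ftes * 1.25 - current_ftes
# ≈ 14 FTEs, consistent with the stated 10-20 range

# High growth: 125% total growth over two years, i.e. 2.25x by end of 2024.
high_hires_per_year = (current_ftes * 2.25 - current_ftes) / 2
# ≈ 34 per year, consistent with the stated 30-40 per year

print(round(moderate_hires), round(high_hires_per_year))
```

The check simply confirms the stated hiring ranges line up with the stated percentage-growth targets.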
Given the assumptions used to reach our funding gap estimates, we do want to be clear that these estimates are imperfect. We still list them here because we think they represent useful albeit rough approximations of the amount of funding that we are very highly confident, highly confident, and moderately confident we could absorb over the next year, and that they correspond to reasonable bounds on our core revenue goals until the end of this year and next.
- Funding gaps for the different growth scenarios through year-end 2022
(Recall that this is a rough estimate and is based on aiming to have 18 months of reserves at the end of 2022.)
| Funding Gaps | Animal Welfare | Longtermism | Global Health and Development | EA Movement Research / Surveys | Worldview Investigations | Total |
| --- | --- | --- | --- | --- | --- | --- |
| No-Growth Scenario | $1.4M | $0.7M[44] | $1.8M | $1.2M | $0.21M | $5.3M |
| Moderate-Growth Scenario | $2.3M | $1.8M | $2.4M | $1.6M | $0.34M | $8.3M |
| High-Growth Scenario | $3.7M | $3.4M | $3.2M | $2.1M | $0.55M | $13.0M |
- Funding gaps for the different growth scenarios through year-end 2023
(Recall that this is a rough estimate and is based on aiming to have 18 months of reserves at the end of 2023.)
| Funding Gaps | Animal Welfare | Longtermism | Global Health and Development | EA Movement Research / Surveys | Worldview Investigations | Total |
| --- | --- | --- | --- | --- | --- | --- |
| No-Growth Scenario | $3.7M | $3.4M | $3.2M | $2.1M | $0.34M | $12.7M |
| Moderate-Growth Scenario | $5.8M | $5.8M | $3.9M | $3.4M | $0.95M | $20.0M |
| High-Growth Scenario | $8.2M | $8.6M | $5.2M | $4.1M | $2.0M | $28.5M |
In actuality, our growth will probably look like some combination of these scenarios across the different areas, depending on how fundraising goes. We’d be happy to discuss with funders, upon request, how each of these budget levels would unfold, either across the organization or within a specific area.[45]
Factors Affecting Our Growth
First, we want to emphasize that as we grow, we plan to further address factors that could become growth constraints.[46] Still, it is worth expanding on our confidence that we can continue to grow effectively. To do so, the following table considers the eight factors that, in our view, could constrain our growth. Note that here we use “a constraint on growth” to mean something that would cause us to pause our growth plans until we had addressed it. For each factor, we report our subjective credence (a weighted average across the entire organization[47]) that it will be a constraint in each of the growth scenarios considered, and outline our reasoning. Afterwards, we briefly summarize the factors affecting our growth and, for those with the greatest likelihood, comment on the severity of their consequences if realized.
| Factor | Will this be a constraint in the considered growth scenarios?[48] | Our reasoning |
| --- | --- | --- |
| 1) There’s a sufficient amount of important work. | Very unlikely in the no-growth scenario: ~1%<br>Very unlikely in the moderate-growth scenario: ~2%<br>Very unlikely in the high-growth scenario: ~3%[49] | Our current research agendas contain more promising work than we have capacity to tackle. Moreover, answering a question on our research agenda often produces more questions and further avenues of work. We expect this situation to continue for at least the period considered here. |
| 2) We do sufficient prioritization to ensure that we’re tackling important work. | Pretty unlikely in the no-growth scenario: ~10%<br>Pretty unlikely in the moderate-growth scenario: ~12%<br>Pretty unlikely in the high-growth scenario: ~15% | This year we have done more to clarify and formalize our strategies across the organization and its departments, and we have brought on executive staff who focus significantly on this. We believe we have a good prioritization track record. Prioritization is difficult and will remain so, but in our view we have done well so far, and we don’t see clear reasons to change that assessment for the future. |
| 3) We find sufficient numbers of highly skilled individuals to hire. | Very unlikely in the no-growth scenario: ~1%<br>Very unlikely in the moderate-growth scenario: ~7%<br>Pretty unlikely in the high-growth scenario: ~25% | We previously predicted there might be a harder limit here, but after assessing the candidate pool thoroughly during a big hiring round this year, we are very confident that there’s further talent out there. By offering competitive remuneration and benefits, we think we will be able to continue to attract this talent.<br>We are less sure that the candidate pool currently contains adequate numbers of high performers to satisfy the high-growth scenario; this depends somewhat on the department. |
| 4) We have sufficient people/project management capacity to get and keep our staff working effectively on important projects within important areas. | Pretty unlikely in the no-growth scenario: ~10%<br>Pretty unlikely in the moderate-growth scenario: ~20%<br>Unlikely in the high-growth scenario: ~35%[50] | Project management has been sufficient for our purposes so far, and we project that continuing. We have also planned improvements to our project management processes (e.g. better categorizing and tracking of projects via organization-wide Asana practices). We currently have a sufficient number of people managers on staff. For moderate growth, we would expect some increase in their number, but not a large one. For high growth, we would need further managers through external hires (which we have already done with apparent success, though it is still somewhat early to say) and/or internal promotions (perhaps identifying and upskilling candidates by having potential managers manage fellows). We would also actively look for ways to further support new managers in order to maintain operational efficiency. |
| 5) We have sufficient operations capacity. | Very unlikely in the no-growth scenario: ~2%<br>Very unlikely in the moderate-growth scenario: ~4%<br>Very unlikely in the high-growth scenario: ~8% | We are focused on having strong operations, and will continue to proactively prioritize hiring operations staff. Even within the high-growth scenario, we are quite confident we could frontload a number of operations hires to manage the organization’s overall growth. |
| 6) We have sufficient funds to pay for all of the above. | 0% chance (assumed in all of these growth scenarios) | Sufficient funding is assumed in these growth scenarios. |
| 7) We have sufficient throughput[51] for new staff. | Very unlikely in the no-growth scenario: ~1%<br>Very unlikely in the moderate-growth scenario: ~4%<br>Pretty unlikely in the high-growth scenario: ~12% | In the no- or moderate-growth scenarios we very likely wouldn’t be hiring enough personnel for this to emerge. Under the high-growth scenario, hiring ~30-40 people in each of the next two years would be challenging, but we did so this year. Appropriately staggering hires and frontloading operations hires should largely mitigate this factor even in the high-growth scenario. Throughput is the main reason the high-growth scenario cannot be even higher (conditional on receiving funds). |
| 8) We have sufficient organizational culture and morale so that existing staff feel comfortable with the growth. | Very unlikely in the no-growth scenario: ~6%<br>Pretty unlikely in the moderate-growth scenario: ~11%<br>Pretty unlikely in the high-growth scenario: ~20% | We believe we have a good grasp of our organizational culture and morale, and work on it as needed; crucially, no critical issues have been reported. We also have communication pathways to detect issues at relatively early stages, and we are agile enough to address them quickly yet positively while they are still small.[52] Scaling so far has produced only minor issues, and we have tripled in size over the past two years. Again, an emphasis on hiring a large number of operations staff should also help mitigate the risk here. |
To briefly summarize, of the eight factors we have identified that could affect our ability to accommodate different rates of growth:
- We think a lack of important work (1) or of operations capacity (5) is very unlikely across all of the growth scenarios we are considering here.
- Of these, the one with the greater likelihood (though still very unlikely) is a lack of operations capacity (5). But we think that, if addressed at early stages, operational challenges are relatively easy to stay ahead of.[53]
- The factors regarding the amount of important work (1) and proper prioritization (2) also seem to apply to very similar extents regardless of the growth rate (i.e. our subjective credences on them arising change relatively little across considered growth scenarios), so we don’t consider them strong factors in favor of lesser growth rates.
- Regarding proper prioritization (2): conditional on this occurring, we think the consequences would likely be manageable. It would most likely mean that, for a few weeks to a few months, we conduct work that isn’t very promising according to the worldview it addresses, before realizing the error and discontinuing that work.
- Struggling to find adequate numbers of highly skilled individuals (3) only has a somewhat significant chance within the high-growth scenario. If this comes up, there are ways to address it.[54] Failing to find these individuals would cause us to slow our growth until we do; taken that way, this factor would constrain growth, but it is not a risk of growing.
- In the growth scenarios considered here, we are also assuming lack of funding (6) doesn’t apply.
- Insufficient throughput (7) only seems to be an applicable factor in the high-growth scenario. Even within that scenario, we could rather likely stagger hires over the time frame to avoid it, and, in our view, the consequences would not be major if throughput issues did arise: they would likely just somewhat delay further expansion while we continue bringing new hires up to speed.
- We acknowledge the remaining factors (4 and 8) seem to require some work to further address under both the moderate-growth and high-growth scenarios, and that these seem to be where the greatest downside risk is, but even then:
- We have already relatively concrete plans for increasing people and project management capacity (4).
- And by continuously consulting external parties, doing our own internal monitoring (e.g. staff surveys), welcoming feedback, and prioritizing what arises, we feel relatively confident that we can manage organizational culture (8).
Some Reasons to Consider Funding Rethink Priorities
- Each year we have many well-qualified applicants that we would like to hire but do not, mostly due to lack of funds.[55], [56] Additional funding would make a key difference in RP’s trajectory over the next two years.
- Given the discontinuation of the FTX Future Fund, we need to identify new donors who support our mission. Thankfully, the solid runway that we have built allows us not to feel existentially threatened as an organization in this new situation, but some of our most promising new projects will now have to be delayed until we have found new funders.
- We have a track record of producing actionable research that has informed decisions worth millions of dollars.
- Our work amplifies the impact of several key effective altruist and longtermist organizations.
- RP has been trusted by EA Funds and Open Phil and others to start new projects (e.g. on capacity for welfare of different animal species) and launch entire new teams (such as on AI governance).
- The above-mentioned and other large organizations often don’t fully fund our needs in any particular area because they trust our ability to find other sources of funding. Therefore, we rely on a broad range of other donors to continue our work.
- Support outside of our main funders will provide more resilience against the risk of a major funder changing direction, as well as more independence to pursue our research agenda without fear of a major funder pulling out. Unrestricted funding in particular is immensely valuable for us in building a robust, stable, and effective organization.[57]
- We provide value to the entire EA community through public analyses, the EA Survey, helping people with ad hoc analysis requests, and training new EA researchers. However, such benefits are hard to fundraise for: they don’t help any one particular funder enough to make them want to fund this work, and the primary beneficiaries often have limited ability to support us. A diverse donor base will help us show that there is strong support for these community-wide benefits that we provide, and will also keep us accountable to continue delivering value to the EA community as a whole in addition to our more specific stakeholders.
How to Give
We believe we are entering 2023 prepared to do more important research than ever before, and with the ability to continue scaling. We are excited about where we could go with your support.
If you’d like to help fund our work, you can donate directly to us here. We accept major credit cards, PayPal, Venmo, bank transfers, cryptocurrency donations, and stock transfers. If you have questions about donation opportunities, please email or book a meeting with our Director of Development, Janique Behman.
Appendix
For interested readers, here’s a fuller list of milestones and accomplishments across the global priority areas that we were active in this year:[58]
Animal Welfare
- We[59] consulted with Open Phil in various capacities. This included:
- Researching the use of quantitative estimates to assess the effectiveness of different farmed animal welfare interventions.
- Looking at literature regarding the relative importance of the severity and duration of pain and whether the trajectory of pain matters.
- We completed our initial results on interspecies comparisons of moral weight. This involved working with a team of 12 academic contractors to ambitiously review 95 welfare-relevant traits across 11 animal species.
- As part of this, we’ve also presented on this work to a few dozen key stakeholders and donors, and we are planning to publish a number of reports, including:
- The Welfare Range Table (published Nov 7th)
- Theories of Welfare and Welfare Range Estimates (published on November 14th)
- Using Neuron Counts as Proxies for Animals’ Relative Moral Weights (to be published by November 28th)
- Capacity for Welfare and the Unity of Consciousness (to be published by December 5th)
- Phenomenal Unity (to be published by December 12th)
- Preliminary Welfare Range Estimates (to be published in January 2023)
- Probability of Sentience and Welfare Ranges across Insect Life Stages (to be published in January 2023)
- We continued work on the welfare of invertebrates such as insects and shrimps, drawing attention to these neglected species which are farmed by the trillions.
- We published in academic journals on brain development and welfare concerns in black soldier flies raised for food and feed.
- We're currently researching the welfare of mealworms, the second most farmed species for insect protein production. Our findings will be published in the coming months (early 2023).
- We're also about to launch an empirical project, in collaboration with academic partners, to research the nutritional preferences of black soldier flies.
- We significantly researched the economic prospects of the insect industry and hope to publish on this in coming months.
- We worked to identify policy and legislative opportunities for farmed insects in the USA and the UK. This project included work with a contractor to interview lobbying firms to get a sense of the political tractability of a few policy asks aimed at slowing down the growth of the insect industry.
- We organized a successful symposium on insect welfare at the 2022 Joint Annual Meeting of the Entomological Society of America, the Entomological Society of Canada, and the Entomological Society of British Columbia.
- We also presented on farmed black soldier fly welfare at the Insects as Food and Feed Conference in the UK.
- We drafted and submitted an opinion to a public consultation regarding placing insect food products on the UK market, and drafted two submissions on insect production for a public consultation by USDA and USAID.
- We continued incubating a new organization that will tackle several challenges associated with the use of insects as food and feed (and recently hired an Executive Director for this project).
- We presented to the Environmental Commission of the Chamber of Deputies in Chile, and participated in the debate on animals being legally recognized as sentient individuals. After the debate, the bill was initially approved, and more recently formally passed the commission, with RP cited as one of the reporting parties.
- We continued our research into shrimps and also consulted with an organization about shrimp welfare interventions.
- In addition, we also published on the determinants of adopting international voluntary certification schemes for farmed fish and shrimp in China and Thailand.
- We continued to examine tractable interventions to improve wild animal welfare that could be robustly positive in impact, and could potentially demonstrate techniques on a small scale that could be important at a larger scale in the future.
- We submitted a report and presented the optimization analyses for rabies vaccination using oral baits at a workshop on modeling epidemics for mathematical ecologists.
- We published part 1 of a series of reports on the harms of rodenticides as well as humane alternatives, The Rodent Birth Control Landscape.
- We started two surveys on rodenticide use—one that will estimate the prevalence of awareness of rodenticide harms, and another that will poll for support/opposition to various legislation that would reduce rodenticide use. We collaborated with an external researcher to supplement these surveys with focus groups.
- We consulted with an aide for a Washington D.C. council member who is working on a city-wide rodent control bill.
- We investigated the use of AI-assisted technology in human-wildlife conflict mitigation.
- We investigated the potential suffering that wild spongy moths incur during invasive outbreaks and future research to prevent it (to be published by the end of the year).
- We also looked into various strategic considerations for wild animal welfare, including exploring different interventions that could help wild animals, such as reducing aquatic noise.
- We hosted an inter-organization meeting on wild animal welfare.
- Other events we hosted include:
- An academic conference on interspecies comparisons of welfare (recordings available) together with the ASENT project at the London School of Economics.
- The inaugural Effective Animal Advocacy Coordination Forum to further connect and discuss strategy issues for the movement.
- And next year we plan to host a workshop in Vancouver to bring together animal welfare scientists, pain scientists, and philosophers to discuss how best to compare harms suffered by farmed animals that vary in severity and duration.
- We continued to lead regranting to opportunities for improving European Union (EU) farmed animal protection policies with negotiations starting in late 2023.
- We finished a report on institutional meat reduction for a large anonymous funder and hope to make this public soon.
- We continued our work regarding cultured meat by publishing the cultured meat forecasting report which looked at estimates for cultured meat production, and we launched a related Metaculus tournament and published an essay on the reasoning.
- We also co-published on megaprojects for animals.
- We published a report assessing the effectiveness of documentaries in reducing animal product consumption.
- We studied the effects of cage-free commitments on hen housing across 41 countries and submitted a draft about the results to a journal.
- We explored the role of price, taste, and convenience in selling plant-based meat.
- We're working on a systematic review of interventions to reduce animal product usage.
- We're studying goal-setting meat reduction programs for institutions.
- We've been investigating how international voluntary certification schemes for farmed fish affect production practices in Asian upper-middle-income countries.
- We continued polling support for a ban on slaughterhouses in the United States, which challenged previous survey results on this topic.
Global Health and Development
- We produced numerous research reports for Open Phil assessing the potential of global health and development interventions, looking for interventions that could be as or more cost-effective than the ones currently ranked top by GiveWell. This included full reports on the following:
- The effectiveness of large cash prizes in spurring innovation (the report was also shared with FTX Future Fund, and another large foundation).
- The badness of a year of life lost vs. a year of severe depression.
- Scientific research capacity in sub-Saharan Africa.
- The landscape of climate change philanthropy.
- Energy frontier growth (this report explores several of the key considerations for quantifying the potential economic growth benefits of clean energy R&D).
- Funding gaps and bottlenecks to the deployment of carbon capture, utilization, and storage technologies.
- A literature review on damage functions of integrated assessment models in climate change.
- A confidential project that we won’t give further details on.
- Detailing the World Health Organization’s prequalification process for medicines, vaccines, diagnostics, and vector control, as well as the potential impact of additional funding in this area.
- Describing the World Health Organization’s Essential Medicines List and the potential impact of additional funding in this area.
- Whether Open Phil should make a major set of grants to establish better weather forecasting data availability in low- and middle-income countries (LMICs).
- Further examination of hypertension, including its scale and plausible areas a philanthropist could make a difference.
- We also analyzed the cost-effectiveness of anti-deforestation initiatives for an anonymous donor.
- We worked with GiveWell to provide an overview of what is currently known about exposure to lead paint in LMICs.
- We began working with a key stakeholder on an anonymous project involving the pricing of a commodity.
Longtermism
Note that both of our Longtermism teams (i.e. the General Longtermism team and the AI Governance and Strategy team) essentially came into existence this year. As such, much of their work is in-progress and/or will remain non-public due to confidentiality or information hazard considerations, and hence, in many cases, isn’t mentioned here. Also, this section mentions some work that staff members completed and that’s relevant to our work even if it wasn’t necessarily an RP project or done using RP hours.
AI Governance and Strategy (AIGS) team
We set up a team to study AI governance and strategy (starting around Q4 2021) and grew it to 12 people (including Fellows, contractors, and a Research Assistant).
Our team members’ public or easily explainable outputs so far include the following:
- With help from people including RP’s Special Projects team, AIGS team members organized the Long-term AI Strategy Retreat in September 2022 (see here for more info).
- An AIGS team member and the Center for the Governance of AI collaborated to develop a platform for the AI governance community to find and share documents that are nonpublic (e.g. due to information hazards; see here for more info).
- Team members published:
- Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination,
- Using the “executive summary” style: writing that respects your reader’s time,
- AI & antitrust/competition law - relevant readings, people, & notes, and
- Interested in EA/longtermist research careers? Here are my top recommended resources.
Ongoing projects include the following: (Note: this list isn’t comprehensive and some of these will soon result in public outputs.)
- Developing what’s intended to be a comprehensive database of AI policy proposals that could be implemented by the US government in the near- or medium-term. This database is intended to capture information on these proposals’ expected impacts, their levels of consensus within longtermist circles, and how they could be implemented.
- Planning another Long-term AI Strategy Retreat for 2023, and potentially some smaller AI strategy events.
- Thinking about what the leadup to transformative AI will look like, and how to generate economic and policy implications from technical people’s expectations of AI capabilities growth.
- Mentoring AI strategy projects by promising people outside of our team who are interested in testing and building their fit for AI governance and strategy work.
- Preparing a report on the character of AI diffusion: how fast and by what mechanisms AI technologies spread, what strategic implications that has (e.g. for AI race dynamics), and what interventions could be pursued to influence diffusion.
- Surveying experts on intermediate goals for AI governance.
- Investigating the tractability of bringing about international agreements to promote AI safety and the best means of doing so, focusing particularly on agreements that include both the US and China.
- Investigating possible mechanisms for monitoring and restricting possession or use of AI-relevant chips.
- Assessing the potential value of an AI safety bounty program, which would reward people who identify safety issues in a specified AI system.
- Writing a report on “Defense in Depth against Catastrophic AI Incidents,” which makes a case for mainstream corporate and policy actors to care about safety/security-related AI risks, and lays out a “toolkit” of 15-20 interventions that they can use to improve the design, security, and governance of high-stakes AI systems.
- Experimenting with using expert networks for EA-aligned research.
- Trying to create/improve pipelines for causing mainstream think tanks to do valuable longtermism-aligned research projects, e.g. via identifying and scoping fitting research projects.
In collaboration with the Special Projects team, the AI Governance and Strategy team also supported and advised Epoch, a promising new organization forecasting the development of advanced AI.
General Longtermism team
We set up a team initially dedicated primarily to the research and other work needed to facilitate the faster and better creation of longtermist megaprojects: projects that we believe have a decent shot at reducing existential risk at scale (spending hundreds of millions of dollars per year). This was done in large part to work with the FTX Future Fund. Now that the fund no longer exists, we are looking for other funders who would be interested in this work. We are considering continuing to facilitate such projects, since the most promising projects we have identified so far do not have exceptionally large funding needs (i.e., they can be done with millions of dollars rather than hundreds of millions). We are also considering other possible research directions. Starting around Q4 2021, we grew the team from one to seven people (including Fellows, Researchers, and a Research Assistant).
Concerning our public output, this year our team has:
- Written a post on ways forecasting can improve the long-term future and another one on ideas for an early-warning forecasting center.
- Contributed to the coordination around building civilizational refuges in the event of an extreme global catastrophe.
- Organized Condor Camp, a 10-day retreat for some of the most talented university students in Brazil to learn about EA and longtermism. The project secured funding to expand EA and longtermist talent outreach in Brazil next year.
- Investigated nanotechnology strategy research as an EA cause area and advanced the area by brainstorming ways to create a directory of resources for limited circulation.
- Established EA Pathfinder, a project to advise mid-career professionals on how they can reorient their careers into effective altruism.
- Written up initial thoughts on the U.S. Bipartisan Commission on Biodefense’s Apollo Program for Biodefense, which proposes radically changing biosecurity and pandemic defense through technical innovations.
- Delivered multiple sessions at various EA conferences.
Besides continuing several of the initiatives mentioned above, we also worked on:
- An analysis and ranking of various longtermist megaproject ideas, including cost-effectiveness modeling.
- Evaluating more than 20 fiscal sponsorship applications with the Special Projects team.
- Quick investigations of cost-effective ways to accelerate biosecurity projects, like the mass deployment of air sterilization technologies, super PPE, and better longtermist coordination against biorisk.
- Quick investigations of the viability of setting up different AI governance-related organizations and projects in fields like AI auditing, ethics, and alignment prizes.
Surveys and EA Movement Research
- We once again launched the EA Survey, and continued to analyze data and developments in the effective altruist community through the annual survey (we are currently collecting responses for this year’s edition!).
- We were commissioned to start running a new monthly representative survey of the US (EA Pulse), investigating public attitudes regarding various questions relevant to EA (including awareness of EA and support for different causes).
- This survey would also allow us to rapidly field ad hoc requests from EA orgs (e.g. to poll support for different policies or responses to different messages).
- We have published the first report in a series estimating the number of people who have heard of EA, based on a representative sample of adult Americans.
- We are continuing to work with 1Day Sooner on a project looking at support or opposition to human challenge trials among the public and relevant experts.
- We ran a series of studies as part of an academic project to develop a validated measure of attitudes towards wild animal welfare.
- We ran over 10 surveys for different core EA/longtermist orgs to inform their work (e.g. via testing the most effective messages for outreach).
- We conducted two major data analysis projects for a core EA org.
- We conducted several polls and surveys to inform anonymous policy groups.
- We conducted a large number of bespoke analyses for a variety of EA and EA-adjacent decision-makers and researchers, and provided pro bono consultation.
Special Projects
- We launched our Special Projects program in July, and have begun accepting applications for new projects to fiscally sponsor.
- We began fiscally sponsoring Epoch, and supported their first hiring round.
- We provided operational support to the Long-term AI Strategy Retreat.
- We provided operational support to Condor Camp, focused on EA community building for promising students from Brazil.
- We provided fiscal sponsorship to Unjournal and EA Market Testing.
- We began evaluating operational constraints on potential longtermist megaprojects.
- We began a hiring round to expand the capacity of our team to take on more projects.
Credits
This post was written by Kieran Greig, with contributions from Michael Aird, Janique Behman, Marcus A. Davis, Laura Duffy, Carolyn Footitt, David Moss, Rachel Norman, Abraham Rowe, Daniela Waldhorn, and Peter Wildeford. If you like our work, please consider subscribing to our newsletter. You can see more of our work here.
- ^
All dollar amounts in this post are in USD. This amount and those in the next sentence of this footnote include expenditures and revenue for our special projects. The predicted revenue for 2022 is ~$12M with assets of ~$10.3M by year-end (excluding pledges that we have not yet received).
- ^
This post further outlines the consultancy model:
“At the request of their clients, these consultancies (1) produce decision-relevant analyses, (2) run projects (including building new things), (3) provide ongoing services, and (4) temporarily "loan" their staff to their clients to help with a specific project, provide temporary surge capacity, provide specialized expertise that it doesn't make sense for the client to hire themselves, or fill the ranks of a new administration.”
- ^
Please subscribe to our newsletter if you want to hear about job openings, events, and research.
- ^
Note that we also worked, to differing extents, with close to 30 contractors throughout the year.
- ^
These 55 full-time equivalents (FTE) include 40.5 FTE focused on research, 11.5 FTE on operations and communications, and 3 FTE on Special Projects (fiscal sponsorship and new project incubation).
- ^
Our team will have actually worked close to 80,000 hours this year!
- ^
The financial allocation across departments fairly closely matches the time distributions.
- ^
This team will examine crucial questions that may have a huge bearing on how philanthropic resources will be allocated across humans and non-human animals in present and future generations. One such project, already underway, is our work on interspecific comparisons of moral weight.
- ^
Note that we have proportionately split our operations teams costs across all these areas.
- ^
These estimates do not include any likely future revenue. That could make this particular funding gap estimate hard to interpret, as we are currently undergoing the grant renewal process with the funder who has previously covered all of this department's costs. The outcome of that conversation could shift this estimate heavily.
- ^
Some further brief context on the organization's history: Peter Wildeford and Marcus Davis (our Co-CEOs) had considered starting an organization for a number of years before founding Rethink Priorities, but their initial funding request was rejected. From prior work, Peter and Marcus knew that having a clear impact through EA-related research was possible. They launched RP in 2018 as a six-month experiment, with $12,500 of self-funding. Initially, RP was fiscally sponsored by Rethink Charity.
In 2019, the RP team significantly expanded. At the time, they mainly focused on animal welfare because it was an area in which high-quality research seemed neglected, there was low-hanging fruit, and many organizations on the ground were bottlenecked on strategic insight into which interventions and programs work best. That year, our newly expanded team conducted high-quality work that impressed others, and on the strength of that reputation we were able to attract significant funding from Open Phil, EA Funds, and several other donors.
In 2020, Rethink Priorities amassed significant funding and started spending ~$750,000 a year, with a staff of 16, and also spun out of Rethink Charity as an independent organization. In 2021, we had 28 staff and a $2.1M budget. Upon the request of key stakeholders in the EA community, in 2021 we started to launch teams to address AI governance and strategy as well as global health and development, and more significantly expanded our operations team.
- ^
To be clear, this year RP started a new Special Projects team (housed within the Operations Department) to support initiatives, which will include incubating projects. Given our strong operations, RP envisions acting as a full-service fiscal sponsor for select promising EA groups. This structure could enable strong teams to focus on their core work rather than the day-to-day operations of their organization.
- ^
Around 35 leading researchers and practitioners attended this retreat. We prepared for the retreat by surveying attendees on their views, organizing seminar discussions to build common context, and doing research work to clarify key strategic ideas. According to a survey at the end of the retreat, on average, the attendees found the retreat many times more valuable than the counterfactual use of their time, were very satisfied with the experience, and were excited for a sequel retreat in 6-12 months. Other outcomes include:
- Exit survey results summarized attendees’ views on ~30 important strategic topics after LAISR discussions.
- Attendees have formed discussion groups and reached out to non-participants to discuss new project ideas.
- We’re starting to share anonymized copies of discussion/talk notes from LAISR with some non-attendees who would benefit from them.
- We’re doing further research on some questions highlighted as important at the retreat.
- ^
Condor Camp was a 10-day retreat for 13 talented Brazilian university students to learn about EA and longtermism. They were selected for accomplishments not related to EA, such as international STEM olympiad medals. According to our pre/post surveys, participants' familiarity with and interest in topics like EA, AI safety, and longtermism increased considerably. Three months after the camp, at least seven of them had engaged in impactful activities, such as founding the first EA university group in Brazil, at the University of São Paulo, and reaching the final stages of other high-impact programs, like the Open Philanthropy Undergraduate Scholarship. Our team has also been working with other movement builders in Colombia and elsewhere on local efforts and regional strategy in Latin America. This project is seeking further funding to continue to its next stage.
- ^
Note that the reflections in this section mainly represent those of leadership. Staff of the organization may hold independent views.
- ^
By a consultancy, we mean doing commissioned work in response to demands from EA-aligned organizations.
- ^
In those cases, the deliverable was directly shared with that one particular stakeholder upon completion.
- ^
Though we would also note that using the consultancy model this year contributed to a backlog of reports that we would like to publish publicly, which we are now working through. We are also thinking about how to further decrease the lag between when a project is ready for a stakeholder and when we are able to publish it.
- ^
We will use this stakeholder input to inform whether we pursue further projects with relatively long impact timelines, and potentially ones involving relatively large teams of contractors, too.
- ^
We are one of the only groups who have worked on this seriously, and we think we have produced foundational pieces of work here. We think this positive track record contributed to our recent graduation from receiving grants from the EA Animal Welfare Fund to cover this work to receiving a larger restricted grant from Open Phil to further scale this work.
- ^
We initially scaled in that area back in 2018-19.
- ^
For instance, as previously mentioned, adopting a consultancy model with Open Phil within the Global Health and Development department.
- ^
For instance, informing and incubating potential priority projects. We expand on some further reasons as to why we want to diversify in Our North Stars and Some Key Medium-Term Considerations.
- ^
For instance, within Global Health and Development, we have at least contributed to the following:
- Open Phil using RP’s work in their medium-depth climate research.
- Open Phil recommending that GiveWell add weather forecasting to its study of digital extensions for agriculture, an addition for which Open Phil is willing to pay.
We are also excited about all the items mentioned in the above summary section with regard to Longtermism. For instance, we think the AI Governance and Strategy team’s survey on immediate goals for AI governance could (when published) also be quite influential for Open Phil’s grantmaking.
- ^
For instance, a rather large majority of the Survey and EA Movement Research Department's projects are private requests (e.g., for surveys, experiments, polling, and focus groups) from core EA organizations, with the rate of requests having increased substantially in recent months. However, we presently have to turn down some large commissions due to a lack of staff capacity and a lack of adequate restricted funds to expand our team (or even to maintain it at its current size). We turn down some large commissions because the vast majority of projects requested of us are highly time-sensitive (i.e., organizations want them completed on a very fast timeline), so we need to have staff already in place to take them on; it is not possible to hire in time to complete them, even when the commissioning organization offers more than enough funding. All that said, we would still encourage organizations to approach us to see whether we have capacity for any particular project.
- ^
We do feel there are improvements we can make, but it is somewhat beyond the scope of this post to go over them here.
- ^
We believe that by hosting events we can make significant progress on coordination within some global priority areas, and chip away at key strategic uncertainties within them. Plans are still to be defined, but other events we may want to lead include a biosecurity-focused event or a series of events examining AI strategy conditional on particular transformative AI timelines.
- ^
Although we don't report on the outcomes of our work more formally here, we hope we still convey insightful summary data and offer some useful initial reflections and interpretations.
- ^
Over the past month, we've conducted structured interviews with key decision-makers and leaders at EA organizations that either use our work or that we would like to have use our work. As in previous years, we sought interviewees' feedback on the general importance of our work for them and for the community, what they have and have not found helpful in what we've done, what we can do in the future that would be useful for them, and ways we can improve. To encourage frankness, interviewees were promised that the details of these conversations would not be made public.
- ^
These assessments so far often look at how much money we spent to complete some project, and then attempt to evaluate the relatively direct amount of funding we think it influenced, drawing on various reports from the funders in question over the years.
- ^
It could be worth noting that within the staff at RP, views differ significantly as to how much we should prioritize different areas.
- ^
Note that these apply to differing extents across departments and even differing extents within the teams in a department. There are also some long-term goals that apply to a lesser degree than those mentioned in the text, but are still ones we are quite interested in. Such long-term goals that are still of significant interest to RP include:
- Helping to grow the EA Community,
- Enhancing the talent pipeline for addressing global priorities,
- Raising standards within relevant fields, and
- Helping discover new global priorities.
- ^
For example, potentially further examining often-used moral weights within the sector, and/or the impact of using some subjective well-being measures.
- ^
Our non-AI work was previously heavily focused on working with the FTX Future Fund. Now that that team has disbanded, we are discussing this and other ideas with many relevant stakeholders to decide what to do next, and would strongly consider their views when making such decisions.
- ^
One such project, already underway, is our work on interspecific comparisons of moral weight.
- ^
Or, failing totally unrestricted funds, donations restricted to a cause area (but not to a specific project or subfield within that area) would be especially useful.
- ^
One example of this is some of our earliest work on invertebrate welfare. At first, we could only work in that area because we had enough unrestricted runway to self-fund the work. We think that through this work we became an early mover in that important area, and we are now perhaps the single group that has most advanced the sub-field.
- ^
There are cases where we would not continue to allocate internal funding resources to an area if it seemed like there wasn’t “market” demand for a service.
- ^
The simplifying assumptions that we use to arrive at these rough and approximate point estimates were:
- We split overhead costs (including administrative expenses, communications and fundraising costs) across the different areas proportionately to the non-overhead budget across the different areas.
- Similarly, we split unrestricted reserves proportionately across the different departments, to add to their already amassed restricted runway.
- We aim to reach 18 months of operating reserves for every single area.
- We are including a 15% increase in existing costs, mostly because cost-of-living raises and inflation are currently high.
- Within scenarios, for simplicity, we also assume uniform growth both across departments and time.
- We are not including funding towards, or hires for, any special project or incubated project.
- And we don’t account for revenue that we think we are likely to receive in the future.
Note this last point contributes to these estimates being tricky to interpret. For instance, in some areas we are currently undergoing the renewal process for major grants. Pending the outcome there, some area-specific gaps may decrease by a million dollars or more.
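For concreteness, the assumptions above can be sketched as a small calculation. All figures below are hypothetical, chosen purely to illustrate the arithmetic, and are not RP's actual budget numbers:

```python
# Illustrative sketch of the funding-gap arithmetic described above.
# All inputs are hypothetical; only the method follows the listed assumptions.

def funding_gap(direct_costs, overhead, restricted_runway, unrestricted_reserves,
                months_target=18, cost_increase=0.15):
    """Estimate each department's funding gap under the stated assumptions."""
    total_direct = sum(direct_costs.values())
    gaps = {}
    for dept, cost in direct_costs.items():
        share = cost / total_direct
        # Split overhead proportionately to each area's non-overhead budget,
        # and include a 15% increase in existing costs.
        annual_cost = (cost + overhead * share) * (1 + cost_increase)
        # Aim for 18 months of operating reserves for every area.
        target_reserves = annual_cost * months_target / 12
        # Split unrestricted reserves proportionately, on top of restricted runway.
        available = restricted_runway.get(dept, 0) + unrestricted_reserves * share
        gaps[dept] = max(target_reserves - available, 0)
    return gaps

# Hypothetical example: two departments with equal spending, $1M overhead,
# $2M unrestricted reserves, but different amounts of restricted runway.
gaps = funding_gap(
    direct_costs={"animal_welfare": 2_000_000, "longtermism": 2_000_000},
    overhead=1_000_000,
    restricted_runway={"animal_welfare": 1_000_000, "longtermism": 2_500_000},
    unrestricted_reserves=2_000_000,
)
```

Under these made-up numbers, the department with more restricted runway shows a smaller gap despite identical expenditure, which is the same dynamic noted for the no-growth scenario amounts.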
- ^
That said, we are still more than happy to engage in discussion if a funder is interested in restricting to the Special Projects team specifically.
- ^
Worldview Investigations will have a different growth trajectory to other departments. Next year will be its year of establishment, with only ~1 FTE initially. If we were to moderately expand the team from there, it would be ~1 FTE more hired in 2023, but starting 3 months into the year so ~0.75 FTE expenditure over the year. To achieve the high-growth scenario from there, next year we would also do significant work with contractors, equivalent to another ~1 FTE in expenditure. Combining that with existing staff, that's ~2.75 FTE in 2023 under the high-growth scenario. The moderate growth scenario through 2024 would be to reach that same level that year. The high-growth scenario into 2024 would then be for both of those staff managing ~3-4 FTE contractors, for a total of ~8-10 FTE in 2024.
- ^
This is the same total FTE including temporary staff and contractors across the year.
- ^
Note this includes another hiring round for our Longtermism department. We expect to make roughly 3 FTE hires as a result of that hiring round. The Special Projects team is also currently conducting a hiring round.
- ^
Although the expenditure this year for longtermism was similar to animal welfare, the amount required for longtermism under the no-growth scenario is less because we have amassed more restricted grants for this area. In general, the amounts in other areas may seem higher or lower relative to our expenditures on them, because it has been harder or easier to fundraise in those areas.
- ^
If we were to receive a much larger amount of unrestricted funding relative to our overall operating expenses, then we would give a more detailed rationale for our likely future spending priorities.
- ^
For example, as previously mentioned, growth is a key consideration for us in the medium-term and scaling sustainably is one of our north stars.
- ^
Note that there is some significant heterogeneity here. That is, the respective factors that could be growth constraints do seem to significantly differ in their likelihood of applying across our departments.
- ^
Note the credences reported here aren’t independent of one another, so it doesn’t follow that our overall confidence in one or another growth scenario can be estimated by multiplying them through. There are situations where one factor could cause other factors to be constraints. For instance, lacking operations capacity could cause low morale.
- ^
In particular, the likelihood of this differs across domains, but we don't think it's much higher than ~10% in any given area.
- ^
Once again, we want to emphasize that there is significant heterogeneity across our departments. That is, this factor's likelihood seems to differ significantly between departments. The subjective credences here represent a rough weighted average across the entire organization.
- ^
For instance, even if we have 10 people we want to hire, and have the management and operations capacity to have them do good work, it will still take time for people to join, get onboarded, become productive, etc.
- ^
This could include creating specific committees within the organization to help address any apparent needs. For instance, this year we had a committee lead the process to finalize our values. We also established a permanent committee focused on justice, equity, inclusion, and diversity.
- ^
We currently have strong operations. We would front-load operations hires in growth scenarios, or even use contractors for some specific needed operations work.
- ^
For example, we could extend hiring rounds, hire another recruitment specialist, or some further operations staff.
- ^
As some further indication, for the 32 positions we hired for this year, we received a few thousand applications in total, plus an average of >100 responses to each of three expression-of-interest forms (for our AI Governance and Strategy, General Longtermism, and Special Projects teams). We see these as signals of strong demand to work with us in various capacities, even if only a relatively small fraction is acted upon.
- ^
This doesn’t necessarily apply to the same extent across our departments. For instance, the Longtermism department has likewise had many well-qualified applicants, but in 2022 has not been notably constrained by funding.
- ^
However, to be clear, we do accept and track restricted funds by cause area if requested by donors.
- ^
The following list contains most of our work, but it is not quite a fully comprehensive list of all of RP’s research this year (due to confidentiality or informational security concerns), especially for the Longtermism department. Also, due to the timing of when we do our year in review posts, it includes some work that happened since our last year in review post (November 2021) but technically didn’t occur in the 2022 calendar year.
- ^
Throughout this list “we” is often used to denote when one or more members of RP were/are involved.
James Ozden @ 2022-12-08T20:02 (+5)
Thanks Kieran, this is very interesting! I would also be quite keen to hear about what RP has tried that didn't go so well. For example, I know RP has been trying to launch the Insect Welfare Project (or whatever it'll be called ultimately) since early 2021 or so. I know you had to re-hire for this recently as the last executive director left, so I was wondering if you could share some learnings from that? For example:
- How come it's taken longer than anticipated for the Insect Welfare Project to launch and start running programs?
- What mistakes do you think you made in this process?
- How will this change your approach to incubating new organisations?
etc. This list is non-exhaustive, so I would be keen to hear other useful lessons from this experience! Sorry if this comes across as critical, as that's not my intention; I'm genuinely curious about what's been happening with the Insect Welfare Project, as it's a much needed endeavour.
Vasco Grilo @ 2022-12-06T15:06 (+4)
Thanks for the post!
Much of our commissioned work in 2022—particularly in Global Health and Development and AI Governance and Strategy—was not published.
Would it be possible to provide further details, maybe using fictitious examples?
kierangreig @ 2022-12-07T09:31 (+9)
Thanks for your engagement!
Yes, for instance, as mentioned in the appendix, some non-fictitious examples for Global Health and Development are:
We produced numerous research reports for Open Phil assessing the potential of global health and development interventions, looking for interventions that could be as cost-effective as, or more cost-effective than, the ones currently ranked top by GiveWell. This included full reports on the following:
- The effectiveness of large cash prizes in spurring innovation (the report was also shared with FTX Future Fund, and another large foundation).
- The badness of a year of life lost vs. a year of severe depression.
- Scientific research capacity in sub-Saharan Africa.
- The landscape of climate change philanthropy.
- Energy frontier growth (this report explores several of the key considerations for quantifying the potential economic growth benefits of clean energy R&D).
- Funding gaps and bottlenecks to the deployment of carbon capture, utilization, and storage technologies.
- A literature review on damage functions of integrated assessment models in climate change.
- A confidential project that we won’t give further details on.
- Detailing the World Health Organization's prequalification process for medicines, vaccines, diagnostics, and vector control, as well as the potential impact of additional funding in this area.
- Describing the World Health Organization’s Essential Medicines List and the potential impact of additional funding in this area.
- Whether Open Phil should make a major set of grants to establish better weather forecasting data availability in low- and middle-income countries (LMICs).
- Further examination of hypertension, including its scale and plausible areas a philanthropist could make a difference.
And for AI Governance and Strategy respectively, some examples could include the following:
Ongoing projects include the following: (Note: this list isn’t comprehensive and some of these will soon result in public outputs.)
- Developing what’s intended to be a comprehensive database of AI policy proposals that could be implemented by the US government in the near- or medium-term. This database is intended to capture information on these proposals’ expected impacts, their levels of consensus within longtermist circles, and how they could be implemented.
- Planning another Long-term AI Strategy Retreat for 2023, and potentially some smaller AI strategy events.
- Thinking about what the leadup to transformative AI will look like, and how to generate economic and policy implications from technical people’s expectations of AI capabilities growth.
- Mentoring AI strategy projects by promising people outside of our team who are interested in testing and building their fit for AI governance and strategy work.
- Preparing a report on the character of AI diffusion: how fast and by what mechanisms AI technologies spread, what strategic implications that has (e.g. for AI race dynamics), and what interventions could be pursued to influence diffusion.
- Surveying experts on intermediate goals for AI governance.
- Investigating the tractability of bringing about international agreements to promote AI safety and the best means of doing so, focusing particularly on agreements that include both the US and China.
- Investigating possible mechanisms for monitoring and restricting possession or use of AI-relevant chips.
- Assessing the potential value of an AI safety bounty program, which would reward people who identify safety issues in a specified AI system.
- Writing a report on “Defense in Depth against Catastrophic AI Incidents,” which makes a case for mainstream corporate and policy actors to care about safety/security-related AI risks, and lays out a “toolkit” of 15-20 interventions that they can use to improve the design, security, and governance of high-stakes AI systems.
- Experimenting with using expert networks for EA-aligned research.
- Trying to create/improve pipelines for causing mainstream think tanks to do valuable longtermism-aligned research projects, e.g. via identifying and scoping fitting research projects.
Vasco Grilo @ 2022-12-07T12:44 (+8)
Thanks, and sorry for not having checked the appendix!
It looks like it would be quite valuable to publish that research, even if just as posts containing a summary and a link to the relevant report, to save time. This would not be possible for pieces containing information hazards, but I hope there will not be many of those.
James Ozden @ 2022-11-25T08:32 (+4)
Formatting note: your footnotes seem to link to an external private Google doc that I can’t view. Might be better to unlink them and leave them as normal footnotes!
kierangreig @ 2022-11-25T09:36 (+5)
Thanks for flagging! I will fix that now :)
EdoArad @ 2022-11-25T16:15 (+2)
(also the inner-doc links inside footnote 46 point to the doc)
kierangreig @ 2022-11-25T16:55 (+1)
Ah, thanks. Fixing that now :)
Gina_Stuessy @ 2022-12-01T04:08 (+1)
Formatting thing: you may have meant to indent some bullets under "Work on any single area can gain from our working on multiple areas:"
I think this b/c it ends with a ":"
kierangreig @ 2022-12-01T05:45 (+2)
You're right! Just updated :)