Open Philanthropy: Our Progress in 2019 and Plans for 2020
By Aaron Gertler 🔸 @ 2020-05-12T11:49 (+43)
This is a linkpost to https://www.openphilanthropy.org/blog/our-progress-2019-and-plans-2020
I'm not affiliated with Open Phil; I'm just cross-posting this because no one else has done so yet.
This post compares our progress with the goals we set forth a year ago, and lays out our plans for the coming year.
In brief:
- We recommended over $200 million worth of grants in 2019. The bulk of this came from recommendations to support GiveWell’s top charities and from our major current focus areas: potential risks of advanced AI, biosecurity and pandemic preparedness, criminal justice reform, farm animal welfare, scientific research, and effective altruism.
- We’ve significantly expanded our Operations team and capacity to keep up with a growing organization and growing grantmaking.
- As with last year, we believe there are hints of impact in the causes where our giving is most mature and near-term: criminal justice reform and farm animal welfare. We’ve made some progress on evaluating our work internally, but don’t yet have publishable material from this work. We plan to continue developing our impact evaluation function.
- We continue to develop our “worldview investigations” function, which seeks to document, and invite more internal and external debate on, debatable views we hold that play a key role in our cause prioritization. We now have three in-process writeups on the question of whether there’s a strong case for a reasonable likelihood of transformative AI being developed within the next couple of decades. All are in relatively late stages and could be finished (though still not necessarily public-ready) within months.
- We have started to build a team focused on investigating our odds of finding a significant volume of giving opportunities stronger than GiveWell’s top charities in the “near-termist” giving category. By this time next year, we hope to have a working model (though one subject to heavy revision) of how much to give each year in this category to GiveWell’s top charities and other “near-termist” causes.
- We announced a co-funding partnership with Ben Delo, co-founder of the cryptocurrency trading platform BitMEX and a recent Giving Pledge signatory. This partnership grew out of Ben’s work with the non-profit Effective Giving UK.
- Hiring and outreach will remain relatively low priorities over the coming year as we continue to focus on building our functions for worldview investigations, impact evaluation, and cause prioritization while maintaining our current level of giving.
Progress in 2019
Last year’s post laid out plans for 2019. This section quotes from that post to allow comparison between our plans and our progress.
Continued grantmaking
Last year, we wrote:
We expect to continue grantmaking in potential risks of advanced AI, biosecurity and pandemic preparedness, criminal justice reform, farm animal welfare, scientific research, and effective altruism. We expect that the total across these areas will be well over $100 million.
We hit our goal of giving well over $100 million across these six programs, and our total giving recommendations (including recommendations to support GiveWell’s top charities) were over $200 million. Some highlights:
- In potential risks of advanced AI, we continued our support for the Machine Intelligence Research Institute and Ought, and welcomed the second class of Open Philanthropy AI Fellows. (The largest recommendation in 2019 was a five-year grant to launch the Center for Security and Emerging Technology at Georgetown, but we included that in our 2018 review.)
- Our biosecurity and pandemic preparedness program, which was in transition for most of 2019 after new Program Officer Andrew Snyder-Beattie joined Open Philanthropy in April, renewed our support for the Johns Hopkins Center for Health Security.
- In criminal justice reform, major grants include ongoing support to the National Council for Incarcerated and Formerly Incarcerated Women and Girls, the Alliance for Safety and Justice, The Justice Collaborative, the Texas Organizing Project, and Essie Justice Group.
- In farm animal welfare, major grants include Mercy for Animals, the Good Food Institute, Global Food Partners, and Compassion in World Farming.
- In scientific research, major grants and investments include Sherlock Biosciences, Cincinnati Children’s Hospital Medical Center, Kyoto University, and the Broad Institute.
- In effective altruism, major grants include ongoing support to 80,000 Hours and the Centre for Effective Altruism. These grants and others were part of our new approach to grantmaking in this category.
We also wrote:
By default, we plan to continue with our relatively low level of focus and resource deployment in other areas (e.g., macroeconomic stabilization policy).
Other grants included the Center for Global Development (Global Health and Development), California YIMBY (Land Use Reform), the International Refugee Assistance Project (Immigration Policy), Employ America (Macroeconomic Stabilization Policy), and the Center for Election Science (Other).
Operations
We’ve significantly expanded our Operations team, hiring Rinad Al-Anakrih, Povneet Dhillon, Leena Jones, Kira Maker, Eli Nathan, and Matthew Poe over the last year.
This expansion has been needed: our grants team now manages significant grant volume (311 grants in 2019, with a median of 13 days between grant recommendation and payment), and Open Philanthropy now numbers over 40 people. In addition to building and strengthening our culture and systems, the Operations team has made it easier for us to conduct events such as a retreat for our AI Fellows, has helped us build more robust recruiting processes, has greatly improved our office space, and more.
(Unlike some of the functions discussed below, Operations is a familiar function that doesn’t require much explanation; the relatively brief length of this section shouldn’t be taken as indicating lower importance.)
Impact evaluation
Last year, we wrote:
Our next step on self-evaluation is to build an internal function — which we’re currently calling impact evaluation — that can provide some degree of independent assessment of these portfolio reviews, and of our overall impact in a given area. We expect that it could take substantial time and experimentation before we develop an impact evaluation process that we’re happy with … We don’t have definite, dated goals for this work yet, as it’s at an early stage, but we hope that by 2020 we will have (a) a much better read on our impact for at least 1-2 grant portfolios to date; (b) a plan for beginning to scale the impact evaluation team and process.
We’ve now completed one major case study, and have several smaller writeups in progress, for cases where we think our funding has plausibly led to significant impact. These writeups are internal; in many cases the content is based on frank conversations with grantees and others in the fields we work in, which makes it unsuitable for publication.
We feel that we are gaining clarity on how our grantmaking has performed in causes such as criminal justice reform and farm animal welfare (where our giving is relatively mature and seeks relatively near-term results), but we haven’t yet developed a robust, repeatable process for investigating potential cases of impact. Over the coming year, we hope to get to the point where our process is robust enough that we’re comfortable starting to hire further people for the Impact Evaluation team (this means we would have a job description ready, not necessarily that we would have made hires yet).
Worldview investigations
Last year, we wrote:
In 2019, we will be building out a function tentatively called “worldview investigations,” which will be a major priority for new Research Analyst hires. This function will aim to:
- Identify debatable views we hold that play a key role in our cause prioritization, such as the view that there’s a nontrivial likelihood of transformative artificial intelligence being developed by 2036.
- Put concentrated effort into examining the arguments for and against these views.
- Create resources covering the arguments for and against these views as we see them. We have not yet decided what form these resources should take. Our best guess is that they will include Open Phil write-ups with strong reasoning transparency, but they may also include or instead be reports produced by contractors/grantees, recorded conversations covering these arguments, summaries of such conversations, or something else. The goal of these resources will be both to make our own picture more precise and to make it easier for outsiders to understand and critique it, which in turn will hopefully raise the odds that we are able to subject key cause-prioritization-driving views to maximal critical scrutiny. (This could have major benefits whether or not the views withstand such scrutiny; we’d consider it a major benefit if we either changed our minds or caused people who currently disagree to change theirs.)
We expect that it could take substantial time and experimentation before we develop an approach that we’re happy with for worldview investigations … As with impact evaluation, this work is at an early stage and does not yet have definite dated goals, but we hope that by 2020 we will have (a) fairly thorough writeups (not necessarily public-ready) on at least 1-2 beliefs that are key to our cause prioritization; (b) a plan for beginning to scale the worldview investigations team and process.
This work has been significantly more challenging than expected (and we expected it to be challenging). Most of the key questions we’ve chosen to investigate and write about are wide-ranging questions that draw on a number of different fields, while not matching the focus and methodology of any one field. They therefore require the relevant Research Analyst to try to get up to speed on multiple substantial literatures, while realizing they will never be truly expert in any of them; to spend a lot of time getting feedback from experts in relevant domains; and to make constant difficult judgment calls about which sub-questions to investigate thoroughly vs. relatively superficially. These basic dynamics are visible in our report on moral patienthood, the closest thing we have to a completed, public worldview investigation writeup.
We initially started investigating a number of questions relevant to potential risks from advanced AI, but as we revised our expectations for how long each investigation might take, we came to focus the team exclusively on the question of whether there’s a strong case for a reasonable likelihood of transformative AI being developed within the next couple of decades.
We now have three in-process writeups covering different aspects of this topic; all are in relatively late stages and could be finished (though still not necessarily public-ready) within months. We have made relatively modest progress on being able to scale the team and process; our assignments are better-scoped than they were a year ago, and we’ve added one new hire (Tom Davidson) focused on this work, but we still consider this a very hard area to hire for.
Other cause prioritization work
Last year, we wrote:
We see our work on impact evaluation and worldview investigations as providing key inputs into our cause prioritization. We don’t plan on doing much other cause prioritization work in 2019, and for the time being we are likely to avoid major growth in our total giving.
Our picture on this front has evolved:
- A key distinction at Open Philanthropy is between long-termist vs. near-termist giving. We’ve previously stated that it’s “reasonably likely that we will recommend allocating >50% of all available capital to giving directly aimed at improving the odds of favorable long-term outcomes for civilization [long-termist giving].”
- While this is still the case, we believe that at some point the annual amount Open Philanthropy spends on near-termist giving will rise significantly. Accordingly, we need all three of the following: (a) a plan for deciding how to divide capital between near-termist and long-termist giving; (b) a plan for cause prioritization within long-termist giving; (c) a plan for cause prioritization within near-termist giving.
- The worldview investigations work we’re doing (discussed above) is crucial for (a) and (b), but not as much for (c). We’ve come to believe that (c) requires a fundamentally different kind of team and set of investigations.
- Alexander Berger is now leading our work on (c), and he conducted a job search for Research Fellows that resulted in two new hires: Peter Favaloro and Zachary Robinson. This team is now investigating our odds of finding a significant volume of giving opportunities that are stronger than GiveWell’s top charities in terms of near-term cost-effectiveness, which in turn will help determine what new causes we want to enter and how our annual rate of giving should change over time on the “near-termist” side.
Hiring and other capacity building
Last year, we wrote:
We are in the midst of another round of hiring for our Research Analyst roles, though this round has not been publicly advertised and we aren’t currently taking new applications. Unlike last year, when we took many people on for simultaneous trials, we will probably instead trial a much smaller number of RA applicants per year, with each trial period more customized to each trialist.
We hired only one new Research Analyst in the past year, rather than a full round of trialists as in 2018. We also hired two Research Fellows and a number of Operations staff (see previous sections), as well as a new Communications Associate, Gabriela Romero.
We highlighted our new hires in this blog post.
Outreach to external donors
Last year, we wrote:
Outreach to external donors will remain a relatively low priority for the organization as a whole, though it may be a higher priority for particular staff.
In November, we announced a co-funding partnership with Ben Delo, co-founder of the cryptocurrency trading platform BitMEX and a recent Giving Pledge signatory. Ben will be providing funds (initially in the $5 million per year range as he gets started with his giving) for Open Philanthropy to allocate to our long-termist grantmaking, which assesses giving opportunities by how well they advance favorable long-term outcomes for civilization. This partnership grew out of Ben’s work with the non-profit Effective Giving UK.
Close partnerships of this type have so far been rare for Open Philanthropy, and pursuing them is still not currently a major organizational priority. However, we aspire to eventually work with many donors in order to maximize our impact. We want to be flexible in terms of relationship structures, and can imagine a variety of different forms.
Additionally, as discussed previously, we have continued to work significantly with other donors interested in particularly mature focus areas where our Program Officers see promising giving opportunities that outstrip their budgets (especially criminal justice reform and farm animal welfare).
Plans for 2020
Our major goals for 2020 are as follows:
Continued grantmaking. We expect to continue grantmaking in potential risks of advanced AI, biosecurity and pandemic preparedness, criminal justice reform, farm animal welfare, scientific research, and effective altruism, as well as recommending support for GiveWell’s top charities. We expect that the total across these areas will be over $200 million. By default, we plan to continue with our relatively low level of focus and resource deployment in other areas (e.g., macroeconomic stabilization policy).
Impact evaluation. Over the coming year, we hope to get to the point where our process is robust enough that we’re comfortable starting to hire further people for the Impact Evaluation team (this means we would have a job description ready, not necessarily that we would have made hires yet).
Worldview investigations. We expect to continue to build out our worldview investigations function in 2020, as discussed above. This work is at an early stage and does not yet have definite dated goals, but we hope that this year we will finalize the three draft reports mentioned above.
Other cause prioritization work. We now have a team working on investigating our odds of finding a significant volume of giving opportunities in the “near-termist” bucket that are stronger than GiveWell’s top charities, which in turn will help determine what new causes we want to enter and what our annual rate of giving should be on the “near-termist” side. By this time next year, we hope to have a working model (though one subject to heavy revision) of how much to give each year in this category to GiveWell’s top charities and other “near-termist” causes.
Hiring and other capacity building will not be a major focus for the coming year, though we will open searches for new roles as needed.
Outreach to external donors will remain a relatively low priority for the organization as a whole, though it may be a higher priority for particular staff.
RyanCarey @ 2020-05-12T18:20 (+45)
- Here's an updated ipynb with OpenPhil's annual spending, showing the breakdown with respect to EA-relevant areas.
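(For readers curious about the mechanics: below is a minimal, hypothetical sketch of the kind of year-by-area aggregation such a notebook might perform. This is not Ryan's actual notebook; it assumes a CSV export of OpenPhil's public grants database, and the file name and column names ("Date", "Focus Area", "Amount") are placeholders rather than a real schema.)

```python
# Hypothetical sketch of an annual spending breakdown by focus area.
# Assumes a CSV export of OpenPhil's public grants database; the file
# name and column names below are placeholders, not a real schema.
import pandas as pd

grants = pd.read_csv("openphil_grants.csv", parse_dates=["Date"])
grants["Year"] = grants["Date"].dt.year

# Sum grant amounts per (year, focus area), then pivot so each focus
# area becomes a column of annual totals.
by_area = (
    grants.groupby(["Year", "Focus Area"])["Amount"]
    .sum()
    .unstack(fill_value=0)
)
print(by_area.tail())
```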
My main impressions:
- Having Ben Delo's participation is great.
- OpenPhil and its staff working hard on allocating these funds is absolutely great (it's obvious, yet worth saying over and over again.)
- It would be nice to see more new kinds of grants (to longtermist causes) from EA, via OpenPhil and otherwise. The kinds of grants have been relatively stagnant over the last few years; e.g., the typical x-risk grant is a few million dollars to an academic research group. Can we also fund more interventions, or projects in other sectors?
- The Open Phil AI Fellowship places substantial weight on the excellence of applicants' supervision, institutional affiliation, and publication record. But there seems to be very little weight on the relevance of the work done - I've only come across a few papers by any of the 2018-2020 fellows through my work on various aspects of AI x-risk. I've heard many people better-informed than me argue that this is likely to be relatively unproductive, in the sense that excellent researchers working in unrelated areas will tend to accept funding without substantially shifting their research direction. I'm as excited about academic excellence as almost anyone in AI safety, yet in the case of the Fellowship this assessment sounds about right to me, and I haven't really heard anyone argue the opposing view - it would be interesting to understand the thinking here better.
catherio @ 2020-05-14T09:56 (+6)
Hi Ryan - in terms of the Fellowship, I have a lot of thoughts about what we're trying to do, which feel better suited to "musing, with uncertainty" than "writing an internet comment", so let me know if you want to call/chat about it some time? But the short answer is I think the key pieces to keep in mind are to view the fellowship as 1) a community, not just individual scholarships handed out, and as such also 2) a multi-year project, built slowly.
RyanCarey @ 2020-05-14T20:22 (+12)
Hey Catherio, sure, I've been puzzled by this for long enough that I'll probably reach out for a call.
Community effects could still be mediated by the relevance of participants' research interests. Anyway, I'm also pretty uncertain and interested to see the results as they come in over the coming years.
Linch @ 2020-09-06T11:14 (+2)
Did you guys end up doing this call? If so, do you feel you have a (compressed) understanding of, and/or agreement with, OpenPhil's position here?
RyanCarey @ 2020-09-06T15:14 (+2)
We haven't done a call yet!
RyanCarey @ 2020-09-06T10:41 (+2)
OpenPhil has introduced early-career funding for people interested in improving the long-term future, including AI safety: https://www.openphilanthropy.org/focus/other-areas/early-career-funding-individuals-interested-improving-long-term-future
This should cause their overall portfolio of AI scholarships to place more weight on the relevance of research done, which seems like an improvement to me.