Update on Cause Prioritization at Open Philanthropy

By Holden Karnofsky @ 2018-01-26T16:40 (+11)

This is a linkpost to https://www.openphilanthropy.org/blog/update-cause-prioritization-open-philanthropy

Last year, we wrote:

A major goal of 2017 will be to reach and publish better-developed views on:

This post gives an update on this work.

The questions we’re tackling here are complex, and we are still far from having a fully developed framework.

However, we do have a tentative high-level approach to these questions, and some rough expectations about a few high-level conclusions (at least as far as the next few years are concerned). Hopefully, laying these out will clarify - among other things - (a) why we continue to work on multiple highly disparate causes; (b) ranges for what sort of budgets we expect in the next few years for each of the focus areas we currently work in; (c) how we decided how much to recommend that Good Ventures donate to GiveWell’s top charities for 2017.

In brief:

Key worldview choices and why they might call for diversification

Over the coming years, we expect to increase the scale of our giving significantly. Before we do so, we’d like to become more systematic about how much we budget for each of our different focus areas, as well as about what we budget for one year vs. another (i.e., how we decide when to give immediately vs. save the money for later).

At first glance, the ideal way to tackle this challenge would be to establish some common metric for grants. To simplify, one might imagine a metric such as “lives improved (adjusted for degree of improvement) per dollar spent.” We could then use this metric to (a) compare what we can accomplish at different budget sizes in different areas; (b) make grants when they seem better than our “last dollar” (more discussion of the “last dollar” concept here), and save the money instead when they don’t. Our past discussions of our approach to “giving now vs. later” (here and here) have implied an approach along these lines. I will refer to this approach as the “default approach” to allocating capital between causes (and will later contrast it with a “diversifying approach” that divides capital into different “buckets” using different metrics).

A major challenge here is that many of the comparisons we’d like to make hinge on very debatable questions involving deep uncertainty. In Worldview Diversification, we characterized this dilemma, gave examples, and defined a “worldview” (for our purposes) as follows:

I’ll use “worldview” to refer to a set of highly debatable (and perhaps impossible to evaluate) beliefs that favor a certain kind of giving. One worldview might imply that evidence-backed charities serving the global poor are far more worthwhile than [other options]; another might imply that farm animal welfare is; another might imply that global catastrophic risk reduction is. A given worldview represents a combination of views, sometimes very difficult to disentangle, such that uncertainty between worldviews is constituted by a mix of empirical uncertainty (uncertainty about facts), normative uncertainty (uncertainty about morality), and methodological uncertainty (e.g. uncertainty about how to handle uncertainty …)

Below, we list some of what we view as the most crucial worldview choices we are facing, along with notes on why they might call for some allocation procedure other than the “default approach” described above - and a brief outline (fleshed out somewhat more in later sections) on what an alternative procedure might look like.

Animal-inclusive vs human-centric views

As we stated in our earlier post on worldview diversification:

Some people think that animals such as chickens have essentially no moral significance compared to that of humans; others think that they should be considered comparably important, or at least 1-10% as important. If you accept the latter view, farm animal welfare looks like an extraordinarily outstanding cause, potentially to the point of dominating other options: billions of chickens are treated incredibly cruelly each year on factory farms, and we estimate that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. But if you accept the former view, this work is arguably a poor use of money.

(Note: this quote leaves out the caveat that this picture could change dramatically if one adopts the “long-termist” view discussed in a later section. For simplicity, the rest of this section will assume that one is focused on relatively near-term good accomplished rather than taking the “long-termist” view discussed below.)

We’ve since published an extensive report on moral patienthood that grew out of our efforts to become better informed on this topic. However, we still feel that we have relatively little to go on, to the point where the report’s author wasn’t comfortable publishing even his roughest guesses at the relative moral weights of different animals. Although he did publish his subjective probabilities that different species have “consciousness of a sort I intuitively morally care about,” these are not sufficient to establish relative weight, and one of the main inputs into these probabilities is simple ignorance/agnosticism.

There are many potential judgment calls to be made regarding moral weight - for example, two people might agree on the moral weight of cows (relative to humans) while strongly disagreeing on the moral weight of chickens, or agree on chickens but disagree on fish. For our purposes, we focus on one high-level disagreement between two views:

Handling uncertainty about animal-inclusive vs. human-centric views

If we were taking the “default approach” noted above, we could handle our uncertainty on this front by assigning a subjective probability to each of the “animal-inclusive” or “human-centric” views (or, for more granularity, assigning subjective probability distributions over the “moral weights” of many different species relative to humans) and making all grants according to whatever maximizes some metric such as “expected years of life improved,[4] adjusted for moral weight.” For example, if one thinks there’s a 50% chance that one should be weighing the interests of chickens 1% as much as those of humans, and a 50% chance that one should not weigh them at all, one might treat this situation as though chickens have an “expected moral weight” of 0.5% (50% * 1% + 50% * 0) relative to humans. This would imply that (all else equal) a grant that helps 300,000 chickens is better than a grant that helps 1,000 humans, while a grant that helps 100,000 chickens is worse.
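To make the arithmetic concrete, here is a minimal sketch in Python using only the hypothetical numbers from the example above (the figures are illustrative, not actual Open Philanthropy parameters):

```python
# Expected moral weight of chickens relative to humans: 50% credence in a 1%
# weight and 50% credence in a weight of zero (hypothetical numbers).
expected_weight = 0.5 * 0.01 + 0.5 * 0.0   # = 0.005, i.e. 0.5%

def human_equivalents(humans_helped, chickens_helped, weight=expected_weight):
    """Value of a grant in expected 'human-equivalents helped', under the
    default (expected-value) approach described above."""
    return humans_helped + chickens_helped * weight

# The comparison from the example: helping 300,000 chickens beats helping
# 1,000 humans, while helping 100,000 chickens does not.
print(human_equivalents(0, 300_000))   # 1500.0 (> 1000)
print(human_equivalents(0, 100_000))   # 500.0  (< 1000)
print(human_equivalents(1_000, 0))     # 1000.0
```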

This default approach has several undesirable properties. We discussed these somewhat in our previous post on worldview diversification, but since our thinking has evolved, we list the main issues we see with the default approach below.

Issue 1: normative uncertainty and philosophical incommensurability

The “animal-inclusive” vs. “human-centric” divide could be interpreted as being about a form of “normative uncertainty”: uncertainty between two different views of morality. It’s not entirely clear how to create a single “common metric” for adjudicating between two views. Consider:

These methods have essentially opposite practical implications. Method A is the more intuitive one for me (it implies that the animal-inclusive view sees “more total value at stake in the world as a whole,” and this implication seems correct), but the lack of a clear principle for choosing between the two should give one pause, and there’s no obviously appropriate way to handle this sort of uncertainty. One could argue that the two views are “philosophically incommensurable” in the sense of dealing with fundamentally different units of value, with no way to identify an equivalence-based conversion factor between the two.

This topic is further discussed in Chapter 4 of MacAskill 2014.

Issue 2: methodological uncertainty and practical incommensurability

As stated above, a major potential reason for taking the human-centric view is “being suspicious, methodologically speaking, of estimating moral weights in an explicit (and in practice largely agnosticism-based) framework, and therefore opting for a conventional/’non-radical’ set of views in one’s state of ignorance.” Yet the default approach essentially comes down to evaluating this concern using an explicit (and in practice largely agnosticism-based) framework and embracing whatever radical implications result. It therefore seems like a question-begging and inappropriate methodology for handling such a concern.

It’s not clear what methodology could adjudicate a concern like this in a way that is “fair” both to the possibility this concern is valid and the possibility that it isn’t. Because of this, one might say that the two views are “practically incommensurable”: there is no available way to reasonably, practically come up with “common metrics” and thus make apples-to-apples comparisons between them.

Issue 3: practical considerations against “putting all our eggs in one basket”

We believe that if we took the default approach in this case, there’s a strong chance that we would end up effectively going “all-in” on something very similar to the animal-inclusive view.[7] This could mean focusing our giving on a few cause areas that are currently extremely small, as we believe there are very few people or organizations that are both (a) focused on animal welfare and (b) focused on having highly cost-effective impact (affecting large numbers of animals per dollar). Even if these fields grew in response to our funding, they would likely continue to be quite small and idiosyncratic relative to the wider world of philanthropic causes.

Over time, we aspire to become the go-to experts on impact-focused giving; to become powerful advocates for this broad idea; and to have an influence on the way many philanthropists make choices. Broadly speaking, we think our odds of doing this would fall greatly if we were all-in on animal-focused causes. We would essentially be tying the success of our broad vision for impact-focused philanthropy to a concentrated bet on animal causes (and their idiosyncrasies) in particular. And we’d be giving up many of the practical benefits we listed previously for a more diversified approach. Briefly recapped, these are: (a) being able to provide tangibly useful information to a large set of donors; (b) developing staff capacity to work in many causes in case our best-guess worldview changes over time; (c) using lessons learned in some causes to improve our work in others; (d) presenting an accurate public-facing picture of our values; and (e) increasing the degree to which, over the long run, our expected impact matches our actual impact (which could be beneficial for our own, and others’, ability to evaluate how we’re doing).

Issue 4: the “outlier opportunities” principle

We see a great deal of intuitive appeal in the following principle, which we’ll call the “outlier opportunities” principle:

if we see an opportunity to do a huge, and in some sense “unusual” or “outlier,” amount of good according to worldview A by sacrificing a relatively modest, and in some sense “common” or “normal,” amount of good according to worldview B, we should do so (presuming that we consider both worldview A and worldview B highly plausible and reasonable and have deep uncertainty between them).

To give a hypothetical example, imagine that:

In this hypothetical, the outlier opportunity would be ~1000x as cost-effective as the other top human-centric opportunities, but still <50% as cost-effective as the vast amount of work to be funded on cage-free reforms. In this hypothetical, I think there’s a strong intuitive case for funding the outlier opportunity nonetheless. (I think even more compelling cases can be imagined for some other worldview contrasts, as in the case of the “long-termist” vs. “near-termist” views discussed below.)

The outlier opportunities principle could be defended and debated on a number of grounds, and some version of it may follow straightforwardly from handling “incommensurability” between worldviews as discussed above. However, we think the intuitive appeal of the principle is worth calling out by itself, since one might disagree with specific arguments for the principle while still accepting some version of it.

It’s unclear how to apply the outlier opportunities principle in practice. It’s occurred to us that the first $X we allocate according to a given worldview might, in many cases, be an “outlier opportunity” for that worldview, for the minimum $X that allows us to hire staff, explore giving opportunities, make sure to fund the very best ones, and provide guidance to other donors in the cause. This is highly debatable for any given specific case. More broadly, some may find the outlier opportunities principle compelling as an indication that the default approach has a drawback in principle.

A simple alternative to the default approach

When considering the choice between the animal-inclusive and human-centric worldviews, one simple alternative to the default approach would be to split our available capital into two equally sized buckets, one corresponding to each worldview. This would mean allocating half of the capital so as to maximize humans helped, and half so as to maximize a metric more like “species-adjusted persons helped, where chickens and many other species generally count more than 1% as much as humans.”

I’ll note that this approach is probably intuitively appealing for misleading reasons (more here), i.e., it has less going for it than one might initially guess. I consider it a blunt and unprincipled approach to uncertainty, but one that does seem to simultaneously relieve many of the problems raised above:

As discussed later in this post, I think this approach can be improved on, but I find it useful as a starting-point alternative to the default approach.

Long-termist vs. near-termist views

We’ve written before about the idea that:

most of the people we can help (with our giving, our work, etc.) are people who haven’t been born yet. By working to lower global catastrophic risks, speed economic development and technological innovation, and generally improve people’s resources, capabilities, and values, we may have an impact that (even if small today) reverberates for generations to come, helping more people in the future than we can hope to help in the present.

In the >3 years since that post, I’ve come to place substantially more weight on this view, for several reasons:

I characterize the “long-termist view” as combining: (a) population ethics that assigns reasonably high moral weight to the outcome of “a person with high well-being, who otherwise would not have existed” (at least 1% relative to e.g. “a person with high well-being, who would otherwise have had low well-being”);[8] (b) methodological comfort with statements such as “Good philanthropy may have a nontrivial impact on the likelihood that future civilization is very large (many generations, high population per generation) and very high in well-being; this impact would be very high relative to any short- or medium-term impact that can be had.” I believe that in practice, one who is comfortable with the basic methodological and moral approach here is likely to end up assessing grants primarily based on how well they advance the odds of favorable long-term outcomes for civilization.

(One could also reach a long-termist conclusion (assessing grants primarily based on how well they advance the odds of favorable long-term outcomes for civilization) without accepting the population ethics laid out in (a). However, I feel that this would likely require an even greater degree of (b), methodological willingness to give based on speculation about the long-term future. For example, one might devote all of one’s effort to minimizing the odds of a very large long-term future filled with suffering or attempting to otherwise improve humanity’s long-term trajectory. From a practical perspective, I think that’s a much narrower target to hit than reducing the odds of human extinction.)

An alternative perspective, which I will term the “near-termist view,” holds some appeal for me as well:

A “near-termist” view might call for assessing grants based on the amount of good done per dollar that could be observable, in principle, within the next ~50 years;[9] or perhaps might be willing to count benefits to future generations, but with a cap of something like 10-100x the number of persons alive today. I think either of these versions of “near-termism” would reduce the consequences of the above two concerns, while having the obvious drawback of excluding important potential value from the assessment of grants.

Similarly to the “animal-inclusive” vs. “human-centric” split, the “long-termist” vs. “near-termist” split is a crude simplification of many possible disagreements. However, in practice, I believe that most people either (a) accept the basic logic of the “long-termist” argument, or (b) reject its conclusions wholesale (often for ambiguous reasons that may be combining moral and methodological judgments in unclear ways) and could reasonably be classified as “near-termist” according to something like the definitions above.

The two views have radically different implications, and I think all four issues listed previously apply, in terms of reasons to consider something other than the default approach to allocation:

For these reasons, I think there is some appeal to handling long-termism vs. near-termism using something other than the “default approach” (such as the simple approach mentioned above).

Some additional notes on worldview choices

We consider each possible combination of stances on the above two choices to be a “worldview” potentially worthy of consideration. Specifically, we think it’s worth giving serious consideration to each of: (a) the animal-inclusive long-termist worldview; (b) the animal-inclusive near-termist worldview; (c) the human-centric long-termist worldview; (d) the human-centric near-termist worldview.

That said, I currently believe that (a) and (c) have sufficiently overlapping practical implications that they can likely be treated as almost the same: I believe that a single metric (impact on the odds of civilization reaching a highly enlightened, empowered, and robust state at some point in the future) serves both well.

In addition, there may be other worldview choices that raise similar issues to the two listed above, and similarly call for something other than the “default approach” to allocating capital. For example, we have considered whether something along the lines of sequence vs. cluster thinking might call for this treatment. At the moment, though, my best guess is that the main worldviews we are deciding between (and that raise the most serious issues with the default approach to allocation) are “long-termist,” “near-termist animal-inclusive,” and “near-termist human-centric.”

Some other criteria for capital allocation

Our default starting point for capital allocation is to do whatever maximizes “good accomplished per dollar” according to some common unit of “good accomplished.” The first complication to this approach is the set of “worldview choices” discussed above, which may call for dividing capital into “buckets” using different criteria. This section discusses another complication: there are certain types of giving we’d like to allocate capital to in order to realize certain practical and other benefits, even when they otherwise (considering only their direct effects) wouldn’t be optimal from a “good accomplished per dollar” perspective according to any of the worldviews discussed above.

Scientific research funding

We seek to have a strong scientific research funding program (focused for now on life sciences), which means:

We think the benefits of such a program are cross-cutting, and not confined to any one of the worldviews from the previous section:

These benefits are similar to those described in the capacity building and option value section of our previous post on worldview diversification.

In order to realize these benefits, I believe that we ought to allocate a significant amount of funding to scientific research (my current estimate is around $50 million per year, based on conversations with scientific advisors), with a reasonable degree of diversity in our portfolio (i.e., not all on one topic) and a substantial component directed at breakthrough fundamental science. (If we lacked any of these, I believe we would have much more trouble attracting top advisors and grantees and/or building the kind of general organizational knowledge we seek.)

Currently, we are supporting a significant amount of scientific research that is primarily aimed at reducing pandemic risk, while also hopefully qualifying as top-notch, cutting-edge, generically impressive scientific advancement. We have also made a substantial investment in Impossible Foods that is primarily aiming to improve animal welfare. However, because we seek a degree of diversity in the portfolio, we’re also pursuing a number of other goals, which we will lay out at another time.

Policy-oriented philanthropy

We seek to have a strong policy-oriented philanthropy program, which means:

We think the benefits of such a program mirror those discussed in the previous section, and are similarly cross-cutting.

In order to realize these benefits, I believe that we ought to allocate a significant amount of funding to policy-oriented philanthropy, with some degree of diversity in our portfolio (i.e., not all on one topic). In some causes, it may take ~$20 million per year to be the kind of “major player” who can attract top talent as staff and grantees; for some other causes, we can do significant work on a smaller budget.

At the moment, we have a substantial allocation to criminal justice reform. I believe this cause is currently very promising in terms of our practical goals. It has relatively near-term ambitions (some discussion of why this is important below), and Chloe (the Program Officer for this cause) has made notable progress on connecting with external donors (more on this in a future post). At this point, I am inclined to recommend continuing our current allocation to this work for at least the next several years, in order to give it a chance to have the sorts of impacts we’re hoping for (and thus contribute to some of our goals around self-evaluation and learning).

We have smaller allocations to a number of other policy-oriented causes, all of which we are reviewing and may either de-emphasize or increase our commitment to as we progress in our cause prioritization work.

Straightforward charity

Last year, I wrote:

I feel quite comfortable making big bets on unconventional work. But at this stage, given how uncertain I am about many key considerations, I would be uncomfortable if that were all we were doing … I generally believe in trying to be an ethical person by a wide variety of different ethical standards (not all of which are consequentialist). If I were giving away billions of dollars during my lifetime (the hypothetical I generally use to generate recommendations), I would feel that this goal would call for some significant giving to things on the more conventional side of the spectrum. “Significant” need not mean “exclusive” or anything close to it. But I wouldn’t feel that I was satisfying my desired level of personal morality if I were giving $0 (or a trivial amount) to known, outstanding opportunities to help the less fortunate, in order to save as much money as possible for more speculative projects relating to e.g. artificial intelligence.

I still feel this way, and my views on the matter have solidified to some degree. I now would frame this issue as a desire to allocate a significant (though not majority) amount of capital to “straightforward charity”: giving that is clearly and unambiguously driven by a desire to help the less fortunate in a serious, rational, reasonably optimized manner.

Note that this wouldn’t necessarily happen simply due to having a “near-termist, human-centric” allocation. The near-termist and human-centric worldviews are to some extent driven by a suspicion of particular methodologies as justifications for “radicalism,” but both could ultimately be quite consistent with highly unconventional, difficult-to-explain-and-understand giving (in fact, it’s a distinct possibility that some global catastrophic risk reduction work could be justified solely by its impact according to the near-termist, human-centric worldview). It’s possible that optimizing for the worldviews discussed above, by itself, would imply only trivial allocations to straightforward charity, and if so, I’d want to ensure that we explicitly set aside some capital for straightforward charity.

I still haven’t come up with a highly satisfying articulation of why this allocation seems important, and what is lost if we don’t make it. However:

I think it’s likely that we will recommend allocating some capital to a “straightforward charity” bucket, which might be described as: “Assess grants by how many people they help and how much, according to reasonably straightforward reasoning and estimates that do not involve highly exotic or speculative claims, or high risk of self-deception.” (Note that this is not the same as prioritizing “high likelihood of success.”) GiveWell was largely created to do just this, and I see it as the current best source of grant recommendations for the “straightforward charity” bucket.

My interest in “straightforward charity” is threshold-based. The things I’m seeking to accomplish here can be fully accomplished as long as there is an allocation that feels “significant” (which means something like “demonstrates a serious, and costly, commitment to this type of giving”). Our current working figure is 10% of all available capital. Hence, if the rest of our process results in less than 10% of capital going to straightforward charity, we will likely recommend “topping up” the straightforward charity allocation.
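A minimal sketch of this threshold rule, assuming hypothetical bucket names and made-up numbers (the post does not specify which other buckets a top-up would draw from, so the sketch only raises the straightforward-charity bucket to the floor):

```python
def top_up_straightforward_charity(allocations, total_capital, floor=0.10):
    """If the rest of the process gives less than `floor` (here 10%, the
    working figure above) of total capital to straightforward charity,
    raise that bucket to the floor. How the top-up is funded from other
    buckets is left unspecified, as in the post."""
    target = floor * total_capital
    if allocations.get("straightforward charity", 0.0) < target:
        allocations["straightforward charity"] = target
    return allocations

# Made-up example: only 4% of capital ends up in straightforward charity,
# so the bucket is topped up to 10%.
print(top_up_straightforward_charity(
    {"long-termist": 0.70, "straightforward charity": 0.04}, total_capital=1.0))
# {'long-termist': 0.7, 'straightforward charity': 0.1}
```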

In general, I feel that the ideal world would be full of people who focus the preponderance of their time, energy and resources on a relatively small number of bold, hits-based bets that go against established conventional wisdom and the status quo - while also aiming to “check boxes” for a number of other ethical desiderata, some of which ask for a (limited) degree of deference to established wisdom and the status quo. I’ve written about this view before here and here. I also generally am in favor of people going “easy on themselves” in the sense of doing things that make their lives considerably easier and more harmonious, even when these things have large costs according to their best-guess framework for estimating good accomplished (as long as the costs are, all together, reducing impact by <50% or so). Consistent with these intuitions, I feel that a <1% allocation to straightforward charity would be clearly too small, while a >50% allocation would be clearly too large if we see non-straightforward giving opportunities that seem likely to do far more good. Something in the range of 10% seems reasonable.

Causes with reasonable-length feedback loops

As noted above, one risk of too much focus on long-termist causes would be that most of our impact is effectively unobservable and/or only applicable in extremely low-probability cases. This would create a problem for our ability to continually learn and improve, a problem for our ability to build an informative track record, and problems on other fronts as discussed above.

Ensuring that we do a significant amount of “near-termist” work partially addresses this issue, but even when using “near-termist” criteria, many appealing causes involve time horizons of 10+ years. I think there is a case for ensuring that some of our work involves shorter feedback loops than that.

Currently, I am relatively happy with Open Philanthropy’s prospects for doing a reasonable amount of work with short (by philanthropic standards) feedback loops. Our work on farm animal welfare and criminal justice reform already seems to have had some impact (more), and seems poised to have more (if all goes well) in the next few years. So I’m not sure any special allocations for “shorter-term feedback loops” will be needed. But this is something we’ll be keeping our eye on as our allocations evolve.

Allocating capital to buckets and causes

Above, we’ve contrasted two approaches to capital allocation:

The simplest version of the “diversifying approach” would be to divide capital equally between buckets. However, we think a better version of the diversifying approach would also take into account:

The credence/weight we place on different worldviews relative to each other. Simply put, if one thinks the long-termist worldview is significantly more plausible/appealing than the near-termist worldview, one should allocate more capital to the long-termist bucket (and vice versa). One way of approaching this is to allocate funding in proportion to something like “the probability that one would endorse this worldview as correct if one went through an extensive reflective process like the one described here.” This is of course a major and subjective judgment call, and we intend to handle it accordingly. We also think it’s important to complete writeups that can help inform these judgments, such as a review of the literature on population ethics and an analysis of some possibilities for the number and size of future generations (relevant to the long-termist vs. near-termist choice), as well as our already-completed report on moral patienthood (relevant to the animal-inclusive vs. human-centric choice).

Differences in “total value at stake.” Imagine that one is allocating capital between Worldview A and Worldview B, and that one’s credences in the two worldviews are 80% and 20%, respectively - but if worldview B is correct, its giving opportunities are 1000x as good as the best giving opportunities if worldview A is correct. In this case, there would be an argument for allocating more capital to the buckets corresponding to worldview B, even though worldview B has lower credence, because it has more “total value at stake” in some sense.[10]
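As a rough illustration of how the two approaches diverge on this example (hypothetical numbers only; the “default” ranking here is by expected value per dollar, as described earlier):

```python
# Hypothetical numbers from the example: credences of 80% and 20%, with
# worldview B's best opportunities 1000x as good per dollar if B is correct.
credence = {"A": 0.80, "B": 0.20}
value_if_correct = {"A": 1.0, "B": 1000.0}

# Default approach: rank by expected value per dollar. B dominates
# (0.2 * 1000 = 200 vs 0.8 * 1 = 0.8), so nearly all capital would go to B.
expected_value = {w: credence[w] * value_if_correct[w] for w in credence}

# Simple diversifying approach: allocate in proportion to credence alone,
# i.e. 80% to A and 20% to B, ignoring the difference in value at stake.
by_credence = dict(credence)

print(expected_value)   # {'A': 0.8, 'B': 200.0}
print(by_credence)      # {'A': 0.8, 'B': 0.2}
# The compromise described next would give B more than 20% of capital, but
# far less than the near-100% the default approach alone implies.
```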

Put another way, one might want to increase the allocation to worldviews that would be effectively favored under the “default” (as opposed to “diversifying”) approach. We expect to make some degree of compromise between these two approaches: worldviews favored by the default approach will likely receive more capital, but not to a degree as extreme as the default approach alone would imply.

Deals and fairness agreements. We suggest above that the different worldviews might be thought of as different agents with fundamentally different and incommensurable goals, disagreeing about how to spend capital. This metaphor might suggest dividing capital evenly, or according to credence as stated immediately above. It also raises the possibility that such “agents” might make deals or agreements with each other for the sake of mutual benefit and/or fairness.

For example, agents representing (respectively) the long-termist and near-termist worldviews might make a deal along the following lines: “If the risk of permanent civilizational collapse (including for reasons of extinction) in the next 100 years seems to go above X%, then long-termist buckets get more funding than was originally allocated; if the risk of permanent civilizational collapse in the next 100 years seems to go below Y%, near-termist buckets get more funding than was originally allocated.” It is easy to imagine that there is some X and Y such that both parties would benefit, in expectation, from this deal, and would want to make it.
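A minimal sketch of such a deal, with placeholder thresholds and shift size (the post leaves X and Y unspecified; the 20%, 5%, and 10-percentage-point figures below are purely illustrative):

```python
def adjust_longtermist_share(baseline_share, estimated_collapse_risk,
                             x_threshold=0.20, y_threshold=0.05, shift=0.10):
    """If the estimated risk of permanent civilizational collapse over the
    next 100 years rises above X, long-termist buckets get more than their
    original allocation; if it falls below Y, near-termist buckets get more.
    All numbers are placeholders, not values from the post."""
    if estimated_collapse_risk > x_threshold:
        return min(1.0, baseline_share + shift)
    if estimated_collapse_risk < y_threshold:
        return max(0.0, baseline_share - shift)
    return baseline_share

# Example: a baseline 50% long-termist share rises to 60% if estimated risk
# is 25%, and falls to 40% if estimated risk is 2%.
print(adjust_longtermist_share(0.50, 0.25))   # 0.6
print(adjust_longtermist_share(0.50, 0.02))   # 0.4
```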

We can further imagine deals that might be made behind a “veil of ignorance” (discussed previously). That is, if we can think of some deal that might have been made while there was little information about e.g. which charitable causes would turn out to be important, neglected, and tractable, then we might “enforce” that deal in setting the allocation. For example, take the hypothetical deal between the long-termist and near-termist worldviews discussed above. We might imagine that this deal had been struck before we knew anything about the major global catastrophic risks that exist, and we can now use the knowledge about global catastrophic risks that we have to “enforce” the deal - in other words, if risks are larger than might reasonably have been expected before we looked into the matter at all, then allocate more to long-termist buckets, and if they are smaller allocate more to near-termist buckets. This would amount to what we term a “fairness agreement” between agents representing the different worldviews: honoring a deal they would have made at some earlier/less knowledgeable point.

Fairness agreements appeal to us as a way to allocate more capital to buckets that seem to have “especially good giving opportunities” in some sense. It seems intuitive that the long-termist view should get a larger allocation if e.g. tractable opportunities to reduce global catastrophic risks seem in some sense “surprisingly strong relative to what one would have expected,” and smaller if they seem “surprisingly weak” (some elaboration on this idea is below).

Methods for coming up with fairness agreements could end up making use of a number of other ideas that have been proposed for making allocations between different agents and/or different incommensurable goods, such as allocating according to minimax relative concession; allocating in order to maximize variance-normalized value; and allocating in a way that tries to account for (and balance out) the allocations of other philanthropists (for example, if we found two worldviews equally appealing but learned that 99% of the world’s philanthropy was effectively using one of them, this would seem to be an argument - which could have a “fairness agreement” flavor - for allocating resources disproportionately to the more “neglected” view). The “total value at stake” idea mentioned above could also be implemented as a form of fairness agreement. We feel quite unsettled in our current take on how best to practically identify deals and “fairness agreements”; we could imagine putting quite a bit more work and discussion into this question.
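As one illustration of the “variance-normalized value” idea, the sketch below rescales each worldview’s scores to a common spread before taking a credence-weighted sum; this is only one possible reading of that proposal, and all names and numbers are hypothetical:

```python
import statistics

def variance_normalized_scores(scores_by_worldview, credences):
    """Rescale each worldview's option scores to unit standard deviation,
    then take a credence-weighted sum per option, so that a worldview using
    a much larger value scale does not automatically swamp the others."""
    options = list(next(iter(scores_by_worldview.values())))
    combined = {opt: 0.0 for opt in options}
    for view, scores in scores_by_worldview.items():
        spread = statistics.pstdev(scores.values()) or 1.0  # guard zero spread
        for opt in options:
            combined[opt] += credences[view] * scores[opt] / spread
    return combined

# Hypothetical example: the long-termist view scores options on a far larger
# scale; after normalization the two views pull with comparable force.
print(variance_normalized_scores(
    {"long-termist": {"grant_x": 1000.0, "grant_y": 10.0},
     "near-termist": {"grant_x": 1.0, "grant_y": 3.0}},
    {"long-termist": 0.5, "near-termist": 0.5}))
```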

Practical considerations. We are likely to recommend making some allocations for various practical purposes - in particular, creating buckets for scientific research funding, policy-oriented philanthropy, and straightforward charity, as discussed above.

How will we incorporate all of these considerations? We’ve considered more than one approach to allocation, and we haven’t settled on a definite process yet. For now, a few notes on properties we expect our process to have:

Likely outputs

This section discusses some reasonably likely outputs from the above process. All of these could easily change dramatically, but the general outputs listed here seem likely enough to help give readers a picture of what assumptions we are tentatively allowing to affect our planning today.

Global catastrophic risks and other long-term-oriented causes

I see several reasons to expect that we will recommend a very large allocation to global catastrophic risks and other causes primarily aimed at raising the odds of good long-term outcomes for civilization:

Put differently, despite my own current skepticism about the population ethics that seems most conducive to long-termism:

Given such a situation, it seems reasonable to me to devote a very large part of our resources to this sort of giving.

I note that the case for long-termism has largely been brought to our attention via the effective altruism community, which has emphasized similar points in the past.[12] I think the case for this sort of giving is initially unintuitive relative to e.g. focusing on global health, but I think it’s quite strong, and that gives some illustration of the value of effective altruism itself as an intellectual framework and community.

I think it is reasonably likely that we will recommend allocating >50% of all available capital to giving directly aimed at improving the odds of favorable long-term outcomes for civilization. This could include:

Policy-oriented philanthropy and scientific research funding

As indicated above, we will likely want to ensure that we have substantial, and somewhat diversified, programs in policy-oriented philanthropy and scientific research funding, for a variety of practical reasons. I expect that we will recommend allocating at least $50 million per year to policy-oriented causes, and at least $50 million per year to scientific-research-oriented causes, for at least the next 5 or so years.

Many details remain to be worked out on this front. When possible, we’d like to accomplish the goals of these allocations while also accomplishing the goals of other worldviews; for example, we have funded scientific research that we feel is among the best giving opportunities we’ve found for biosecurity and pandemic preparedness, while also making a major contribution to the goals we have for our scientific research program. However, there is also some work that will likely not be strictly optimal (considering only the direct effects) from the point of view of any of the worldviews listed in this section. We choose such work partly for reasons of inertia from previous decisions, the preferences of specialist staff, and similar considerations, as well as an all-else-equal preference for reasonable-length feedback loops (though we will always take importance, neglectedness, and tractability strongly into account).

Straightforward charity

As discussed above, we will likely recommend allocating something like 10% of available capital to a “straightforward charity” worldview, which in turn will likely correspond (for the near future) to following GiveWell recommendations. The implications for this year’s allocation to GiveWell’s top charities are discussed below.

Other outputs

I expect to recommend a significant allocation to near-termist animal-inclusive causes, and I expect that this allocation would mostly go to farm animal welfare in the near to medium term.

Beyond the above, I’m quite unsure of how our allocation will end up.

However, knowing the above points gives us a reasonable amount to work with in planning for now. It looks like we will maintain (at least for the next few years), but not necessarily significantly expand, our work on criminal justice reform, farm animal welfare, and scientific research, while probably significantly expanding our work on global catastrophic risk reduction and related causes.

Smoothing and inertia

When working with cause-specific specialist staff, we’ve found it very helpful to establish relatively stable year-to-year budgets. This helps them plan; it also means that we don’t need to explicitly estimate the cost-effectiveness of every grant and compare it to our options in all other causes. The latter is relatively impractical when much of the knowledge about a grant lives with the specialist while much of the knowledge of other causes lives with others. In other words, rather than decide on each grant separately using information that would need to be integrated across multiple staff, we try to get an overall picture of how good the giving opportunities tend to be within a given focus area and then set a relatively stable budget, after which point we leave decisions about which grants to make mostly up to the specialist staff.

We’ve written before about a number of other benefits to committing to causes. In general, I believe that philanthropy (and even more so hits-based philanthropy) operates best on long time frames, and works best when the philanthropist can make use of relationships and knowledge built over the course of years.

For these reasons, the ultimate output of our framework is likely to incorporate aspects of conservatism, such as:

Funding aimed directly at better informing our allocation between buckets

Making informed and thoughtful decisions about capital allocation is very valuable to us, and we expect it to be an ongoing use of significant staff time over the coming years.

We’re also open to significant capital allocations aimed specifically at this goal (for example, funding research on the relative merits of the various worldviews and their implicit assumptions) if we see good opportunities to make them. Our best guess is that, by default, there will be a relatively small amount (in dollars) of such opportunities. It’s also possible that we could put significant time into helping support the growth of academic fields relevant to this topic, which could lead to more giving opportunities along these lines; I’m currently unsure about how worthwhile this would be, relative to other possible uses of the same organizational capacity.

2017 allocation to GiveWell top charities

For purposes of our 2017 year-end recommendation, we started from the assumption that 10% of total available capital will eventually go to a “straightforward charity” bucket that is reasonably likely to line up fairly well with GiveWell’s work and recommendations. (Note that some capital from other buckets could go to GiveWell recommendations as well, but since the “straightforward charity” bucket operates on a “threshold” basis as described above, this would not change the allocation unless the total from other worldviews exceeded 10%; it is possible that this will end up happening, but we aren’t currently planning around that.)

We further split this 10% into two buckets of 5% each:

The result of all this was a $75 million allocation to GiveWell’s top charities for 2017. As GiveWell stated, “the amount was based on discussions about how to allocate funding across time and across cause areas. It was not set based on the total size of top charities’ funding gaps or the projection of what others would give.”

No more unified benchmark

A notable outcome of the framework we’re working on is that we will no longer have a single “benchmark” for giving now vs. later, as we did in the past. Rather, grants will be compared to the “last dollar” spent within the same bucket. For example, we will generally make “long-termist” grants when they are better (by “long-termist” criteria) than the last “long-termist” dollar we’d otherwise spend.
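A minimal sketch of the per-bucket rule (bucket names and benchmark values are placeholders; each benchmark is expressed in that bucket’s own metric, so the numbers are not comparable across buckets):

```python
# Hypothetical "last dollar" benchmarks, one per bucket, each in that
# bucket's own units of good accomplished per dollar.
last_dollar_value = {
    "long-termist": 1.0,
    "near-termist human-centric": 1.0,
    "near-termist animal-inclusive": 1.0,
    "straightforward charity": 1.0,
}

def should_fund(bucket, estimated_value_per_dollar):
    """Make a grant when it beats the last dollar that would otherwise be
    spent within the same bucket, judged by that bucket's own criteria."""
    return estimated_value_per_dollar > last_dollar_value[bucket]

# Example: a long-termist grant estimated at 1.5 (in long-termist units per
# dollar) clears the long-termist benchmark and would be funded.
print(should_fund("long-termist", 1.5))   # True
```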

We think this approach is a natural outcome of worldview diversification, and will make it far more tractable to start estimating “last dollar” values and making our benchmarks more systematic. It is part of a move from (a) Open Philanthropy making decisions grant-by-grant to (b) Open Philanthropy’s cross-cause staff recommending allocations at a high level, followed by its specialist staff deciding which grants to make within a given cause.

Our future plans for this work

This post has given a broad outline of the framework we are contemplating, and some reasonably likely outputs from this framework. But we have a lot of work left to do before we have a solid approach to cause prioritization.

Over the coming year, we hope to:

This work has proven quite complex, and we expect that it could take many years to reach reasonably detailed and solid expectations about our long-term giving trajectory and allocations. However, this is arguably the most important choice we are making as a philanthropic organization - how much to allocate to each cause in order to best serve the many and varied values we find important. We believe it is much easier to increase the budget for a given cause than to decrease it, and thus, we think it is worth significant effort to come to the best answer we can before ramping up our giving to near-peak levels. This will hopefully mean that the main thing we are deciding on is not which parts of our work to cut (though there may be some of this), but rather which parts of our work will grow the most.


  1. Here “relevant” means “animals that we see significant opportunities to cost-effectively help.” ↩︎

  2. See this footnote. ↩︎

  3. I’ll note that I do not feel there is much of a place for a human-centric view based on (c) the empirical view that nonhuman animals are not “conscious” in a morally relevant way: I have become convinced that there is presently no evidence base that could reasonably justify high confidence in this view, and I think the “default approach” to budget allocation would be appropriate for handling one’s uncertainty on this topic. ↩︎

  4. Adjusted for the degree of improvement. ↩︎

  5. Adjusted for the degree of improvement. ↩︎

  6. Supporting this statement is outside the scope of this post. We’ve previously written about grants that we estimate can spare over 200 hens from cage confinement for each dollar spent; contrasting this estimate with GiveWell’s cost-effectiveness figures can give an idea of where we’re coming from, but we plan to make more detailed comparisons in the future. ↩︎

  7. As noted above, a >10% probability on the animal-inclusive view would lead chickens to be valued >0.1% as much as humans (using the “method A” approach I find most intuitive), which would likely imply a great deal of resources devoted to animal welfare relative to near-term human-focused causes. For now, we bracket the debate over a “long-termist” worldview that could make this distinction fairly moot, though we note it in a later section of this post. ↩︎

  8. Other assumptions could have similar consequences to (a). (a) has the relevant consequences because it implies that improving the expected long-term trajectory of civilization - including reducing the risk of extinction - could do enormous amounts of good. Other population ethics frameworks could have similar consequences if they hold that the size of the intrinsic moral difference between “humanity has an extremely good future (very large numbers of people, very excellent lives, filling a large fraction of available time and space)” and “humanity does not survive the coming centuries” is sufficiently greater than the intrinsic moral importance of nearer-term (next century or so) events. ↩︎

  9. And barring sufficiently radical societal transformations, i.e., one might exclude impact along the lines of “This intervention could affect a massive number of people if the population explodes massively beyond what is currently projected.” ↩︎

  10. Philosophical incommensurability is a challenge for making this sort of determination. We think it’s unclear where and to what extent philosophical incommensurability applies, which is one reason we are likely to end up with a solution between what the assumption of incommensurability implies and what the assumption of commensurability implies. ↩︎

  11. See Greaves and Ord 2017 for an argument along these lines. ↩︎

  12. Examples here, here, here. ↩︎


MaxDalton @ 2022-01-09T10:00 (+2)

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I think that the core idea here (about a few different worldviews you could bet on) is frequently referenced and important. I'm not 100% sure I agree with this approach theoretically, but it seems to have happened practically and be working fine. The overall framing of the post is around OP's work, which maybe would make it seem a bit out-of-place in some sort of collection of articles. 

I think I'd be pro including this if you could do an excerpt that cut out some of the OP-specific context.

Evan R. Murphy @ 2021-10-19T00:13 (+1)
  • We may try to create something similar to what GiveWell uses for its cost-effectiveness analysis: a spreadsheet where different people can fill in their values for key parameters (such as relative credence in different worldviews, and which ones they think should benefit from various fairness agreements), with explanations and links to writeups with more detail and argumentation for each parameter, and basic analytics on the distribution of inputs (for example, what the median allocation is to each worldview, across all staff members).

 

This would be very helpful. I'm having trouble even finding a sheet that prioritizes causes using a static worldview, i.e. one that lists causes with scores for Importance, Neglectedness and Tractability/Solvability and has notes to explain.