My Cause Selection: Michael Dickens

By MichaelDickens @ 2015-09-15T23:29 (+36)

Cross-posted to my blog.

Last edited 2015-09-24.

In this essay, I provide my reasoning about the arguments for and against different causes and try to identify which one does the most good. I give some general considerations on cause selection and then lay out a list of causes followed by a list of organizations. I break up considerations on these causes and organizations into five categories: Size of Impact; Strength of Evidence; Tractability; Neglectedness/Room for More Funding; Learning Value. This roughly mirrors the traditional Importance; Tractability; Neglectedness criteria. I identify which cause areas look most promising. Then I examine a list of organizations working in these cause areas and narrow down to a few finalists. In the last section, I directly compare these finalists against each other and identify which organization looks strongest.

You can skip to Conclusions to see summaries of why I prioritize the finalists I chose, why I did not consider any of the other charities as finalists, and my decision about who to fund.

If you decide to change where you donate as a result of reading this, please tell me! You can send me a message on this site or email me: mdickens93 [at] gmail [dot] com.

TL;DR

I chose these three finalists: the Machine Intelligence Research Institute (MIRI), Animal Charity Evaluators (ACE), and Raising for Effective Giving (REG).

Based on everything I considered, REG looks like the strongest charity because it produces a large donation multiplier and it directs donations to both MIRI and ACE (as well as other effective charities).

General Considerations

Purpose of This Document

To date, my thinking on cause prioritization has been insufficiently organized or rigorous. This is an attempt to lay out all the considerations in my head for and against different causes and organizations and get some clarity about who to support.

This was originally inspired by conversations with Buck Shlegeris about the importance of cause prioritization, which he makes a good case for here:

(Buck makes some non-obvious claims here but I agree with the main thesis that we should spend more effort on cause prioritization.)

EAs spend a tenth as much time discussing cause prioritization as they should. Cause prioritization is obviously incredibly important. If given perfect information you could know that you should be donating to [cause area 1] and you’re actually donating to [cause area 2], then you are doing probably at least an order of magnitude less good than you could be, and I’m only even granting you that much credit because donating to EA charities in [cause area 1] might raise the profile of EA and get more people to donate to [cause area 2] in the future.

If EAs were really interested in doing as much good as they could, then they would want to put their beliefs about cause prioritization under incredible scrutiny. I’m earning to give this year, and I plan to give about 25% of my income. If I could spend a month of my year full time researching cause prioritization, and I thought I was 80% likely to be right about my current cause area, and I thought that this had a 50% chance of changing my cause area from my current cause area to a better one if I were wrong about cause prioritization right now, then it would be worth it for me to do that. […]

If EAs wanted to help others, they would all maintain a written list of all the strongest arguments against their cause areas from other EAs, and they’d all have their list of rebuttals. Ideally, I’d be able to write a really good document on cause prioritization and sell it for $100, because it would save other EAs so much time figuring this out themselves.

What I Value

I value having enjoyable experiences and avoiding unpleasant experiences. If I value these experiences for myself, then it’s reasonable for me to value them in general. That’s the two-sentence version of why I’m a hedonistic utilitarian.

I have a few more specific beliefs that I believe follow from hedonistic utilitarianism but that a lot of people disagree with, so they are worth stating explicitly:

I am not perfectly confident that hedonistic utilitarianism is true–I have some normative uncertainty. At the same time, I do not know what it would mean for hedonistic utilitarianism to be false (I don’t see how suffering could not be inherently bad, and I don’t see how anything other than suffering could be inherently bad). I am open to arguments that it is false, but I am unpersuaded by arguments of the form “utilitarianism produces an unintuitive result in this contrived thought experiment,” and almost all arguments take this form.

Terminology

This document is not optimized to be easy to read for people who aren’t familiar with popular effective altruist causes and organizations and has a lot of jargon and abbreviations. That said, I want people to be able to understand what I’m talking about, so I’m happy to offer clarification on specific terms or concepts in the comments section.

Personal Bias

Although I try to be as cause-neutral as possible, I feel some emotions that push me in the direction of one cause or another. Throughout this document I try to acknowledge any such feelings. This opens me to criticism along the lines of, “Your arguments for this cause are strong but you are emotionally biased against it; you should consider it more carefully.”

My Writing Process

I wrote this document over time as I researched different causes and organizations. I generally speak about choosing a charity in the future tense, because when I wrote most of this, I had not yet chosen where to donate. While reading this, imagine you are exploring the ideas with me, moving through all the major considerations and reaching a decision near the end of the document.

I found the process of writing this extremely valuable. I quickly identified which fundamental questions I needed to answer, and I wrote separate essays to answer a couple of important fundamental questions. Writing this document clarified for me what issues I need to think about when choosing a cause, and I learned a lot about what different organizations are doing and the arguments for and against them. Writing down your mental models is helpful for clarifying them and examining them from a distance.

I spent about 100 hours producing this document and ultimately changed my mind about where to donate. Even making conservative assumptions about how much I will donate and how much better my choice is now than it would have been, this time spent was worth over $100/hour (and I suspect it’s probably worth more like $500/hour). That said, I found this time enjoyable and probably wouldn’t have put in nearly as much work if it hadn’t been fun. For anyone else who finds this sort of work fun, I strongly encourage you to do it and publish your results in detail.
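To make the value-per-hour claim above concrete, here is a minimal back-of-envelope sketch. The 100 hours comes from the post; the donation amount and improvement fraction are placeholder assumptions, not figures the post states:

```python
# Back-of-the-envelope check of the value of cause prioritization research.
# The donation total and improvement fraction are hypothetical placeholders,
# not figures taken from the post.
hours_spent = 100          # hours spent producing the document (from the post)
annual_donation = 20_000   # assumed yearly donation in dollars (placeholder)
improvement = 0.5          # assumed fractional gain from a better charity choice (placeholder)

value_created = annual_donation * improvement   # extra good done, in donation-equivalent dollars
value_per_hour = value_created / hours_spent
print(f"${value_per_hour:,.0f}/hour")           # prints "$100/hour" for these inputs
```

Under these placeholder inputs the research was worth $100/hour; the post's higher $500/hour guess corresponds to assuming a larger donation, a bigger improvement from switching charities, or value accruing over multiple years of giving.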

Related Writings

I had a few ideas that emerged as a result of working on cause prioritization, and I wrote them as separate essays:

In Is Preventing Human Extinction Good?, I examine the likely effects of the long-term survival of humanity and consider whether they are good or bad. I come out in favor of preventing extinction being good in expectation, and more likely good than bad.

In On Values Spreading, I discuss the value of values spreading and come to the (weak) conclusion that preventing global catastrophic risks looks more important.

In Charities I Would Like to See, I propose a few ideas for potentially high-impact interventions.

Things I Still Don’t Understand

I still have a few areas of uncertainty. I took a position on these questions, but I have weak confidence about my position. I’d like to see more work in these areas and will continue to think about them.

Acknowledgments

I have many people to thank for helping me produce this document.

Thanks to Nick Beckstead, Daniel Dewey, Ruairi Donnelly, Eric Herboso, Victoria Krakovna, Howie Lempel, Toby Ord, Tobias Pulver, Joey Savoie, Carl Shulman, Nate Soares, Pablo Stafforini, Brian Tomasik, Emily Cutts Worthington, and Eliezer Yudkowsky for answering my questions about their work and discussing ideas with me.

Thanks to Jacy Anthis, Linda Dickens, Jeff Jordan, Kelsey Piper, Buck Shlegeris, and Claire Zabel for reviewing drafts and helping me develop my thoughts on cause selection.

If I inadvertently left out anyone else, then I apologize, and thanks to you, too.

Causes

Global Poverty

Among global poverty charities, the Against Malaria Foundation (AMF) probably has the strongest case that it saves lives effectively. There’s strong evidence that it helps humans in the short run, but I have some concerns about its larger effects. Does AMF (or other global poverty charities) negatively impact wild animals? Does making humans better off hurt the far future? My best guess on both these questions is “no,” but I have a lot of uncertainty about them, so the case for AMF is not as clear-cut as it first appears. That said, if every potentially high-impact but more speculative cause that I consider has insufficient evidence that it’s effective, I may donate to AMF. I consider this something of a fallback position: AMF is the strongest charity unless another charity can show that it’s better.

I do not discuss global poverty charities in depth here because I do not believe I have much to add to GiveWell’s extensive analysis.

Factory Farming

Some charities such as The Humane League work to prevent animals from suffering on factory farms. There’s a plausible case that some such charities do much more good than GiveWell top charities (perhaps by an order of magnitude or more), although the supporting evidence here is much weaker.

As with global poverty, reducing factory farming may be net harmful in the short run. Reducing factory farming might reduce speciesism and spread good values in the long term, but this claim is highly speculative. Thus it's not a question of comparing speculative far-future causes against proven factory farming interventions: the case for the long-term benefits of reducing factory farming is on no firmer ground than the case for global catastrophic risk charities. I discuss values spreading as a separate cause below.

Charities against factory farming do not serve as a fallback position in the same way that GiveWell top charities do, because the evidence in their favor is a lot weaker. The state of this evidence is improving, and funding studies on animal advocacy could be highly effective; see my discussion of Animal Charity Evaluators.

Far Future (General)

Almost all utility lives in the far future. Thus, it’s likely that the most effective interventions are ones that positively affect the far future. But this line of reasoning has a major problem: it’s not at all obvious how to positively affect the far future. Some, such as Jeff Kaufman, believe this is sufficient reason to focus on short-term interventions instead.

Short-term interventions such as direct cash transfers will always have stronger evidence in their favor than far future interventions. But the far future is so overwhelmingly important that I believe our best bet is to support far future causes whenever we can find charities with reasonably good indicators of their effectiveness (e.g. success at achieving short-term goals or competent leadership). It’s conceivable that we won’t be able to find any sufficiently reliable charities (and this was my impression when I first investigated the issue a few years ago), but it’s worth trying.

I used to prefer short-term interventions with clear supporting evidence–I supported GiveWell top charities and, later, ACE top charities and ACE itself. But after a few conversations with Carl Shulman, Pablo Stafforini, Buck Shlegeris and others, I started more seriously considering the fact that almost all value lives in the far future, and the best interventions are probably those that focus on it. This is not to say that we should donate to whatever charity can give a naive argument that it has the highest expected value. When I discuss specific far-future charities below, I look for indicators on whether their activities are effective. I would not give to a far-future charity unless it had compelling evidence that its activities would be impactful. Obviously this evidence will not be as strong as the evidence in favor of GiveWell top charities, but there still exist far-future charities with better and worse evidence of impact.

We should be cautious about being lenient with a cause area’s strength of evidence. Jeff Kaufman explains:

People succeed when they have good feedback loops. Otherwise they tend to go in random directions. This is a problem for charity in general, because we’re buying things for others instead of for ourselves. If I buy something and it’s no good I can complain to the shop, buy from a different shop, or give them a bad review. If I buy you something and it’s no good, your options are much more limited. Perhaps it failed to arrive but you never even knew you were supposed to get it? Or it arrived and was much smaller than I intended, but how do you know. Even if you do know that what you got is wrong, chances are you’re not really in a position to have your concerns taken seriously.

[…]

[With AI risk, the] problem is we really really don’t know how to make good feedback loops here. We can theorize that an AI needs certain properties not to just kill us all, and that in order to have those properties it would be useful to have certain theorems proved, and go work on those theorems. And maybe we have some success at this, and the mathematical community thinks highly of us instead of dismissing our work. But if our reasoning about what math would be useful is off there’s no way for us to find out. Everything will still seem like it’s going well.

AI risk and other speculative causes don't have good feedback loops, but that doesn't mean we know nothing about whether we're succeeding. And there's reason to believe we should support speculative causes. As Nick Beckstead writes:

My overall impression is that the average impact of people doing the most promising unproven activities contributed to a large share of the innovations and scientific breakthroughs that have made the world so much better than it was hundreds of years ago, despite the fact that they were a small share of all human activity.

The best interventions are probably those that significantly affect the far future, although probably many (or even most) far-future interventions do nothing useful. We should try to improve the far future, but be careful about naive claims of high cost-effectiveness and look for indicators that far-future charities are competent.

Values Spreading to Improve the Far Future

Some people propose focusing on spreading values now to increase the probability that the far future has beneficial results. I have never seen any strong reason to believe that anything we do now will affect far future values–if the case for organizations reducing global catastrophic risks is tenuous, then the case for values spreading is no better.

In Vegan Advocacy and Pessimism about Wild Animal Welfare, Carl Shulman points out that vegan advocacy could be bad for animals in the short-run (although Brian Tomasik believes it has positive short-run effects), so the main benefit comes from values spreading; but the benefits of values spreading are unclear.

I discuss this subject in more depth in On Values Spreading. I conclude that there are good arguments that values spreading is the most effective activity, but there are serious considerations against it, and global catastrophic risk reduction looks more important.

Global Catastrophic Risk Reduction

I include existential risks as a type of global catastrophic risk (GCR). Nick Bostrom has argued that existential risks are substantially worse than other GCRs. Nick Beckstead disagrees, and I find Beckstead’s case persuasive.

It appears to be the case that either (1) working on GCR reduction in general is the best thing to do, in which case there may be multiple different cause areas within GCR that are more effective than any non-GCR causes; or (2) working on GCR is not the best thing to do, in which case all cause areas within GCR are similarly ineffective. (In case (1), lots of GCR interventions may still be ineffective, for example, research on preventing alien hamster invasions.)

Is preventing human extinction good?

This section was getting so in-depth that I moved it into a separate article. In summary: there are a few reasons why the impact of humanity on the far future could be negative; but overall, it looks like humanity's impact has a positive expected value (and will probably be positive), so it's highly valuable to ensure that human civilization continues to exist.

Size of Impact

Preventing global catastrophic risk is a sufficiently important problem that fairly small efforts in the right direction can have much larger long-term effects than GiveWell-recommended charities (see Beckstead’s dissertation “On the Overwhelming Importance of Shaping the Far Future”). Successful GCR interventions probably have a bigger positive impact than anything else except possibly ensuring that the beings controlling the far future have good values (although I believe GCR reduction is probably more important, for reasons discussed above).

Strength of Evidence

Previously, I had major concerns about whether any GCR interventions had any effect. However, with Open Phil’s recent research into GCRs, I am more confident that there will emerge opportunities with sufficiently strong evidence of effectiveness that they make good giving targets. Open Phil has high standards of rigor, and I trust it to recommend interventions that have strong arguments in their favor.

Due to the haste consideration, I want to seriously consider donating to GCR interventions this year or next year. The evidence for the effectiveness of GCR organizations is uniformly much weaker than the evidence for GiveWell top charities, but this does not rule them out as contenders for the best cause area. Their overwhelming importance means I am willing to be more lenient about their strength of evidence than I would be for proximate interventions.

Neglectedness

GCR reduction as a cause is highly neglected right now, but more large donors are showing an interest in the topic, so it’s plausible that it will receive more funding in the future. Even so, funding it now may provide more information for future donors and help the field grow more quickly. Additionally, there’s the haste consideration: we don’t know when a major global catastrophe will occur or how long it will take to prepare for, so we should begin preparing as early as possible. Something like unfriendly AI is probably at least two decades away, but it will probably take more than two decades to develop solid theory around friendly AI.

Tractability

It’s not obvious what counts as evidence that a GCR intervention is working. I discuss this specifically with regard to the individual organizations that I consider.

AI Safety

Preventing unfriendly AI might successfully avert human extinction, which would have an extremely large impact. Furthermore, building a friendly AI is plausibly more important than any other GCR if it enables us to produce astronomically good outcomes that we would not be able to produce otherwise.

Friendly AI and Non-Human Animals

Given that non-human animals may dominate the expected value of the far future, it’s important that an AI gives them appropriate consideration. I discuss this issue in a few places in this document. Here I have a couple of additional quick points:

GCRs Other than AI Safety

I do not know of any good ways to fund organizations putting work into individual GCRs other than AI safety. I agree with Open Phil's assessment that biosecurity and geoengineering are highly promising (although I have only briefly investigated these areas, so much of my confidence comes from Open Phil's position and not from my own research). I see no reason to believe that some GCR other than AI risk is substantially more important: no other GCR looks much more likely to occur than AI risk, and right now it looks much easier to efficiently support efforts to improve AI safety than to support work on other major GCRs. I expect that there's perhaps some geoengineering research worth funding, but I don't have the expertise to identify it, and I don't know how to find geoengineering research that's funding-constrained.

FLI has published a list of organizations working on biotechnology, although if I tried to read through this list and find ones worth supporting, I would do a poor job; if someone with some domain knowledge looked through these to find ones potentially worth donating to, that could be highly valuable. I believe I understand AI safety well enough to roughly assess organizations in the field, but this is not the case with any other GCR.

If I did come to the conclusion that some specific GCR other than AI safety was the most important, I should probably try to use Open Phil’s research to learn more. I discuss this potential giving opportunity in “Open Phil-Recommended GCR Charities” below. I would also encourage other EAs, especially those with some knowledge about some relevant field such as biosecurity, to explore the available options and publicly write about any good giving opportunities they find.

If a sufficiently strong giving opportunity arises in a field of GCR reduction other than AI safety, I will seriously consider it; but at present I don’t see any.

Movement Building

Effective charities that work to reduce GCRs may be many times better than global poverty charities, so organizations that create new EAs may largely be valuable insofar as they create new donations to GCR charities. If I thought global poverty were the best cause, then meta-organizations that attempt to grow the donation base may be even better. But if GCR reduction is vastly more important, then movement-building charities produce most of their value from a small set of donors who support GCR reduction.

It’s possible that there exist movement-building organizations that produce a sufficiently large benefit to outweigh donations to effective GCR charities. I discuss this possibility when looking at individual movement-building organizations below. But in the general case, I expect that donating directly to the best object-level charity will have a higher impact than donating to movement building organizations.

There are a few additional concerns with supporting movement building; Peter Hurford discusses the most important ones here.

Meta-Research

Meta-research (which mostly means charity evaluation, although it also could include things like Charity Science's research on fundraising strategies) potentially has a lot of value if it successfully discovers new interventions with bigger impact or room for more funding than any interventions currently popular among EAs. It's hard to predict when this will actually happen, and it depends on the extent to which you believe EAs have already identified the best interventions. But I'm generally optimistic about efforts to produce new knowledge.

Organizations

Here I briefly discuss the major considerations for and against every organization I have seriously considered. The organizations are grouped roughly by category and otherwise listed in no particular order.

I do not discuss a number of potentially promising organizations because surface signs show that they are unlikely to be the most effective charity, and I couldn’t find good enough information about them to feel confident about donating to them.

Machine Intelligence Research Institute (MIRI)

Emotional disclosure: I feel vaguely uncomfortable about MIRI. Originally I was bothered by Eliezer’s lack of concern for animals and worried that he would make decisions to benefit humans at the expense of other conscious minds; MIRI’s new director Nate Soares does seem to give appropriate value to non-human animals, so this is less of a concern. I also was bothered by how hard it was to tell if it was doing anything good. Today, it is more transparent and produces more tangible results. This second concern may still be significant, but it is over-weighted in my emotional response to MIRI. I still have an intuitive reaction that AI research isn’t as good as actually helping people or animals, but I try to ignore this because I don’t believe it’s rational.

The evidence for MIRI’s effectiveness is considerably weaker than for GiveWell top charities. I have some concerns, but I see a number of reasons to expect that MIRI is succeeding at achieving its short-term goals, which gives me confidence in its organizational competence. It doesn’t have some of the same problems I see with FLI (which I discuss in the separate section on FLI), which makes me prefer MIRI over FLI.

Strength of Evidence

MIRI is trying to improve outcomes in the future, so it’s not clear what qualifies as evidence that MIRI is currently doing a good job. We can’t get direct evidence without predicting the future, so here are a few things I look for:

  1. Its researchers and leadership appear competent and devoted to the problem.
  2. It has high research output and its research is well-regarded by others in the field.
  3. It successfully convinces other AI researchers that alignment is an important problem.
  4. Respected AI researchers endorse MIRI as effective.
  5. It is transparent and makes an effort to publicly disclose its activities, accomplishments, and failures.
  6. Its researchers care about non-human animals.

Competence

Based on personal conversations with MIRI researchers and reading their public writings, I get the impression that they have a strong grasp on which sub-problems in AI safety are important and how to make progress on them. I have only had fairly limited personal interactions with MIRI researchers; the most extensive interaction I had was when I attended a MIRIx workshop where we discussed their paper “Robust Cooperation in the Prisoner’s Dilemma”. The problem this paper attempts to solve has clear relevance to AI safety–we would like superintelligent agents to cooperate with us and with each other on real-world prisoner’s dilemmas–and the paper makes obvious steps toward solving this problem while also outlining what remains unsolved.

More broadly, the items listed on MIRI’s technical agenda look like important and urgent problems. At the very least, MIRI appears competent at identifying significant research problems that it needs to solve. My impression is that MIRI is doing a better job than anyone else at identifying the important problems, although this is difficult to justify explicitly.

We have to consider how competent MIRI is compared to other researchers we could fund: perhaps if some other people were working on the sorts of problems that MIRI works on, they would solve them much more quickly and efficiently. I find this somewhat unlikely. I have read a lot of writings by Eliezer Yudkowsky, Luke Muehlhauser, and Nate Soares (Eliezer is the founder and senior researcher, Luke is the ex-director, and Nate is the current director), and they strike me as intelligent people with strong analytic skills and a good grasp of the AI alignment problem. I briefly looked through the FLI grantees, and MIRI's research plan seems more obviously important for AI safety than those of many of the grantees.

Published research

Although MIRI published little before 2014, it has started publishing more papers since then. I haven't engaged much with its research papers, but a cursory examination shows that they are probably relevant and valuable. It looks like most of MIRI's papers are purely self-published, but a few have been accepted to respected conferences (including AAAI-15), although I don't know how high a bar this is. This is another of MIRI's weak points: there's no clear evidence that other AI researchers respect its publications. MIRI papers are rarely cited by anyone other than MIRI itself, and I would feel more confident about MIRI if it received more citations. This is not a strong negative signal because AI safety is such a small field, but it's certainly not a positive signal either.

Influence

Nate has discussed the fact that AI researchers appear more concerned about safety than they used to be, although it is unclear whether MIRI has had any causal role in bringing this about. Alyssa Vance lists a few prominent academics who are familiar or involved with MIRI’s work. I would like to see this trend continue–AI safety remains a small field, and few AI researchers work on safety full-time.

I don’t know much about this, but my understanding is that in the past year or so, FLI has done a good deal more than MIRI to generate academic interest in AI safety; but MIRI had done more in previous years, and FLI probably wouldn’t exist (or at least wouldn’t be concerned about AI) if it weren’t for MIRI. This suggests that MIRI has done a reasonably good job in the past of raising concern for AI safety, which is a good sign for MIRI’s competence. It certainly could have been much more successful–MIRI has existed for over a decade and AI safety has only recently begun gaining momentum. The idea of AI safety sounds prima facie absurd, so I’d expect it to be hard to convince people that it matters, but perhaps someone other than MIRI could still have done a better job raising concern. (Today FLI seems to be doing a better job, although this may largely come from the fact that MIRI is focusing less on advocacy and more on research.)

Endorsements

Stuart Russell has publicly endorsed the importance of AI safety work and serves as a research advisor to MIRI. The advisory board consists of professors and AI researchers. I don’t know what sort of relationship the advisors have with MIRI or to what extent serving as an advisor acts as an implicit endorsement of MIRI’s competence.

From what I have seen, MIRI is fairly lacking in endorsements from respected AI researchers. I do not know how likely it would be to get endorsements if it were doing valuable work, so I don’t know how concerning this is, but it certainly counts as evidence against MIRI’s effectiveness.

Nate has claimed that when he discusses the problems MIRI is working on with AI researchers, they agree that the problems are important:

I talk to industry folks fairly regularly about what they’re working on and about what we’re working on. Over and over, the reaction I get to our work is something along the lines of “Ah, yes, those are very important questions. We aren’t working on those, but it does seem like we’re missing some useful tools there. Let us know if you find some answers.”

Or, just as often, the response we get is some version of “Well, yes, that tool would be awesome, but getting it sounds impossible,” or “Wait, why do you think we actually need that tool?” Regardless, the conversations I’ve had tend to end the same way for all three groups: “That would be a useful tool if you can develop it; we aren’t working on that; let us know if you find some answers.”

Given that Nate is obviously motivated to believe that AI researchers value the work he’s doing, he could be cherry-picking or misinterpreting people’s claims here (I doubt he would do this deliberately but he may do it subconsciously or accidentally). It’s also possible that people exaggerate how important they believe his research is for the sake of politeness. He does not provide any specific quotes or name any researchers who endorse MIRI’s work as important, so I do not consider his claims here to be strong evidence.

Transparency

MIRI makes some effort to make itself more transparent:

As far as I know, it was not doing any of these things three years ago, so this shows promise.

Even better, it has a detailed guide to what technical problems MIRI is researching and a technical agenda explaining why it works on the problems it does. These materials were published relatively recently, so MIRI is increasing transparency.

Concern for Animals

Strictly speaking, this doesn’t have anything to do with MIRI’s skill at AI safety work, but one of my major concerns with friendly AI research is that it could lead to the development of an AI that benefits humans at the expense of non-human animals. In a separate essay, I come to the conclusion that GCR reduction is probably valuable even considering its impact on non-human animals. Even so, I feel better about people doing AI safety research if they care about animals and are therefore motivated to do research that will not end up harming animals.

I am somewhat more optimistic because Nate Soares, the current director of MIRI, appears to place high value on non-human animals; I have spoken with him about this issue, and he agrees it would be bad if an AI did not respect the interests of non-human animals and that it’s a genuine concern. I briefly investigated the positions of most of the other full-time MIRI employees. From what I can glean from public information, it looks like Rob Bensinger places adequate value on non-human animals and has a good understanding of why it’s silly to not be vegetarian. Patrick LaVictoire apparently cares about animals, and Katja Grace talks as though she cares about animals but I find her arguments against vegetarianism concerning2 (Rob Bensinger has counterarguments). Eliezer Yudkowsky doesn’t believe animals are morally relevant at all. I don’t know about the rest of MIRI’s staff.

Room for More Funding

Based on MIRI’s fundraising goals and current funds raised, I expect that it has substantial room for more funding. It has laid out a fairly coherent plan for how it could use additional funds, and Nate believes it could effectively use up to $6 million. Although I am less confident about its ability to usefully deploy an additional $6 million than an additional, say, $1 million, it is unlikely to raise that much in the near future; I expect it to continue to have a substantial funding gap.

AI safety is attracting considerably more attention: Elon Musk has donated $10 million, and other donors or grantmakers may put more money into the field. This is still fairly uncertain, and I don’t want to count on it happening; plus, I expect MIRI to have a better idea of which problems matter than most AI researchers or grantmakers (MIRI researchers have been working full-time on AI safety for a while), so funding MIRI probably matters more than funding AI safety research in general.

I’m concerned that FLI did not make larger grants to MIRI; this reflects negatively on MIRI’s potential room for funding. I suspect FLI is being too conservative about making grants, but they have more information than I do, so it’s hard to say. This is one of my primary concerns with MIRI. I’ve tried to find out more information about FLI’s decision here, but their grantmaking process involved confidential information, so there’s a limit to what I can learn.

Future of Humanity Institute (FHI) and Centre for the Study of Existential Risk (CSER)

Both of these organizations are potentially high value, but representatives of both organizations have claimed that they are not currently funding constrained.

Neil Bowerman from FHI:

I would argue that FHI is not currently funding constrained….We could of course still use money productively to hire a communications/events person, more researchers and to extend our runway, however at present I would suggest that funding x-risk-oriented movement building, for example through Kerry Vaughan and Daniel Dewey’s new projects, is a better use of funds than donating to FHI for EA-aligned funding. source

Sean O hEigeartaigh from CSER:

We’re not funding constrained in the large at the moment, having had success in several grants. We have good funding for postdoc positions and workshops for our initial projects. Most of our funding has some funder constraints, and so we may need small scale funding over the coming months for ‘centre’ costs that fall between the cracks, depending on what our current funders agree for their funds to cover – one example is an academic project manager position to aid my work. source

Both of these people posted comments on a Facebook thread after Eliezer said these organizations were funding-constrained. Apparently a good way to find information about an organization is to make public, incorrect claims about it.

Edited 2015-09-21 to add: The fact that these organizations claim they don’t have room for more funding makes me more confident that they’re optimizing for actually reducing existential risk rather than optimizing for personal success. If one of them does become substantially funding-constrained in the near future, I consider it fairly likely that it will be the best giving opportunity.

Future of Life Institute

FLI organized the “Future of AI” conference on AI safety and funded AI research projects that cover a somewhat broader range than MIRI’s research does. It plans to expand into biosecurity work, but at the time of this writing it has not gotten beyond the early stages.

Size of Impact

I expect the median FLI grant to be less effective than the same amount of money given to MIRI, but due to its breadth it may hit upon a small number of extremely effective grants that end up making a large difference. That said, the broader approach of FLI looks more reasonable to fund for someone who doesn’t have strong confidence that MIRI is effective at reducing AI risk.

Some of FLI’s AI grants are probably highly effective. However, I find some of them concerning. Some of the research projects attempt to make progress on inferring human values. If the inferred human values are harmful (more specifically, they do not assign sufficient value to non-human animals or other sorts of non-human minds), the AI could produce very bad outcomes such as filling the universe with wild-animal suffering. I think this is more likely not to happen than to happen, but it’s a substantial concern, and it’s an argument in favor of spreading good values to ensure that if AI researchers create a superintelligent AI, they give it good values.

I do not have the same concern with MIRI: I have spoken to Nate Soares about this issue, and he agrees that encoding human values (as they currently exist) in an AI would be a bad idea, in part because it might give insufficient weight to non-human animals.

Room for More Funding

FLI recently received $10 million from Elon Musk and an additional $1 million from the Open Philanthropy Project. From Open Phil’s writeup:

After working closely with FLI during the receipt and evaluation of proposals, we determined that the value of high quality project proposals submitted was greater than the available funding. Consequently, we made a grant of $1,186,000 to FLI to enable additional project proposals to be funded.

It sounds like Open Phil gave FLI exactly as much money as it believed FLI needed to fund the most promising research proposals. This makes me believe that FLI has no room for more funding. Even if FLI had wanted to fund more grants, I don’t believe my donations could actually enable it to do so.

Suppose FLI has $X and would like to have $(X+A+B+C). Open Phil believes FLI should have $(X+A+B). If I do nothing, Open Phil will give $(A+B) to FLI. If I give $A to FLI, Open Phil will give $B, so either way FLI ends up with $(X+A+B). I cannot give money to FLI after Open Phil does because FLI will have finished making grants by then. I believe this model approximately describes the situation during the previous round of grantmaking and probably describes future rounds, so my donations would only serve to reduce the amount of money that Open Phil gives to FLI.
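This displacement (or “funging”) argument can be sketched in a few lines of code. The numbers here are purely illustrative, not FLI’s actual figures; the point is only that a small donor’s gift changes Open Phil’s grant, not FLI’s total.

```python
# Hypothetical sketch of the funding-displacement argument above.
# Open Phil tops FLI up to its target funding level, so a small
# donation only reduces Open Phil's grant; FLI's total is unchanged.

def fli_total(my_donation: float, open_phil_target: float,
              fli_baseline: float) -> float:
    """FLI's final funding when Open Phil fills the gap up to its target."""
    open_phil_grant = max(0.0, open_phil_target - fli_baseline - my_donation)
    return fli_baseline + my_donation + open_phil_grant

# Illustrative numbers: baseline X = 0, Open Phil's target X + A + B = 100.
assert fli_total(0, 100, 0) == 100   # I give nothing: FLI still hits the target
assert fli_total(30, 100, 0) == 100  # I give $30: Open Phil just gives $30 less
```

The model only breaks down if a donation pushes FLI past Open Phil’s target, which is implausible for a small donor.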

Open Phil-Recommended GCR Charities

At the time of this writing, Open Phil has not produced any recommendations on GCR interventions that small donors can viably support, and probably won’t for a while. In fact, it’s not even clear that it has any plans to do so. I looked through Open Phil’s published materials and could not find anything on this.

(Edited 2015-09-16 to clarify.)

But if Open Phil does produce recommendations for small donors, it’s likely that one or some of these recommendations will represent better giving opportunities than any existing GCR charity that I could identify on my own.

Size of Impact

GCRs have possibly the largest impact of any cause area; the case for this has been made before and does not need to be repeated here. Presumably, Open Phil recommendations will have as large an impact as any other organizations working in the GCR space, although it’s fairly likely that Open Phil will not find any organizations that have a higher impact than organizations like MIRI or FHI that are well known in EA circles.

Waiting for Open Phil means losing out on any value that could be generated between now and then, including both direct effects and learning value. The haste consideration weighs heavily in favor of supporting organizations now rather than waiting for Open Phil.

Strength of Evidence

I expect strength of evidence to be the main benefit of Open Phil-recommended organizations over current organizations. Although Open Phil focuses on more speculative causes than GiveWell classic, it still does extensive research into cause areas, and I would expect it to recommend specific interventions if it has strong reason to believe that they are effective. Right now, the organizations working on GCR reduction have only weak evidence of impact, and Open Phil will likely change this.

Room for More Funding

Although Open Phil-recommended GCR organizations may be the best giving opportunities in the world, I have major concerns about their neglectedness. Right now Good Ventures has more money than it knows how to move, and it could fill the room for more funding for all of Open Phil’s recommendations on GCR reduction. If I donate to GCR, it may only displace donations by Good Ventures. I see this as a major argument against waiting for Open Phil recommendations. It’s possible that Open Phil will find massively scalable opportunities in this space, but it does not seem likely that it will find anything so scalable that it can absorb any funds Good Ventures directs at it and still have room for more funding.

GiveWell/Open Philanthropy Project

(Here I use GiveWell to refer to both classic GiveWell and Open Phil.)

Size of Impact

(Edited 2015-09-16 to expand on my reasoning.)

I consider it likely that Open Phil’s work on GCRs will find interventions that are more effective than anything EAs are currently doing. But it seems rather unlikely that their other current focus areas (except possibly factory farming) will produce anything as effective. Over the next 5-10 years the existing institutions working on GCR reduction may run out of room for more funding as the EA movement grows and/or GCR reduction efforts attract more interest, in which case Open Phil-type work of seeking out new interventions would be especially valuable; but I don’t think we’re there yet. It’s also unclear to what extent GiveWell can use additional funds from small donors to produce recommendations more quickly.

If I believe that GCR interventions are much more effective in expectation than most other sorts of interventions (which I do), then Open Phil’s effectiveness gets diluted whenever it works on anything other than GCR reduction. I understand that Open Phil/Good Ventures want to fund a broader range of interventions, and that may make sense for someone with as much money as Good Ventures; but if I believe they are leaving funding gaps in GCR interventions then I can probably have a bigger impact by funding those interventions directly rather than by supporting Open Phil.

Strength of Evidence

GiveWell appears to apply much more rigor and clear thinking to charity analysis than anyone else. I trust its judgment more than my own in many cases. I am concerned that it does not place sufficient attention on sentient beings other than humans. Open Phil recently committed $5 million to factory farming, which I find promising but ultimately much too limited. Good Ventures recently committed to a $25 million grant to GiveDirectly; it’s plausible that the factory farming grant will do perhaps two or three orders of magnitude more good per dollar than the GiveDirectly grant and should be receiving a lot more money. If GiveWell as an organization shared my values about the importance of animals, I might be more likely to support it, but their current spending patterns make me reluctant.

Room for More Funding

(Edited 2015-09-16 to clarify.)

Good Ventures currently pays for a large portion of GiveWell’s operating expenses, and GW has no apparent need for funding. It wants to maintain independence from Good Ventures by keeping other sources of funding, but I do not find this consideration very important. If Good Ventures stops backing them and GiveWell finds itself in need of funding, I will reconsider donating.

I don’t know how much influence Good Ventures has over GiveWell’s activities. I’m considerably less confident in Good Ventures’ cause prioritization skills than GiveWell’s, so I prefer that GiveWell has primary control over Open Phil’s cause selection, although I don’t know how much this matters. I wish GiveWell were more transparent about this.

Animal Charity Evaluators (ACE)

Emotional disclosure: I feel a strong positive affect surrounding ACE, so I may end up overrating its value. The thought of giving money to ACE makes me feel like I’m making a big difference on an emotional level, and this feeling probably biases me in ACE’s favor.

Size of Impact

ACE has three plausible routes to impact that I can see.

  1. It could discover new effective interventions.
  2. It could produce stronger evidence for known interventions and thus persuade more donors to direct money there.
  3. It could produce strong evidence that known interventions are ineffective and thus direct money elsewhere.

On #1: I expect there are highly effective methods of helping animals that are not yet tractable or even well-understood, such as environmental interventions to reduce wild-animal suffering. ACE cares about wild-animal suffering and would likely do research on under-examined and potentially high-impact topics such as this if it had a lot more funding. It’s unlikely that my funding would push ACE over the edge to where it decides to invest more in research of this sort; but it cannot do this research unless it has more funding, and it cannot get more funding unless people like me provide it. ACE is also small enough that if I requested that it do more research in some area, it would probably be willing to entertain the possibility.

On #2: I have met several people who do not donate to animal charities solely because they think the evidence for them is too weak. If ACE produced higher-quality research supporting animal charities, this would almost certainly persuade some people to donate to them; but I don’t know how much money would be directed this way.

On #3: If ACE discovers that popular interventions are much less effective than previous evidence showed, to the point where GW top charities look more effective and more donors start giving to GW charities instead, the maximum impact here would be the impact had by marginal donations to GW top charities. This is a much smaller impact than the plausible impact of effective animal charities. Certainly if current interventions are ineffective then we want to know, but discovering this is much less valuable than discovering that some factory farming intervention is definitively 10x more impactful than a GW top charity. Changing impact from 0x to 1x is much less important than changing from 1x to 10x.

Edited 2015-09-16 to add: ACE may find that some types of interventions are ineffective while others are effective and thus direct funding to the more effective interventions. This would be about as valuable as #2, and possibly more effective because this sort of evidence might be able to move more money. Thanks to Carl Shulman for raising this possibility.

It is unclear how to reason about the probability of #1 and #3 or the effect size of #1 and #2, so I do not know much about the expected impact of each. Number 1 certainly has the highest upside and makes me the most optimistic about the value of donations to ACE. I would like to see rigorous work on interventions to help animals on a massive scale (such as wild animals or animals in the far future). Right now, we’re nowhere close to being able to produce this sort of work, but the best way I can see to push us in that direction is to support ACE.

As I explain in “Is Preventing Human Extinction Good?”, I see good reason to be optimistic about the long-term impact of humanity on all sentient life. In the words of Carl Sagan, “If we do not destroy ourselves, we will one day venture to the stars” (and biologically engineer animals to be happy). Thus it looks like ensuring humanity continues to survive is more important than reducing wild-animal suffering in the medium-term future.

It’s possible that spreading concern for wild animals will have a massive effect on the far future; but it’s not at all clear that ACE research will ever have this effect, even if it does research on reducing wild animal suffering. I discuss my general concerns with values spreading in “On Values Spreading”. Even so, I believe ACE has a decent chance of being the most effective charity. It’s not too unlikely that if ACE had substantially more funding, it would find an intervention or interventions that are more effective than anything that currently receives funding. This makes supporting ACE look like a promising option.

Strength of Evidence

ACE does not have as strong a reputation as GiveWell, although it is a much newer and smaller organization so this is to be expected. The interactions I have had with employees and volunteers at ACE have left me with a strong positive impression of their competence and concern for the problems they are attempting to solve. Their research results have not been nearly as in-depth as GiveWell’s, but ACE acknowledges this. This is largely a product of the lack of studies that have been done on animal advocacy. ACE is making some efforts to improve the state of research, and these efforts look promising. I spoke to Eric Herboso3 about this, and he had clearly put some thought into how ACE can improve the state of research.

Room for More Funding

ACE appears to have strong ability to absorb more funding. Right now it has a budget of only about $150,000 a year–not nearly enough to do the sort of large randomized controlled trials that it wants. I expect ACE could expand its budget several-fold without having much diminishing marginal effectiveness. Additionally, if it expanded, it could broaden its scope, putting more effort into researching wild animal suffering or other speculative but potentially high-impact causes.

Learning Value

ACE does research and publicly publishes its results, so I believe donations to ACE have particularly high learning value. Peter Hurford has argued “when you’re in a position of high uncertainty, the best response is to use a strategy of exploration rather than a strategy of exploitation.” I expect donations to ACE to produce more valuable knowledge than donations almost anywhere else, which makes me optimistic about the value of donations to ACE. In particular, I expect ACE to produce substantially more valuable research per dollar spent than GiveWell.

Animal Ethics (AE) and Foundational Research Institute (FRI)

Both these organizations do high-level research and values spreading for fairly unconventional but important values like concern for wild animals. I wouldn’t be surprised if one of these turned out to be the best place to donate, but I don’t know much about their activities or room for more funding and I’ve had difficulty finding information. The only thing I can see them publicly doing is publishing essays. While I find these essays valuable to read, I don’t have a good picture of how much good this actually does.

A note to these organizations: if you were more transparent about how you use donor funds, I would more seriously consider donating.

Giving What We Can (GWWC)

I’m skeptical about the value of creating new EAs because the 2014 EA survey showed that the average donation size was rather small. However, Giving What We Can members are probably substantially better than generic self-identified EAs because GWWC carefully tracks members’ donations. I can’t find any more recent data, but from 2013 it looks like members have a fairly strong track record of keeping the pledge.

At present, only a tiny fraction of GWWC members’ donations go toward GCR-reduction or animal-focused organizations, which may be much higher value than global poverty charities. Based on GWWC’s public data, it has directed $92,000 to far-future charities so far (and apparently $0 to animal charities, which I find surprising). If we extrapolate from GWWC’s (speculative) expected future donations, current members will direct about $287,000 to far-future charities. That’s less than GWWC’s total costs of $443,000, but the additional donations to global poverty charities may make up for this. But I’m skeptical that GWWC will have as large a future impact as it expects to have (a 60:1 fundraising ratio seems implausibly high), and it’s not clear how many of its donations would have happened anyway. I know a number of people who signed the GWWC pledge but would have donated just as much if they hadn’t. (I don’t know how common this is in general.) Additionally, I don’t see a clear picture of how donations to GWWC translate into new members. GWWC might raise more money than Charity Science or Raising for Effective Giving (both discussed below), but I have a lot more uncertainty about it, which makes me more skeptical.

These various factors make me inclined to believe that directly supporting GCR reduction or high-learning-value organizations will have greater impact than supporting GWWC.

Charity Science

Charity Science has successfully raised money for GiveWell top charities (it claims to have raised $9 for every $1 spent) through a variety of fundraising strategies. It has helped individuals run Christmas fundraisers and created a Shop for Charity browser extension that allows you to donate 5% of your Amazon purchases at no cost to you. It has plans to explore other methods of fundraising such as applying the REG model to other niches and convincing people to put charities in their wills.

Size of Impact

Right now Charity Science focuses on raising money for GiveWell top charities. Its fundraising model looks promising–it tries a lot of different fundraising methods, so I think it’s likely to find effective ones–but I expect that the best charities are substantially higher-impact than GiveWell top charities, so this leads me to believe that donations to Charity Science are not as impactful as donations to highly effective far-future-oriented charities. I spoke with Joey Savoie, and he has considered doing research on effective interventions to help non-human animals. This is promising, and I may donate to Charity Science in the future if it ever focuses on this, but for now its activities look less valuable than ACE or REG (see below).

Edited 2015-09-16 to add: Carl Shulman points out that Charity Science’s 9:1 fundraising ratio substantially undervalues the opportunity cost of staff time, so the effective fundraising ratio is less than this. This looks like a bigger problem for Charity Science than for the other fundraising charities I consider.

Room for More Funding

Based on Charity Science’s August 2015 monthly report, it looks like it could use new funding to scale up and broaden its activities. It has enough ideas about activities to pursue that I believe it could deploy substantially more funds without experiencing much diminishing marginal utility.

Learning Value

Donations to Charity Science will likely have high value in terms of learning how to effectively raise funds. I’m uncertain about how valuable this is; I feel more confident about the value of learning about object-level interventions and I’m somewhat wary of movement growth as a cause, largely for reasons Peter Hurford discusses here.

Raising for Effective Giving (REG)

Size of Impact

In 2014, REG had a fundraising ratio of 10:1, about the same as Charity Science’s. I am somewhat more optimistic about the value of REG’s fundraising than Charity Science’s because REG has successfully raised money for far future and animal charities in addition to GiveWell recommendations. For details, see REG’s quarterly transparency reports. In the conclusion, I look at REG’s fundraising in more detail (including how much it raises for far future and animal charities) to try to assess how much value it has.

Strength of Evidence

Edited 2015-09-21.

The case for REG’s effectiveness appears pretty straightforward: it has successfully persuaded lots of poker players to donate money to good causes. Along with other movement-building charities, REG faces a concern about counterfactuals: how many of REG-attributed donations would have happened anyway? I believe this is a serious concern for Giving What We Can–many people who signed the pledge would have donated the same amount anyway (I’m in this category, as are many of my friends).

REG’s case here looks much better than the other EA movement-building charities I’ve considered. REG focuses its outreach on poker players who were previously uninvolved in EA for the most part. Even if they were going to donate substantial sums prior to joining REG, they almost certainly would have given to much less effective charities.

Room for More Funding

REG is small and has considerable room to expand. They have specific ideas about things they would like to do but can’t because they don’t have enough money. I expect REG could effectively make use of an additional $100,000 per year and perhaps considerably more than that. This is not a lot of room for more funding (GiveWell moves millions of dollars per year to each of its top charities), but it’s enough that I expect REG could effectively use donations from me and probably from anyone else who might decide to donate to them as a result of reading this.

REG receives funding through the Effective Altruism Foundation (EAF, formerly known as GBS Switzerland); you can donate through REG’s donations page, and the funds are earmarked for REG.

Learning Value

Added 2015-09-17.

REG looks less exploratory than Charity Science so it probably has worse learning value, but it’s still pursuing an unusual fundraising model with a lot of potential to expand (especially into other niches). REG appears to have fairly strong learning value, and I want to see what sorts of results it can produce moving forward.

Other Organizations

I know of a handful of other organizations that might be highly effective but about which I don’t have much to say. For these, I don’t have a strong sense of whether what they do is valuable, and they look sufficiently unlikely to be the best charity that I didn’t think they were worth investigating further at this time. I have included a brief note about why I’m not further investigating each charity.

Conclusions

I have selected three finalist charities that are all plausibly the best, but they are in substantially different fields and therefore difficult to compare.

Brief explanations for charities I’m not supporting

Here I list all the charities I considered that are not finalists and briefly explain why I have chosen not to support them.

Finalist Comparison

I have narrowed the list of considered charities to three finalists:

Here I give the advantages of each of them over the others.

In Favor of MIRI over ACE

In Favor of ACE over MIRI

In Favor of REG: Weighted Donation Multiplier

Edited 2015-09-17.

To get an idea of the value of REG’s fundraising, I looked at the charities for which they have raised money and assigned weightings to them based on how much impact I expect they have. I created two different sets of weightings: one where I assume AI safety is the most impactful intervention (with MIRI as the most effective charity) and one where I assume animal welfare/values spreading is highest leverage (with ACE as the most effective charity). The AI model reflects my current best guesses, but I created the animal model to see what sorts of results I would get.

This table shows how much money REG raised in each category over its four quarters of existence to date (in thousands of dollars), taken from its lovely transparency reports:

Name 2014Q3 2014Q4 2015Q1 2015Q2 Total
GBS 18 126 4 30 178
ACE 0 25 0 0 25
animal (veg) 0 100 0 0 100
speculative 0 25 0 20 45
MIRI 0 0 0 53 53
Other 20 93 53 50 216
Total 38 369 57 153 617

I used these fundraising numbers and assumed REG’s expenses through 2015Q2 are $100,000, extrapolating from 2014’s expenses of $52,318.

For my two models I used the following weights:

Category AI-Model Weight Animal-Model Weight
GBS 0.2 0.2
ACE 0.5 1
veg advocacy 0.2 0.3
speculative 0.3 0.5
MIRI 1 0.2
Other 0.1 0.1

For GBS, I conservatively assume that all money directed toward GBS goes to activities other than REG (and I give these activities a weight of 0.2). Accounting for GBS funding going back to REG involves some complications so to be conservative I ignore any compounding effects that occur this way.

It’s not unlikely that the categories in this model vary much more in effectiveness than the weights I have listed here suggest. I decided to keep all the weights relatively close together because I do not have strong confidence about how much good each of these categories does. I might be able to make an inside-view argument that, say, MIRI is 1000x more effective than anything else on this list, but from the outside view, I shouldn’t let such an argument carry too much weight.

In the AI/MIRI model, I found that $10 of REG expenditures produced about $16 of weighted donations; in the ACE/animal model, every $10 spent produced $15 of weighted donations. This means that $10 to REG produced about $16 in equivalent donations to MIRI in the first model, and $15 in equivalent donations to ACE in the second model.4
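As a check, these figures can be reproduced from the two tables above. The script below uses the essay’s fundraising totals and the assumed $100,000 expense figure; the weights are my rough guesses, not measured quantities.

```python
# Reproduce the weighted donation multipliers from the tables above.
# Totals raised per category, 2014Q3-2015Q2, in thousands of dollars.
raised = {"GBS": 178, "ACE": 25, "veg": 100, "speculative": 45,
          "MIRI": 53, "Other": 216}

weights = {
    "AI model":     {"GBS": 0.2, "ACE": 0.5, "veg": 0.2,
                     "speculative": 0.3, "MIRI": 1.0, "Other": 0.1},
    "Animal model": {"GBS": 0.2, "ACE": 1.0, "veg": 0.3,
                     "speculative": 0.5, "MIRI": 0.2, "Other": 0.1},
}

expenses = 100  # assumed REG expenses through 2015Q2, in thousands

for name, w in weights.items():
    weighted = sum(raised[k] * w[k] for k in raised)
    print(f"{name}: ${10 * weighted / expenses:.1f} "
          f"weighted donations per $10 spent")
# AI model:     $15.6 per $10 spent (rounds to "$16" in the text)
# Animal model: $14.5 per $10 spent (rounds to "$15" in the text)
```

The $15.6 figure also matches the roughly 1.5:1 weighted fundraising ratio discussed below.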

When we weight the charities that REG has produced donations for, its fundraising ratio drops from 10:1 to a much more modest 1.5:1. Donating to REG instead of directly to an object-level charity produces an additional level of complexity, which means my money has more opportunities to fail to do good. A 1.5:1 fundraising ratio is probably high enough to outweigh my uncertainty about REG’s impact, but not by a wide margin.

But there’s another argument working in REG’s favor. I have considerable uncertainty about whether it’s more important to support values spreading-type interventions like what ACE or Animal Ethics does, or to support GCR reduction like MIRI. GCR reduction looks a little more important, but it’s a tough question. The fact that REG produces a greater-than-one multiplier using both a MIRI-dominated model and an ACE-dominated model means that if I donate to REG, I produce a positive multiplier either way. If I choose to donate to either MIRI or ACE, I could get it wrong; but if I donate to REG, in some sense I’m guaranteed to “get it right” because donations to REG probably produce greater than $1 in both MIRI-equivalent and ACE-equivalent donations.

I don’t want to put too much value on this fundraising ratio because there are various reasons why it could be off by a lot. It appears to show that REG fundraising is valuable even if you discount most of the charities it raises money for, which was my main intention. This alone is not sufficient to demonstrate REG’s effectiveness to my mind, but its leadership looks competent and its model has reasonably strong learning value.

A caveat: just because REG has raised a lot of funds for MIRI and animal charities in the past doesn’t mean it will continue to do so. But it raised these funds from a number of different people and over multiple quarters, so there is good reason to believe that it will continue to find donors interested in supporting MIRI and ACE/animal charities. Additionally, Ruairi Donnelly, REG’s Executive Director, has said to me in private communication that REG is meeting with more donors who want to fund far-future oriented work and that he hopes REG will move more money to these causes in the future.

There’s a concern about whether REG will continue to raise as much money per dollar spent as it has in the past. I expect REG to experience diminishing returns, although it is a new and very small organization so returns should not diminish much in the near future. I don’t have a strong sense for the size of the market of poker players who might be interested in donating to effective causes. It looks considerably bigger than REG’s current capacity so REG has some room to scale up, but I don’t know how long this will continue to be true. If REG’s fundraising ratio dropped to 5:1 and it didn’t increase funding to far-future charities, I would probably not donate to it; but it seems unlikely that it will drop that much in the near future.

Decision

Edited 2015-09-24. I had previously written that I didn’t know if I was going to donate this year.

Based on all these considerations, it looks like Raising for Effective Giving is the best charity to fund. My main concern here is falling into a meta trap. One possible solution here is to split donations 50/50 between meta- and object-level organizations. If I were to do this, I would give 50% to REG and 50% to MIRI. But I believe the EA movement could afford to be more meta-focused right now, so I feel comfortable giving 100% of my donations to REG.

I plan on directing my entire donation budget this year to REG. I will make the donation by the end of October unless I am persuaded otherwise by then. I am continuing to seek out reasons to change my mind and I’m open to criticism and to arguments that I should give somewhere else.

How to Donate

Added 2015-09-21.

You can donate to REG through the GBS Switzerland website (English, Swiss). If you live in the United States, you can make your donation tax-deductible by giving to GiveWell and asking it to forward the money to REG.

Where I’m Most Likely to Change My Mind

Added 2015-09-21.

I’ve had conversations with people who believe each of these, and while I’m unpersuaded right now, I find their positions plausible.

Notes

  1. REG’s fundraising ratio is less than 1:1 for both MIRI and ACE, but I still consider it more valuable than direct donations to either MIRI or ACE individually. I explain why in the section on Raising for Effective Giving and in the conclusion.

  2. How to assess whether a person gives adequate concern to non-human animals could be the subject of an entire additional essay, but I don’t have a clear enough picture of how to do this to write well on the subject. My general impression is that people who claim to care about animals but have some justification for non-vegetarianism probably don’t actually care as much about animals as they say they do. They sometimes claim that the time and effort spent not eating animal products could be better spent donating to efficient charity (or something), but then don’t make trivial but hugely beneficial choices such as eating cow meat instead of chicken meat. I’m somewhat more convinced by people who eat animals but donate a lot of money to charities like The Humane League; I understand that vegetarianism is harder for some people than others, but actions signal beliefs more strongly than words do.

  3. Eric Herboso used to work at ACE as the Director of Communications; he’s currently earning to give while volunteering for ACE part time.

  4. It’s probably a coincidence that both models ended up with about the same weighted fundraising ratio. MIRI received about as much funding as ACE plus speculative non-human-focused charities, so these balance out in the two models.


CarlShulman @ 2015-09-17T02:30 (+11)

It's good that you are sharing the research effort you put into this so that others can critique it, use/reference it, and build on it.

I have assorted comments below with quotes they are responding to.

But after a few conversations with Carl Shulman, Pablo Stafforini, Buck Shlegeris and others, I started more seriously considering the fact that almost all value lives in the far future, and the best interventions are probably those that focus on it.

Some context for my view: while I think there is a strong case for this in terms of additive total utilitarianism, I think the dominance is far weaker when one takes value pluralism and normative uncertainty into account. GiveWell staff have sometimes talked about whether decisions would be recommended if one valued the entire future of civilization at a 'mere' 5 or 10 times the absolute value of a century of the world as it is today. Value pluralism is one reason to apply such a heuristic.

Nick Bostrom has argued that existential risks are substantially worse than other GCRs. Nick Beckstead disagrees, and I find Beckstead’s case persuasive.

The argument there being that for most risks GCR versions are much more likely than direct existential risk versions, and GCRs have some chance of knock-on existential harms. Note that AI risk was excepted there, and has been noted as unusual in having a closer link between GCR and existential risk than others.

organizations that create new EAs may largely be valuable insofar as they create new donations to GCR charities

New staff members and entrepreneurs are very important in many cases. E.g. the EA movement has supplied a lot of GiveWell/OpenPhil staff, and founders for things like Charity Science and ACE which you mention.

Nate has discussed the fact that AI researchers appear more concerned about safety than they used to be, although it is unclear whether MIRI has had any causal role in bringing this about.

Some of this is definitely the recent surge of progress in AI, e.g. former AAAI president Eric Horvitz mentions that this was important for him and others.

For MIRI's causal influence some key elements I would highlight are:

It sounds like Open Phil gave FLI exactly as much money as it believed it needed to fund the most promising research proposals.

ETA: Open Philanthropy has just put up a detailed summary of the reasoning behind the FLI grant, which may be helpful. They also talk about why they have raised their priority for work on AI in this post.

This is an issue that will recur in any area where OpenPhil/GiveWell is active (which will shortly include factory farming with the new hire and grant program). Here are two of my posts discussing the issues (the first has important comments from Holden Karnofsky about their efforts to manage 'fungibility problems').

One quote from a GiveWell piece:

If you have access to other giving opportunities that you understand well, have a great deal of context on and have high confidence in — whether these consist of supporting an established organization or helping a newer one get off the ground — it may make more sense to take advantage of your unusual position and "fund what others won't," since GiveWell's research is available to (and influences) large numbers of people.

Also, you likely won't have zero effect, but would likely shift the budget constraint, so you could think of your donation as expanding all of Good Ventures' grants roughly in proportion to their size, which will be diversified and heavy on GiveDirectly. Or at least you could do that if they all had similar diminishing returns curves. If some have flatter curves (perhaps GiveDirectly) in Good Ventures' calculus then marginal funds would go disproportionately to those.

But if Open Phil does produce recommendations for small donors, it’s likely that one or some of these recommendations will represent better giving opportunities than any existing GCR charities.

That's a surprising claim. Probably it would recommend an existing charity. Maybe what you mean is that your expected value for any given GCR charity given what you know now is less than your expectation would be for the charities OpenPhil will recommend, given knowledge of those recommendations?

Or maybe you mean that OpenPhil's recommendations are likely to be charities that exist but that you currently don't know of?

My comment was too long to fit in the 1000 word limit, so the remainder is below.

CarlShulman @ 2015-09-17T02:30 (+9)

My comment was too long, so here's the rest:

it’s plausible that the factory farming grant will do perhaps two or three orders of magnitude more good per dollar than the GiveDirectly grant and should be receiving a lot more money

Is this something like unweighted QALYs per dollar? If you are analyzing in terms of long-run effects on the animal population, as elsewhere in the piece, those QALYs are a red herring. E.g. a tiny increase in economic activity very faintly expediting economic growth will overwhelm the direct QALYs involved with future populations. From the long-run point of view things like the changes in economic output, human populations, carbon emissions, human attitudes about other animals, and such would be the relevant metrics and don't scale with QALYs (this is made blatantly clear if one considers things like flies and ants). From the tiny-animal focus (with no accounting for differences in nervous system scale), the large farm animals will be negligible compared to various effects on tiny wild animals. If one considers neural processes within and across animals, then the numbers will be far less extreme.

Now, as I said at the start of this comment, normative pluralism and such would suggest not allowing complete dominance of long-run QALYs over current ones, but comparisons in terms of QALYs here don't track the purported long-run impacts, and if one focused only on unweighted animal QALYs without worrying about long-run consequences it would lead one away from farm animals towards wild animals.

Good Ventures currently pays for GiveWell’s operating expenses,

Not true. Previously GiveWell had capped Good Ventures contributions at 20% of GiveWell's budget. Recently they changed it to 20% for GiveWell's top charities work, and 50% for the Open Philanthropy Project (reasoning that Good Ventures is the main customer of the latter at this time, so it is reasonable for it to bear a larger share).

I don’t know how much influence Good Ventures has over GiveWell’s activities. I’m considerably less confident in Good Ventures’ cause prioritization skills than GiveWell’s, so I prefer that GiveWell has primary control over Open Phil’s cause selection, although I don’t know how much this matters. I wish GiveWell were more transparent about this.

Well nothing is going to force Good Ventures to hand over billions of dollars if it disagrees with the OpenPhil recommendations (and last year there was some disagreement between GW and GV about allocations to the different global poverty charities). But this does seem like a serious consideration to support outside donation to OpenPhil, and I think you may be underrating this donation option.

If ACE discovers that popular interventions are much less effective than previous evidence showed, to the point where GW top charities look more effective and more donors start giving to GW charities instead, the maximum impact here would be the impact had by marginal donations to GW top charities.

You only consider the case where it finds that all the current popular animal interventions are very poor. If many or most but not all are, then it could support productive reallocation from the ones that don't work as well to the ones that work better, potentially multiplying effectiveness severalfold. That's in fact the usual justification given by people in the animal charity community for doing this kind of research, but doesn't appear at all here. So I think the whole discussion of #3 has gone awry. Also the 'several orders of magnitude' claim appears again here, and the issues with QALYs vs metrics that better track long-run changes (e.g. attitude changes, population changes, legal changes) recur.

Charity Science has successfully raised money for GiveWell top charities (it claims to have raised $9 for every $1 spent)

Although note that that is valuing staff time at below minimum wage. If you valued it at closer to opportunity cost (or salaries at other orgs) the ratio would be far lower. I still think Charity Science is promising and deserving of support because of the knowledge it has produced, and I suspect its fundraising ratios will improve, but at the moment the ratio of EA resources put in to fundraising success is still on the lower end. See this discussion on the EA facebook group.

I decided to keep all the weights relatively close together because I do not have strong confidence about how much good each of these categories do. I might be able to make an inside-view argument that, say, MIRI is 1000x more effective than anything else on this list, but from the outside view, I shouldn’t let such an argument carry too much weight.

Shouldn't the same caveat apply to your suggestions earlier in the post about the future being 1000+ times more important than present beings?

undefined @ 2015-09-17T21:15 (+7)

These comments are copied from some of the original ones I made when reviewing Michael's post. My views are my own, not GiveWell's.

I have never seen any strong reason to believe that anything we do now will affect far future values–if the case for organizations reducing global catastrophic risks is tenuous, then the case for values spreading is no better.

I think the case for values spreading is quite a bit better. Reducing global catastrophic risks is pretty bimodal: either the catastrophe happens, or it doesn't. You can try to measure the risk being reduced, sometimes, but doing so isn't straightforward, obvious, or something we have experience in.

We have lots of experience tracking value change. We can see it happen in incremental parts in the near-future. You don't need special tools or access to confidential information to do a decent poll on values changing.

The strongest objection to this, I think, is that values changing in the short term won't necessarily affect the long-term trajectory of our values, or at least not in a predictable way. In contrast, preventing an x-risk in the short term at least allows for the possibility of doing stuff in the far future (and it seems plausible that GCRs might also change far-future trajectory).

Another consideration is that values may become vastly more or less mutable if we develop technology that allows for certain types of self-modification, or an AI that enforces values that are programmed into it. Depending on how you believe this might happen, you might believe spreading good values before those technologies develop is vastly more or less important, exactly because then the likelihood of those values affecting the far future increases.

I do not see reason to believe that some GCR other than AI risk is substantially more important; no GCR looks much more likely than AI risk, and right now it looks much easier to efficiently support efforts to improve AI safety than to support work on other major GCRs.

I think a lot of GCRs could be more tractable than AI risk (possibly by a large margin) if someone went through the work of identifying more opportunities to fund risk reduction for those GCRs, then made it available to small donors.

undefined @ 2015-09-17T21:20 (+4)

I think a lot of GCRs could be more tractable than AI risk (possibly by a large margin) if someone went through the work of identifying more opportunities to fund risk reduction for those GCRs, then made it available to small donors.

This is definitely an important point. I think that if someone did identify opportunities like this, that's one of the most likely reasons why I might change where I donate. Right now it doesn't look like any GCR is substantially more important/tractable/neglected than AI risk (biosecurity is probably a bigger risk but not by a huge margin, geoengineering might be more tractable but not for small donors), but this could change in the future.

undefined @ 2015-09-18T02:14 (+6)

Thanks for writing this, Michael. More people should write up documents like these. I've been thinking of doing something similar, but haven't found the time yet.

I realized reading this that I haven't thought much about REG. It sounds like they do good things, but I'm a bit skeptical re: their ability to make good use of the marginal donation they get. I don't think a small budget, by itself, is strong evidence that they could make good use of more money. Can you talk more about what convinced you that they're a good giving opportunity on the margin? (I'm thinking out loud here, don't mean this paragraph to be a criticism.)

Re: ACE's recommended charities. I know you know I think this, but I think it's better for the health of ACE if their supporters divide their money between ACE and its recommended charities, even if the evidence for its recommended charities isn't currently as strong as I'd like. But I admit this is based on a fuzzy heuristic, not a knock-down argument.

Re: MIRI. Setting aside what I think of Yudkowsky, I think you may be overlooking the fact that "competence" is relative to what you're trying to accomplish. Luke Muehlhauser accomplished a lot in terms of getting MIRI to follow nonprofit best practices, and from what I've read of his writing, I expect he'll do very well in his new role as an analyst for GiveWell. But there's a huge gulf between being competent in that sense, and being able to do (or supervise other people doing) groundbreaking math and CS research.

Nate Soares seems as smart as you'd expect a former Google engineer to be, but would I expect him to do anything really groundbreaking? No. Would I expect even the couple of actual PhDs MIRI hired recently to do anything really groundbreaking? They might, but I don't see why you'd think it likely.

In a way, it was easier to make a case for MIRI back when they did a lot of advocacy work. Now that they're billing themselves as a research institute, I think they've set a much higher bar for themselves, and when it comes to doing research (as opposed to advocacy) they've got much less of a track record to go on.

undefined @ 2015-09-21T01:57 (+8)

I realized reading this that I haven't thought much about REG. It sounds like they do good things, but I'm a bit skeptical re: their ability to make good use of the marginal donation they get. I don't think a small budget, by itself, is strong evidence that they could make good use of more money. Can you talk more about what convinced you that they're a good giving opportunity on the margin? (I'm thinking out loud here, don't mean this paragraph to be a criticism.)

Thanks for bringing this up, Topher!

As Michael said, there are various things we would do if we had more funding.

1) REG’s ongoing operations need to be funded. Currently, we have around 6 months of reserves (at the current level of expenses), but ideally we would like to have 12 months. This would enable us to make use of more (sometimes unexpected) opportunities and to try things because we wouldn’t have to constantly be focused on our own funding situation.

2) We could potentially achieve (much) better results with REG by having additional people working on it. The best illustration of this is probably one person we met (by going to poker stops) who has a strong PR & marketing background and has been working in the poker industry for 10 years now (few people have that level of expertise and that kind of network in the poker world). This person would like to work with us, but we had to decline her for the moment, even though we think it would (clearly) be worth it to hire her. We would also like to hire someone to organise more charity tournaments, establish partnerships with industry-leading organisations or strengthen existing ones, improve member communications and do social media. There are already several candidates who could do this, but we are hesitant to make this investment since we lack the appropriate funding.

3) Another way we would use additional funds is by working on various REG "extensions". We are about to set up two REG expansions, but we won't have enough resources to make the most of even these two – and there are many more potentially really promising REG expansions that could be pursued. (The first of the two, which will likely be announced to the respective community in a few days, is "DFS Charity", a REG for Daily Fantasy Sports – an industry that is currently growing substantially and has a fair share of people with a quantitative mindset similar to poker players'. The preliminary website can be found at dfscharity.org – please don't share it widely yet.)

I hope this helped!

undefined @ 2015-09-20T21:12 (+2)

In a way, it was easier to make a case for MIRI back when they did a lot of advocacy work. Now that they're billing themselves as a research institute, I think they've set a much higher bar for themselves, and when it comes to doing research (as opposed to advocacy) they've got much less of a track record to go on.

To put this in context, the emerging consensus is that publicly advocating for x-risk reduction in the area of AI is counterproductive, and it is better to network with researchers directly, something that may be best done by performing relevant research.

undefined @ 2015-09-16T21:47 (+4)

Thanks so much for writing this. I agree with your arguments and I find your conclusion fairly persuasive.

undefined @ 2015-09-28T15:32 (+3)

Global Priorities Project: insufficiently transparent about activities

I wanted to say thank you for this. There's always a tradeoff between reporting on what you're doing and getting on and doing more stuff, but this was a good reminder to look at whether we're getting the balance right, and I think we're going to devote a bit more effort to transparency.

undefined @ 2015-09-18T19:03 (+3)

Great post! Just a quick clarification: I definitely think AR research is worth doing, but it would be better under a different organization/brand/startup. I think it's valuable to keep an organization fairly focused on doing a few things well, and AR research is definitely not in the CS scope.

undefined @ 2015-09-22T05:46 (+2)

Really quick question: I was wondering why the 1.5:1 ratio is enough to outweigh your uncertainty about REG's impact?

undefined @ 2015-09-22T05:54 (+3)

Well that's certainly a concern. I'm made more confident by the fact that REG directs funding to multiple charities that are good candidates for top charity, and I believe their model has reasonably good learning value. Plus, 1.5:1 is sufficiently higher than 1:1 that I believe it's more likely to have a positive multiplicative effect from the outside view.

undefined @ 2015-09-22T06:02 (+2)

I'm not sure I understand. I would think that in the face of uncertainty it would be better to divide donations in accordance to how likely we find each model.

undefined @ 2015-09-22T05:49 (+2)

Surely that depends on the level of uncertainty?

WilliamKiely @ 2020-07-26T23:34 (+1)

I read this post today after first reading a significant portion of it around December 2nd, 2019. I'm not sure what my main takeaways from reading it are, but I wanted to comment to say that it's the best example I currently know of of someone explaining their cause prioritization reasoning when deciding where to donate. Can anyone point me to more or better examples of people explaining their cause prioritization reasoning?

WilliamKiely @ 2020-07-26T23:34 (+1)

Some other related links I found helpful:

Vipul Naik's "My 2018 donations": https://forum.effectivealtruism.org/posts/dznyZNkAQMNq6HtXf/my-2018-donations

Adam Gleave's "2017 Donor Lottery Report": https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report

Brian Tomasik's "My Donation Recommendations": https://reducing-suffering.org/donation-recommendations/

https://forum.effectivealtruism.org/posts/Z6FoocxsPfQdyNX3P/where-some-people-donated-in-2017

undefined @ 2015-11-10T04:48 (+1)

I plan on directing my entire donation budget this year to REG. I will make the donation by the end of October unless I am persuaded otherwise by then.

What was your final decision on this?

undefined @ 2015-11-10T16:02 (+3)

I made the donation to REG about a week ago.

undefined @ 2015-09-23T14:20 (+1)

Another reason to like REG: I expect bringing more poker players into the EA movement will be good for our culture if poker is effective rationality training. (But I still think a profession where people are paid to make accurate predictions, say successful stock pickers, could be even better.)

undefined @ 2015-09-16T15:14 (+1)

Providing such an in-depth writeup is really useful, thanks. At the risk of derailing into an academic philosophy discussion, here are some clarificatory questions about what you value (which I'm particularly interested in because I think your values are relatively common among EAs):

I value having enjoyable experiences and avoiding unpleasant experiences. If I value these experiences for myself, then it’s reasonable for me to value them in general. That’s the two-sentence version of why I’m a hedonistic utilitarian.

Why do you think that these are the only things of value?

Pleasurable and painful experiences in non-humans have moral value. Non-humans includes non-human animals, computer simulations of sentient beings, artificial biological beings, and anything else that can experience pleasure and suffering.

Leaving aside (presumably hypothetical) computer simulations and artificial biological beings, do you think non-humans like chickens and fish have equally bad experiences in a month in a factory farm as a human would? If not, roughly how much worse or less bad would you guess they are? (I'm talking about a similar equivalence to that described in this Facebook poll, but focusing purely on morally relevant attributes of experiences.)

The best possible outcome would be to fill the universe with beings that experience as much joy as possible for their entire lives.

Can you give an example of the ideal form of joy? Would an intense, simple experience of physical pleasure be a decent candidate? (Picking an example of such an experience could be left as an exercise for the reader.)

I am unpersuaded by arguments of the form “utilitarianism produces an unintuitive result in this contrived thought experiment”

What's the most unintuitive result that you're prepared to accept, and which gives you most pause?

undefined @ 2015-09-16T16:13 (+4)

The great thing about nested comments is derailments are easy to isolate. :)

Why do you think that these are the only things of value?

I don't understand what it would mean for anything other than positive and negative experiences to have value. I believe that when people say they inherently value art (or something along those lines), the reason they say this is that the thought of art existing makes them happy and the thought of art not existing makes them unhappy, and it's the happy or unhappy feelings that have actual value, not the existence of art itself. If people thought art existed but it actually didn't, that would be just as good as if art existed. Of course, when I say this, you might react negatively to the idea of art not existing even if no one knows it doesn't exist; but in that case you know that it doesn't exist, so you still experience the negative feelings associated with art not existing. If you didn't experience those feelings, it wouldn't matter.

do you think non-humans like chickens and fish have equally bad experiences in a month in a factory farm as a human would?

I expect there's a high probability (maybe 50%) that factory farms are just as bad for chickens as they are for humans, and a somewhat lower probability (maybe 25%) that they are just as bad for fish. I expect it's more likely that factory farms are worse for humans than that they're worse for chickens/fish, so in expectation, they're worse for humans, but not much worse.

I don't know how consciousness works, although I believe it's fundamentally an empirical question. My best guess is that certain types of mental structures produce heightened consciousness in a way that gives a being greater moral value, but that most of the additional neurons that humans have do not contribute at all to heightened consciousness. For example, humans have tons of brain space devoted to facial recognition, but I don't expect that we can feel greater levels of pleasure or pain as a result of having this brain space.

Can you give an example of the ideal form of joy?

The best I can do is introspect about what types of pleasure I enjoy most and how I'm willing to trade them off against each other. I expect that the happiest possible being can be much happier than any animal; I also expect that it's possible in principle to make interpersonal utility comparisons, so we could know what a super-happy being looks like. We're still a long way away from being able to do this in practice.

What's the most unintuitive result that you're prepared to accept, and which gives you most pause?

There are a lot of results that used to make me feel uncomfortable, but I didn't consider this good evidence that utilitarianism is false. They don't make me uncomfortable anymore because I've gotten used to them. Whichever result gives me the most pause is one that I haven't heard of before, so I haven't gotten used to it. I predict that the next time I hear a novel thought experiment where utilitarianism leads to some unintuitive conclusion, it will make me feel uncomfortable but I won't change my mind because I don't consider discomfort to be good evidence. Our intuitions are often wrong about how the physical world works, so why should we expect them to always be right about how the moral world works?

At some point we have to use intuition to make moral decisions--I have a strong intuition that nothing matters other than happiness or suffering, and I apply this. But anti-utilitarian thought experiments usually prey on some identifiable cognitive bias. For example, the repugnant conclusion takes advantage of people's scope insensitivity and inability to aggregate value across separate individuals.