How to get a new cause into EA
By Joey 🔸 @ 2018-01-10T06:41 (+46)
Effective altruism has had three main broad direct causes (global poverty, animal rights, and the far future) for quite some time. I have often heard people worry that it's too hard for a new cause to be accepted by the effective altruism movement. At the same time, I do not really see people presenting new cause areas in the way I think would be most likely to get many EAs to consider them seriously. I wanted to make a quick reference post as a better way for people to propose new cause/intervention areas they see as promising. Much of this advice could also be used to present a new specific charity within a known high-impact cause area.
1) Be ready to put in some real time
2) Have a specific intervention within the broad cause to compare
3) Compare it to the most relevantly comparable top cause
4) Compare it numerically
5) Focus on one change at a time
6) Use equal rigour
7) Have a summary at the top
1) Be ready to put in some real time
Comparing different cause areas is hard and takes a good amount of research time. In the effective altruism movement there are many people and organizations who work full time comparing interventions within a single cause, and it's generally much harder to compare interventions across cause areas. It's going to take some time to effectively articulate a new cause area, particularly if the EA movement has not spent much collective time considering it. It's not expected, or even possible, that one person does all the research required for a whole cause area, but if you think a cause area is competitive, you will likely have to be the first to do some of the initial research and start building a case for why others should consider it. For EAs to really consider a cause, it has to stand out among the hundreds of other causes that could be high impact to work on.
2) Have a specific intervention within the broad cause to compare
As mentioned above, comparing whole cause areas is hard. In many ways it's also not the point. If cause area A is more effective than global poverty on average, but none of the specific interventions in it can compete with the best global poverty charities (e.g., AMF), it will still not be a great target for resources. Additionally, it's much harder to get into the details of a whole cause area, which will often contain numerous different interventions. The best way around these concerns I have seen is to drill down on an example of a highly promising intervention. For example, if you are making the case that mental health in the third world is a high-impact cause area, look deeply into a specific example, like CBT cell phone applications. With a more specific intervention, it will be easier to fact-check and to numerically compare it against the other top interventions EAs currently support.
3) Compare it to the most relevantly comparable top cause
A huge number of proposed causes are not directly compared to the most relevant comparable cause area. If someone is making a case for positive psychology and mental health, the natural comparison is to the GiveWell top charities. If it's about wild animal suffering, it needs to be compared to farm animal interventions, and if it's about bio-risk, it could be compared to AI. If someone is sold on the far future and pitching a new cause area within it, making generalized arguments about the far future being better than AMF is not going to do much work convincing people. Most EAs will have already heard the AMF vs. AI comparison; those arguments will not be new or persuasive to AMF supporters, and they do nothing to compare the cause to its real competition, AI. Some cause areas might be amenable to multiple comparisons (a bio-risk case could be made as a far future case compared to AI, or as a direct DALYs-averted case compared to AMF), but in any case, try to compare it to the cause that contains the sorts of people who are most likely to find your new proposed cause high impact.
4) Compare it numerically
Effective altruists are a quantitative bunch, and numerical comparisons are basically necessary for seriously comparing the good done by different cause areas and interventions. There are a lot of ways to do this, but a safe bet would be a cost-effectiveness analysis in a spreadsheet or Guesstimate model. As mentioned above, depending on the most relevant cause you are comparing to, you will generally want to model things in that context. That would generally mean DALYs or cost per life saved for global poverty, animal DALYs for animals, or percent chance of affecting long-term society for the far future. Cross-comparing metrics deserves a blog post in and of itself, and it's not going to be best presented while simultaneously presenting a new cause area. This leads well into my next point.
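Before moving on, here is a minimal sketch (in Python, with made-up placeholder numbers rather than real estimates) of the kind of comparison a simple CEA spreadsheet or Guesstimate model encodes:

```python
# Minimal cost-effectiveness comparison. Every input below is a
# placeholder assumption for illustration, not a real estimate.

def cost_per_daly(program_cost, people_reached, dalys_averted_per_person):
    """Dollars spent to avert one DALY."""
    return program_cost / (people_reached * dalys_averted_per_person)

# Hypothetical new intervention: a CBT cell phone app for depression.
new_intervention = cost_per_daly(program_cost=100_000,
                                 people_reached=5_000,
                                 dalys_averted_per_person=0.05)

# Benchmark: a GiveWell-style top charity (figures illustrative only).
benchmark = cost_per_daly(program_cost=100_000,
                          people_reached=20_000,
                          dalys_averted_per_person=0.02)

print(f"New intervention: ${new_intervention:,.0f} per DALY averted")
print(f"Benchmark:        ${benchmark:,.0f} per DALY averted")
```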
5) Focus on one change at a time
Often when people present new cause areas, they come with a lot of other proposed changes. These could be ethical (e.g., we should have X view on population ethics), epistemic (e.g., we should value historical evidence more), or logistical (e.g., we should use this CEA software even though it's harder for beginners to read). All of these might be worthwhile changes for the EA movement to make, but when they are conflated with a suggested cause, I have generally seen people dismiss the cause because of the other claims associated with it. For example: "Only negative-leaning utilitarians think cause X is important, and I am not negative-leaning." This often happens even with causes that have a very strong case under fairly traditional EA standards of evidence and ethics. If the cause area as a whole relies on an ethical or other assumption to be competitive, I would generally recommend writing about that assumption specifically before pitching a cause or intervention that relies on it.
6) Use equal rigour
A new cause does not just need to be compared; it needs to be compared with equal rigour, at least as far as possible. It's easy to point out flaws in one charity or cause area and only highlight the benefits of another, but without comparing them at the same level of rigour, the numbers will be useless next to each other. To use a clear example: I have seen bus ads that claim to save a life for $1, and yet I still donate to GiveWell charities which claim to save a life for $3,000. This is mainly because the calculations were done in completely different ways, even if both ended up expressed as dollars per life saved. I expect that if the $1 charity were evaluated using GiveWell's methodology, its cost-effectiveness would rapidly decrease. Likewise, if a cause area is presented with very optimistic estimates, it's hard to take the endline conclusion seriously, much like the bus ad.
This is an easy one to say but very hard to do in practice. The best approach I have found is to ask, "How would GiveWell (or ACE, etc.) model this?" and try to follow those principles. Another great approach is to get an EA or two whom you respect and who are not sold on your cause area to look over your numbers and suggest changes. People will suggest changes to almost any model, but if it's too far off a realistic number, many will not bother commenting on everything that needs to change. Lastly, keep in mind that logistical costs are easy to forget. Product X may only cost $1,000 and save a life's worth of DALYs, but what about shipping costs, staff overhead, government permissions, etc.? Underestimating these often significant costs is a common reason why CEAs get worse as people investigate deeper.
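To make the point about logistics concrete, here is a toy sketch (all figures invented for illustration) of how forgotten costs can move a headline number:

```python
# Toy illustration of how overlooked logistics change a CEA.
# Every figure below is made up for the example.

product_cost = 1_000          # headline cost per life's worth of DALYs
shipping = 300                # getting the product to the field
staff_overhead = 450          # distribution staff time, per unit
permits = 150                 # government permissions, amortised per unit

naive = product_cost
full = product_cost + shipping + staff_overhead + permits

print(f"Naive estimate: ${naive:,} per life")
print(f"With logistics: ${full:,} per life ({full / naive:.1f}x worse)")
```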
7) Have a summary at the top of a more in-depth review
Particularly for long posts, a summary at the top with the strongest arguments and endline conclusions makes it much easier for readers to decide whether to commit to the whole post. It also allows engagement from people who do not have time to dig into all the details.
Why bother pitching a new cause within EA?
Following all these steps is a lot of work, and that energy and time could instead be put into furthering the cause directly or earning money and donating to it. Despite this, I think in almost all cases it is worth presenting a new cause area to EA if it could plausibly be competitive. The EA community directs large amounts of money, both directly through earning to give and indirectly by influencing large foundations. Historically, very underfunded causes like AI x-risk and farm animal rights have both massively benefited from EA financial support. In addition, the EA movement directs talent towards high-impact cause areas: new charities are founded, Ivy League graduates apply for jobs, and volunteer research is done in areas that are seen as high impact. Even if a well-written cause report takes 20 hours or more, the benefits can be much larger if even a small percentage of the EA community is convinced the cause area is worthwhile.
[anonymous] @ 2018-01-10T12:04 (+15)
This comment is not directly related to your post: I don't think the long-run future should be viewed as a cause area. It's simply where most sentient beings live (or might live), and therefore it's a potential treasure trove of cause areas (or problems) that should be mined. Misaligned AI leading to an existential catastrophe is an example of a problem that impacts the long-run future, but there are so, so many more. Pandemic risk is a distinct problem. Indeed, there are many more problems even if you're just thinking about the possible impacts of AI.
Jacy @ 2018-01-13T15:14 (+17)
I'd go further here and say all three (global poverty, animal rights, and far future) are best thought of as target populations rather than cause areas. Moreover, the space not covered by these three is basically just wealthy modern humans, which seems to be much less of a treasure trove than the other three because WMHs have the most resources, far more than the other three populations. (Potentially there are also medium-term future beings as a distinct population, depending on where we draw the lines.)
I think EA would probably be discovering more things if we focused on looking not for new cause areas but for new specific intervention areas, comparable to individual health support for the global poor (e.g. antimalarial nets, deworming pills), individual financial help for the global poor (e.g. unconditional cash transfers), individual advocacy of plant-based eating (e.g. leafleting, online ads), institutional farmed animal welfare reforms (e.g. cage-free campaigns), technical AI safety research, and general extinction risk policy work.
If we think of the EA cause area landscape in "intervention area" terms, there seems to be a lot more change happening.
[anonymous] @ 2018-01-16T15:58 (+3)
I think this is a good point; you may also be interested in Michelle's post about beneficiary groups, my comment about beneficiary subgroups, and Michelle's follow-up about finding more effective causes.
[anonymous] @ 2018-01-13T18:22 (+2)
are best thought of as target populations rather than cause areas ... the space not covered by these three is basically just wealthy modern humans
I guess this thought is probably implicit in a lot of EA, but I'd never quite heard it stated that way. It should be stated more often!
That said, I think it's not quite precise. There's a population missing: humans in the not-quite-far-future (e.g. 100 years from now, which I think is not usually included when people say "far future").
[anonymous] @ 2018-01-16T02:43 (+2)
I agree with Jacy. Another point I'd add is that effective altruism is a young movement, one focused on making updates and changing its goals as new and better information gets integrated into our thinking. This leads various causes, interventions, and research projects in the movement to undergo changes which make them harder to describe.
For example, for a long time in EA, "existential risk reduction" was associated primarily with AI safety. In the last few years, ideas from Brian Tomasik have materialized in the Foundational Research Institute and its focus on "s-risks" (risks of astronomical suffering). At the same time, organizations like ALLFED are focused on mitigating existential risks which could realistically happen in the medium-term future, i.e., the next few decades, but the interventions themselves aren't as focused on the far future, e.g., the next few centuries and beyond.
However, within x-risk and s-risk reduction in EA, AI safety research dominates as the favoured intervention, with a focus motivated by astronomical stakes. Lumping that all together could be called a "far future" focus. Meanwhile, 80,000 Hours advocates the term "long-run future" for a focus on risks extending from the present to the far future which depends on policy regarding all existential risks, including s-risks.
I think finding accurate terminology for the whole movement to use is a constantly moving target in effective altruism. Obviously, using common language optimally would be helpful, but debating and then coordinating usage of common terminology also seems like a lot of effort. As long as everyone is roughly aware of what each other is talking about, I'm unsure how much of a problem this is. It seems professional publications from EA organizations, being longer reports which can afford the space to define terms, should do so. The EA Forum is still a blog, so given that it's regarded as lower-stakes, I think it makes sense to be tolerant of differing terminology, although of course clarifications or expansions of definitions should be posted in the comments, as above.
[anonymous] @ 2018-01-14T00:35 (+1)
In the same vein as this comment and its replies: I'm disposed to framing the three as expansions of the "moral circle". See, for example: https://www.effectivealtruism.org/articles/three-heuristics-for-finding-cause-x/
[anonymous] @ 2018-01-10T12:18 (+12)
Effective altruism has had three main broad direct causes (global poverty, animal rights, and the far future) for quite some time.
The whole concept of EA having specific recognizable compartmentalized cause areas and charities associated with it is bankrupt and should be zapped, because it invites stagnation: founder effects entrench further every time a newcomer joins and devotes mindshare to signalling ritual adherence to the narrative of different finite tribal Houses to join, build alliances between, or cannibalize. This crowds out new classes of intervention and eclipses the prerogative to optimize everything as a whole, without all these distinctions. "Oh, I'm an (animal, poverty, AI) person! X-risk aversion!"
"Effective altruism" in itself should be a scalable, cause-neutral methodology de-identified from its extensional recommendations. It should stop reinforcing these arbitrary divisions as though they were somehow sacrosanct. The task is harder when people and organizations ostensibly about advancing that methodology settle into the same buildings and object-level positions, or when charity evaluators do not even strive for cause-neutrality in their consumer offerings. I'm not saying those can't be net goods, but the effects on homogenization, centralization, and bias all restrict the purview of Effective Altruism.
I have often heard people worry that it’s too hard for a new cause to be accepted by the effective altruism movement.
Everyone here knows there are new causes and wants to accept them, but they don't know that everyone knows there are new causes, and so on: a common-knowledge problem. They're waiting for chosen ones to update the leaderboard.
If the tribally-approved list were opened it would quickly spiral out of working memory bounds. This is a difficult problem to work with but not impossible. Let's make the list and put it somewhere prominent for salient access.
Anyway, here is an experimental Facebook group explicitly for initial cause proposal and analysis. Join if you're interested in doing these!
[anonymous] @ 2018-01-11T17:57 (+2)
One thing that's very useful about having separate cause areas is that it helps people decide what to study and research in depth, e.g. get a PhD in. This probably doesn't need to be illustrated, but I'll do it anyway:
If you consider two fields of study, A and B, such that A has only one promising intervention and B has two, and all three interventions are roughly equal in expectation (or by whatever other measures are important to you), then it would be better to study B: if one of its two interventions doesn't pan out, you can more easily switch to the other, whereas with A you might have to move to a new field entirely. Studying B actually has higher expected value than studying A, despite all three interventions being equal in expectation.
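As a toy calculation (numbers purely illustrative): suppose each intervention pans out independently with probability p and is worth v if it does, and that someone in field B can fall back on its second intervention if the first fails:

```python
# Toy option-value calculation with made-up numbers.
p, v = 0.5, 100  # chance an intervention pans out, and its value if so

# Field A: one promising intervention, no in-field fallback.
ev_a = p * v

# Field B: two interventions; you succeed unless both fail.
ev_b = (1 - (1 - p) ** 2) * v

print(f"EV of studying field A: {ev_a}")  # 50.0
print(f"EV of studying field B: {ev_b}")  # 75.0
```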
[anonymous] @ 2018-01-12T00:11 (+9)
I worry you've missed the most important part of the analysis. If we think about what it means for a "new cause to be accepted by the effective altruism movement", that would probably be either:
It becomes a cause area touted by EA organisations like GiveWell, CEA, or GWWC. In practice, this involves convincing the leadership of those organisations. If you want to get a new cause in via this route, that's the end goal you need to achieve; writing good arguments is a means to that end.
You convince individual EAs to change what they do. To a large extent, this also depends on convincing EA-org leadership, because that's who people look to for confirmation that a new cause has been vetted. It isn't necessarily stupid for individual EAs to defer to expert judgement: they might think, "Oh, well, if so-and-so aren't convinced about X, there's probably a reason for it."
This seems as good a time as any to re-plug the stuff I've done. I think these posts mostly meet your criteria, but fail in some key ways.
I first posted about mental health and happiness 18 months ago, explaining why poverty is less effective than most think and mental health more effective. I was, at the time, lacking a particular charity recommendation (I now think BasicNeeds and StrongMinds look like reasonable picks); I agree it's important that new cause suggestions have a "shovel-ready" project.
I argued that you, whoever you are, probably don't want to donate to the Against Malaria Foundation, explaining that it's probably a mistake for EAs to focus too much on "saving lives" at the expense of either "improving lives" or "saving humanity".
Back in August, I explained why drug policy reform should be taken seriously as a new cause. I agree that lacks a shovel-ready project too, but, if anything, I think there was too much depth and rigour there. I'm still waiting for anyone to tell me where my EV calcs have gone wrong and why drug policy reform wouldn't be more cost-effective than anything in GiveWell's repertoire.
[anonymous] @ 2018-01-12T06:02 (+5)
I'm still waiting for anyone to tell me where my EV calcs have gone wrong
For what it's worth, we had some back & forth regarding modeling assumptions around drug policy reform cost-effectiveness:
http://effective-altruism.com/ea/1em/costeffectiveness_analysis_drug_liberalization/bx1
[anonymous] @ 2018-01-12T11:08 (+1)
I remember. I don't think we quite got to the bottom of the issue, however, and we couldn't agree on what the right counterfactual was.
[anonymous] @ 2018-01-12T16:32 (+4)
Sure, but I don't think the right summary here is "no one has told me how my EV calc is wrong."
A better summary probably includes something like "EV calcs are complicated and their outputs are very sensitive to the modeling assumptions used."
[anonymous] @ 2018-01-12T17:43 (+4)
Yes. I think I was over-selling my point and that was a mistake. Our back and forth was useful and I'll have to think about it again when I look at DPR again.
By way of explanation, I think I was venting my frustration at the ratio of "time I spent researching and writing about drug policy reform : serious interest it received".
[anonymous] @ 2018-01-13T12:32 (+4)
I think you're right that having "an organization" talking about X is necessary for X to reach "full legitimacy", but it's worth pointing out that many pioneers in new areas within EA just started their own orgs (ACE, MIRI etc.) rather than trying to persuade others to support them.
Having even a nominal "project" allows you to collaborate more easily with others and starts to build credibility that isn't just linked to you. I think perhaps you should just start MH&HR.
[anonymous] @ 2018-01-13T19:17 (+1)
Interesting thoughts, actually...
MH&HR.
What does the R stand for?
[anonymous] @ 2018-01-18T21:53 (+1)
"Mental Health and Happiness Research". Coin your own meaningless acronym if you don't like it :)
[anonymous] @ 2018-01-12T16:05 (+4)
I'm still waiting for anyone to tell me where my EV calcs have gone wrong and why drug policy reform wouldn't be more cost-effective than anything in GiveWell's repertoire.
One thing I'd note here is that the rigor of GiveWell's analysis versus your EV calcs is very different. There are other EV calcs out there with similar rigor that promise significantly more good stuff per dollar, such as most work in the far-future cause space.
I argued that you, whoever you are, probably don't want to donate to the Against Malaria Foundation, explaining that it's probably a mistake for EAs to focus too much on "saving lives" at the expense of either "improving lives" or "saving humanity".
I'd also note that GiveWell replied to your argument here: https://blog.givewell.org/2016/12/12/amf-population-ethics/
[anonymous] @ 2018-01-12T17:49 (+2)
the rigor of GiveWell's analysis versus your EV calcs is very different
Sort of a side question, but could you say what sort of thing you had in mind, i.e., the particular sense in which GW's calcs are rigorous? I ask because I find their assumptions odd/pretty dissatisfying and think they leave out loads of stuff. I mean to write about this when I find time.
This isn't to say my calculations are more rigorous than theirs. GW have loads more detail.
[anonymous] @ 2018-01-29T15:56 (+1)
I think I'd broadly model rigor in a framework like this as the standard deviation of the cost-effectiveness estimate when using an X% credibility interval (where X% is consistent across all compared intervals). Models with lower standard deviations can be said to be more rigorous, as there are fewer (known) sources of uncertainty.
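A minimal Monte Carlo sketch of that framework (hypothetical model and inputs; the spread of the simulated cost-effectiveness stands in for rigor):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
COST = 100_000  # hypothetical total program cost

def cost_per_daly(effect_low, effect_high):
    """Simulate cost per DALY with uncertain reach and effect size."""
    reach = rng.normal(10_000, 1_000, N).clip(min=1)
    effect = rng.uniform(effect_low, effect_high, N)
    return COST / (reach * effect)

well_studied = cost_per_daly(0.04, 0.06)   # narrow effect-size uncertainty
speculative = cost_per_daly(0.005, 0.10)   # wide effect-size uncertainty

for name, sample in [("well-studied", well_studied),
                     ("speculative", speculative)]:
    lo, hi = np.percentile(sample, [5, 95])
    print(f"{name}: 90% interval ${lo:,.0f}-${hi:,.0f} per DALY, "
          f"sd = {sample.std():,.0f}")
```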
[anonymous] @ 2018-01-15T01:51 (+3)
So I think we agree on some things and disagree on others. I think that getting large EA organizations to adopt the cause definitely helps but is not necessary. Animal rights as a whole, for example, is not mentioned at all on GiveWell or GWWC, and it's listed as a 2nd-tier area by 80,000 Hours (bit.ly/2DdxCqQ), but it is still pretty clearly endorsed by EA as a whole. If by EA orgs you mean EA orgs of any size, I do think that most cause areas accepted by the EA movement will get organizations started in them in time. I think that causes like wild animal suffering and positive psychology are decent examples of causes that have gotten some traction without major pre-existing organizations endorsing them. It might also come down to disagreements about definitions of "in EA".
I almost put your blog posts into this post as a positive example of what I wish people would do, but I wanted to keep the post to a lower length. In general, I think your efforts on mental health have updated more than a few EAs in positive directions towards it, including myself. There has been some related external content and research on this topic partly because of your posts, and I would put a nontrivial chance on some EAs in the next 1-5 years focusing exclusively on this cause area and starting something in it. In general, I would expect adoption of new causes to be fairly slow, starting with small numbers of people and maybe one organization before expanding to the standard go-to EA list.
I think if I were to guess what is holding back mental health/positive psychology as a cause area, it would be the lack of a really strong concrete charity to donate to. By a strong charity, I mean one with a strong CEA, but also a focus on a narrow set of interventions, a decent evidence base/track record, strong M&E, and decent investigation by an external EA party (which would not have to be an org; it could be an individual). Something like StrongMinds might be a good fit for this.
[anonymous] @ 2018-01-16T03:00 (+1)
I think this is missing some prior steps as to how a cause can be built up in the effective altruism movement. For example, a focus on risks of astronomical future suffering ("s-risks") and reducing wild animal suffering (RWAS), both largely inspired in EA by Brian Tomasik's work, have found success in the German-speaking world and increasingly globally throughout the movement. These are causes which have both largely circumvented attention from either the Open Philanthropy Project (Open Phil) or the Centre for Effective Altruism (CEA) and its satellite projects (e.g., GWWC, 80,000 Hours, etc.).
Since the beginning of effective altruism, global poverty alleviation and global health have been the biggest focus areas. As the movement grew, I watched causes develop through a mix of online coordination at the global level (with social networks like Facebook, mailing lists, and fora like LessWrong) and local or regional non-profit organizations focusing on outreach and research. This was the case for both AI safety and farm animal welfare, which proportionally didn't have nearly the representation in EA five years ago that they have now.
Certainly, smaller focus areas like s-risk reduction and RWAS are receiving much less attention than others in EA. However, the fact that across multiple organizations each of those causes is funded by between $100k and $1 million USD, largely from individual effective altruists, is proof of concept that a cause can be built up without being touted by CEA or Open Phil. And it's not as if the trajectory of these causes looks bleak: they've been building momentum for years and show no signs of slowing. How much success they achieve in the near future will provide more data about what's possible in getting a new cause into EA. What's more, RWAS at least is a cause that's on Open Phil's radar, so it's not as though grants or endorsements of these causes from Open Phil or CEA couldn't happen in the future.
In general, I think developing a cause within the effective altruism community often precedes focus on it from the movement's flagship organizations, and the process of development often takes the form of the kinds of steps Joey outlined above. Obviously there could be more to the process than just that. I'm working on a post to introduce a project which builds on the kinds of steps Joey pointed out, and you've already taken, to organize and coordinate causes in effective altruism.
[anonymous] @ 2018-01-13T12:35 (+2)
Great post!
("CEA" in the post refers to "cost-effectiveness analysis" – maybe explain the term the first time you use it? It can be confusing to those who know the Centre for Effective Altruism but not the abbreviation for cost-effectiveness analysis.)
[anonymous] @ 2018-01-11T08:41 (+1)
I'd also add: get a group of people together. The easiest way is to create a Facebook group and promote it. Getting a new cause into EA is a huge amount of work, so you don't want to try to do it single-handed.