EAs should recommend cost-effective interventions in more cause areas (not just the most pressing ones)

By Amber Dawn @ 2022-09-01T17:45 (+37)

You can be effectively altruistic about lots of things
 

[I drafted this several months ago so the ‘topical’ references are out of date, but the broad point still stands. I guess this is a strong opinion, weakly held; tell me in the comments why I’m wrong, or tell me about people who are already doing what I suggest.]

 

A few days after Roe v Wade was overturned, a Forum user submitted a question asking ‘What actions [are] most effective if you care about reproductive rights in America?’ Before tumbling unmourned into the oblivion of low-karma Forum posts, this question received some supportive responses, as well as some pushback: in particular, Larks said ‘I don't think we should encourage people to post about their personal cause with indifference to whether it could be highly effective’, and expressed the opinion that abortion access ‘seem[s] unlikely to be a top priority by cause-neutral EA lights’. 

I agree that increasing access to abortion in rich countries is not likely to be able to compete with animal suffering, neglected diseases, or existential risk on any reasonable ITN framework. But I disagree that people shouldn’t post about more ‘personal’ causes on the Forum. In this post, I argue that it’s both valid and useful for (some) EAs to research cost-effective interventions for problems outside of the canonical EA cause areas, even if they acknowledge that those problems are not nearly as important, neglected or tractable as the traditional EA causes. Because having a noun is grammatically useful and EA clearly doesn’t have enough jargony acronyms, I’m going to call this sort of content ‘less-important cause-area analysis’, or LICA. 

 

I’m not arguing that EAs who are currently working on central cause areas should drop everything to produce LICA. What I am arguing is that EAs who already care about a less central problem, or who are well-placed to assess interventions in it, should feel free to apply EA-style analysis to it. For example:

If you’re into feminist activism as well as EA, do an EA-flavoured analysis of organisations that tackle domestic violence or systemic bias. If you worked for a homelessness charity before getting into EA, write your thoughts about the most effective ways to combat homelessness in rich cities. If you’re bothered by a brutal armed conflict, think about what the most effective ways to help might be. 

 

I don’t think this opinion is unheard-of: I’m aware that some EA individuals and organizations are already doing exactly what I’m recommending.

However, since I’ve seen some pushback to this idea, and in practice LICAs seem to be rare and under-promoted relative to work in the central, canonical cause areas, I think it’s worth laying out the case for research into less-central problems.

What sort of problem are you talking about?

Anything that any EA thinks is a serious problem in the world (even if it’s not among the most pressing). For example:

 

-racism, sexism, homophobia, transphobia, other bigotry

-(relative) poverty in rich countries

-institutional corruption

-interpersonal violence

-‘systemic’ problems in politics or the economy
 

I was inspired to write this by the Roe v Wade overturning (yes, this post has indeed languished in my drafts for a long time :p), and I often think about this when there is some big crisis in the news that all of my non-EA Facebook friends are posting about. I think ‘it would be cool if EAs worked out the best way to help with this, so I could recommend something actually-useful to my friends who care about this issue!’ But I wouldn’t limit this to problems that make the news: it could be anything that any EA thinks is a significant and somewhat-tractable problem. 

Why should EAs research less-important problems?

More impact

Straightforwardly, I think we leave impact on the table when we don’t produce LICAs.

Imagine that every time there was a big crisis in the news, some EAs produced well-researched, sensible lists of the most plausibly-effective ways for people to help with that crisis. The lists would be produced voluntarily by EAs who were passionate about or informed about the cause, and shared widely by other EAs. 
 

For example,

-when abortion is outlawed, produce lists of ways to fight against the decision

-if there is a high-profile war or genocide, produce lists of the best ways to help the victimized people

-if there is a racist attack, produce lists of the best ways to concretely support people in the attacked group

 

This would be a great way to harness an immense amount of altruistic energy which currently gets (largely) frittered away by people donating to ineffective organisations or taking ineffective actions.

For a fictional example: imagine Jo, a middle-aged liberal woman from the US. She was a radical feminist in the 70s and has attended abortion protests since her teens; she had an abortion herself in her twenties. She remembers dancing jubilantly when Roe v Wade was first decided, and she’s horrified that we’re going backwards. She’s desperate to do what she can to help with this terrible crisis for women. She’s heard of Effective Altruism a bit, but she’s not very interested in it. EAs seem to have their hearts in the right place, but it all seems a bit abstract and blokey and philosophical to her. 

You’re unlikely to persuade Jo to stop putting her time, energy and money into reproductive rights, and instead to support one of the central EA causes. But you’re more likely to be able to persuade her to support more-effective interventions or charities within US reproductive rights. Jo really cares about this - she really does want to help more people have access to safe abortions, and she’s seen first-hand how some organizations look impressive, but don’t actually help pregnant people in need. 

I think that if EAs assessed charities working on reproductive rights and promoted the top ones, that could direct many people’s time and donations towards more-effective interventions and away from less-effective ones. And, as in other cause areas, it’s possible that within reproductive rights impact is heavy-tailed and some interventions are orders of magnitude more cost-effective than others.
 

I’ve used reproductive rights in the US as a topical (ha) and vivid example, but I think that this is true for any number of social problems that are serious, but that are not traditionally prioritized by EAs because they don’t score highly on the ITN framework. In short, producing LICAs could increase the amount of impactful work that is done to address serious problems - which are still worth solving even if they’re not the most pressing.

It could help EA reach more people

As well as directing donations and time towards effective organizations, producing LICAs might be an effective way to introduce the EA mindset and worldview to people who might be receptive to it. Most people do care about impact and effectiveness: if you ask pretty much anyone ‘do you want to do good effectively, or ineffectively?’ they’ll be like, ‘uh…effectively, obviously?’


On the other hand, most people aren’t super-receptive to ‘hey, you know this serious social issue that you care about deeply and that you’ve worked on for years? It’s probably not the optimal thing for you to be working on’. If we produce EA recommendations in non-EA cause areas, this is a way to introduce EA ways of thinking to people in a way that validates their existing intuitions and history. 

Some people who like the recommendations might become excited about EA generally, join the movement, and even switch to working on more pressing central EA causes over time. Many EAs have changed their altruistic priorities through their interaction with the community. However, even if they don’t do this, they’ll still be doing more good than they otherwise would have done, which is a win. Even if some of the people reading these recommendations are never going to become ‘full EAs’, it still might encourage them to be more reflective about their altruism and give them some tools for thinking about it.

It could improve EA’s reputation

This is related to the point above. If EAs are seen to be coming up with helpful solutions in crises, that might build EA’s reputation as a movement of people who earnestly and impartially want to do good across a variety of domains. There have been some high-profile critiques of EA recently. Many EAs have suggested that rather than responding directly to critics, EAs should just produce more positive content. This is all very well, but for that to work, people need to actually see the positive content. I think it would enhance EA’s reputation if EAs were seen to care about the types of absolutely serious, if relatively small, problems that average people care about. 

I think there is a risk here of being manipulative. Ozy Brennan has rightly criticized EAs for strategically introducing EA with less-weird causes so as not to scare people off (a strategy known as ‘milk before meat’). I think it would be bad if EAs produced LICAs for the sole purpose of laundering EA’s reputation or seeming less weird. Instead, my suggestion is that people who already care about certain problems, and/or people who are in a good position to assess interventions in a certain area, produce LICAs if they want…and that they primarily produce them in order to generate more impact. However, a positive consequence of this is that EA would (accurately!) acquire a reputation for being less insular, less dogmatic, and more in touch with what the average person cares about.

You don’t need to establish that something beats existing causes before you start working on it

If it became the norm to produce and support LICA, this might actually incentivize work on ‘Cause Xs’ that could rival central cause areas.  Here’s an imaginary example:
 

A is a therapist, and is (emotionally) very invested in promoting certain therapeutic techniques and communication strategies. Their background makes them well-placed to work on this sort of thing. Also, they kinda-sorta suspect that techniques promoting better mental health and happiness could rival the top EA cause areas, at least for people who share certain crucial values or beliefs. 

But A worries that this suspicion is just motivated reasoning because this cause is important to them. Everyone around them seems really into existential risk and like…they guess that this really is the most important cause? It just doesn’t really click with them. 

I can imagine that if A is especially conscientious and has some free time, they might start to build a case that increasing access to certain therapies really is comparable to the top causes, because if they can make that case, then they are ‘allowed’ to work on this. In the worst case, they can’t really make an argument for this, and are stuck in a limbo of feeling emotionally motivated to work on a cause that is important but not the most important, but feeling epistemically bad about this and unsupported by the community. In the best case, they do write a convincing case for the new cause that maybe generates some discussion, but they’ve spent a lot of time doing that when they could have just started looking into interventions for the problem directly. 


To be clear: it’s good to compare cause areas and to make the case that new cause areas compare to others in terms of potential impact. It would greatly diminish EA if EAs became relativistic and anything-goes. But at the same time, I think it’s counter-productive to say ‘you can’t start working on this thing that you’d be good at working on until you’ve established that it’s comparable to AI risk/malaria nets/factory farming’. 

Objections and responses

This might be impactful, but it’s not as impactful as other things EAs could be doing

Objection: This might be useful and impactful, but it’s not the most useful or the most impactful thing an EA could do. Cost-effectiveness analysis takes a lot of time! If you’re able to produce high-quality cost-effectiveness analysis of reproductive rights interventions, maybe you could do it for more pressing cause areas instead. 

Response: though there are more EA jobs and funding than before, there are still many EAs who want to contribute to the community but can’t find, or haven’t yet found, an impactful career that’s a good fit. As I’ve said, I don’t think people in highly-impactful roles should spend time doing this. But I think lots of EAs could produce LICAs without this trading off against more impactful activity, for example:

-students with some free-time

-people who are unemployed or who work part-time 

-people who are just willing to volunteer some free time 

A big uncertainty I have here is that I don’t know how many hours it would take to produce analyses that are actually useful. I’ve never done cost-effectiveness analysis myself, and I’m uncomfortably aware that this might be coming across as ‘EA should do [extremely challenging and complicated thing that I have no intention of doing].’ From a quick google, Charity Entrepreneurship’s process for assessing interventions seems to take many hundreds of hours. 

Then again, maybe the bar for CE is higher, because they’re trying to found charities in areas that are extremely pressing. Since less-important causes are less important, it might be justifiable to do a much quicker and less rigorous job - this might still be substantially better than nothing.

For example, in May 2021 when Covid cases rose rapidly in India, several EAs produced analyses of Covid charities in India. Since this was done in response to the crisis, I assume they can’t have spent many hundreds of hours on this; but these lists were very useful to me, and I donated to some of those organizations and posted the recommendations on my Facebook, where, I hoped, they might influence the donations of non-EA Facebook friends who were distressed by the crisis. 

EAs will sometimes disagree on whether something is an important problem at all

On the post about abortion access, Larks pointed out that some EAs might not even be convinced that improving abortion access is good, let alone a pressing issue. Similarly, in polarizing international conflicts, EAs might disagree about which side is the aggressor and which is the victim. Generally, many newsworthy, emotive societal problems are highly politicized, and EAs tend to be wary of politics.

My response is: so what? EAs disagree with each other about which causes are the most important all the time! That’s kind of our whole deal. If you’re a pro-life EA reading a list of interventions to improve abortion access produced by a pro-choice EA, this doesn’t seem fundamentally different to me than if you’re an EA committed to improving farmed animal welfare reading a post about AI risk. 

Admittedly, this isn’t exactly analogous: pro-lifers and pro-choicers tend to feel extremely adversarial towards each other and struggle to engage with each other charitably, in a way that’s not true for, e.g., EAs who prioritize existential risk vs EAs who prioritize mental health. But I think the difference is one of degree rather than kind. If you don’t believe in the act/omission distinction - which is true for many EAs - you arguably should see EAs who disagree with you as doing something very harmful, since by your lights their omissions are as bad as harmful acts. I’m not arguing that EAs who disagree should be more vitriolic towards each other; rather, I trust EAs to be able to engage with their characteristic charity and goodwill even on emotive and politicized issues. 

We shouldn’t be driven by our emotions

I speculate that many EAs choose not to respond to emotive newsworthy crises or popular hot-button issues, because they think that this sort of knee-jerk responsiveness to the news cycle and popular sentiment is a fundamentally flawed way to approach altruism and caring. The central EA cause areas are not acute newsworthy crises [unless you have extremely pessimistic AI timelines], but ongoing atrocities - threats of extinction that are ever-present and intensifying, the injustice of poverty and preventable disease, the horror of animal suffering. It is greatly to EA’s credit that we are willing to care about these crucial, terrible, unsexy problems. 

But we should be realistic about the fact that people, in general, are emotionally driven. I certainly am. It would be better, perhaps, if everyone decided what they care about in the (hypothetical) way that the (ideal) EA does: by thinking long and hard about different cause areas, forming an opinion on major philosophical issues, and painstakingly working out what problems they think are most pressing. But that’s not the world we live in. In other areas, EAs are pretty pragmatic about working with existing conditions even if those conditions are suboptimal. This is another area where, in my opinion, EAs should be more willing to meet the world where it’s at.


 

I’m running out of time so I’ll forgo a snappy conclusion, but I’m interested in hearing all of your thoughts! Also, a bit of shameless self-promotion - I’m currently working as a writer and copy-editor for EAs, so if you like how this post was written, feel free to message me on the Forum, or fill out this (somewhat out-of-date) Expression of Interest form.


ColdButtonIssues @ 2022-09-01T18:37 (+19)

"Imagine that every time there was a big crisis in the news, some EAs produced well-researched, sensible lists of the most plausibly-effective ways for people to help with that crisis. The lists would be produced voluntarily by EAs who were passionate about or informed about the cause, and shared widely by other EAs. "

I agree that working on LICAs can be a good idea for individual EAs and I think your examples were well-chosen. I disagree that it is a good idea for the EA community or institutions to work on LICAs.

I completely agree that addressing important but "less important" issues can be a good use of time. If an individual EA could meaningfully improve US food bank management, that seems really good, even if rich country food banks aren't an EA cause overall.

For EA institutions to work on these issues is a bad idea, IMO. If Open Phil staked out a position on abortion, for instance, it would alienate a lot of people. The only reason for EA orgs to work on divisive issues is because they are so important that the importance offsets the costs of division.

Amber Dawn @ 2022-09-02T16:15 (+2)

Yeah, that is a good point. It makes a lot of sense for EA orgs to avoid divisive issues, particularly if they are not among the most pressing anyway. 

A friend pointed out elsewhere that if producing LICAs were the norm for institutions, you might end up with institutions producing recommendations on both sides of a contentious social issue - e.g., how to effectively improve abortion access, and how to effectively reduce it. This could be bad both for PR reasons (*everyone* would hate us!) and because different sets of EAs would essentially be doing work that cancels each other out.

Tyner @ 2022-09-01T18:57 (+6)

Another organization that is spending some time on this is sogive.org. They have impact assessments for groups like Planned Parenthood and Muslim Aid.

I can provide an anecdotal use case that is maybe not quite tackled in your write up. My mother-in-law is a retired dentist. She gives money to the American Dental Association every year. This strikes me as an ineffective organization mostly because US dentists are typically quite wealthy. If I told her "forget all that, give your money to Humane League/Helen Keller/Intelligence.org" I think it would be a non-starter. If I can tell her about a more effective way to help people with dental issues, that's a much shorter moral distance for her to travel. That was part of why I was really pleased to see Founders Pledge release their report on Oral Health recently https://founderspledge.com/stories/oral-healthcare 

Amber Dawn @ 2022-09-02T16:16 (+2)

Thanks! That's a really good example.

Matt_Sharp @ 2022-09-01T18:53 (+6)

This was intended to be part of SoGive's approach. Alongside ratings of how charities compared to top (~GiveWell) charities, we wanted to identify 'best in cause area' charities. Unfortunately no one wanted to pay us to do this, so we stopped.

One difficulty is that the range of cost-effectiveness in many cause areas is likely to be much smaller than (e.g.) for global health. This could mean the best charity is only 2-3x as good as an average charity in that cause area. And unless there is a lot of high-quality evidence, you might expect there to be a big overlap in the confidence intervals for the expected cost-effectiveness of the best and average charities, such that it's not clear which is actually the best.

Amber Dawn @ 2022-09-02T16:19 (+1)

Ah, that's interesting! :( that no-one wanted to pay you to do it. 

Why do you think the range of cost-effectiveness is greater in global health than in many other areas?

Karthik Tadepalli @ 2022-09-03T05:40 (+2)

This is likely if the maximum cost effectiveness is highest in global health compared to other areas. If global health is just a uniquely high leverage area - which is plausible, so many people in poor countries suffer from easily preventable diseases with terrible impacts - then it's just going to have an exceptionally high ceiling compared to areas where the suffering is less preventable or less impactful.

Ben Millwood @ 2022-09-01T18:40 (+6)

Another possible benefit is that doing cost-benefit analyses might make you better at doing other cost-benefit analyses, or give you other transferable skills or knowledge that are helpful for top-priority causes. I think that for all our enthusiasm about these kinds of assessments, we don't actually as a community produce that many of them. Scaling up the analysis industry might lead to all sorts of improvements in how quickly and accurately we can do them.

Ben Millwood @ 2022-09-01T18:47 (+3)

Though the flipside of this is I think we probably don't have a bunch of people sitting around like "ah, I would do a cost-benefit analysis, but none of the things to analyse are worth my time", so reading this post probably doesn't generate LICAs unless we also figure out what people are missing to be able to do more of this stuff.

I expect partly it's just that doing Real, Important Research is more intimidating than it deserves to be, and it would be useful to try to "demystify" some of this work a bit.

Amber Dawn @ 2022-09-02T16:27 (+1)

Yeah this is a really good point! Something I was kind of aware of while writing this is that I'm a hypocrite - I've never done this. It's probably really hard to do, and probably one reason why people don't do it that much is just 'it will take me ages and ages, no-one is paying me to do it, and I have a day job/studies/life to deal with'. 

I would definitely sign up for, like, a 'cost-effectiveness analysis 101 fellowship'. 

Kirsten @ 2022-09-02T10:04 (+5)

Thanks for writing this post Amber! My first and overwhelming reaction is to think of this post: https://forum.effectivealtruism.org/posts/Pz7RdMRouZ5N5w5eE/ea-should-taboo-ea-should

I think you get around this a little by suggesting this is a call for unorganized volunteers (to post their thoughts on the Forum?) but it's still more ambiguous than I would like

Amber Dawn @ 2022-09-02T16:30 (+3)

Yeah, I was thinking of that post! Possibly the title to this post shouldn't even include 'should', but instead 'EAs can, if they want to...'

But then again, although I anticipate this mainly being done by people who feel some intrinsic motivation, maybe I do think that it's something the EA community "should" do more of?

I think it wouldn't be a bad idea for EA orgs to do some of this, though as ColdButtonIssues said above, it might be a good idea to avoid doing it for extremely divisive issues. 

 

Stewed_Walrus @ 2022-09-01T21:17 (+5)

Completely agree. 

I'd also add that there are plenty of situations where individuals are required to disburse funds which have already been earmarked for spending on a particular cause. For example, many large businesses have staff "LGBT+ inclusion committees" and provide funds which these committees are expected to donate to LGBT+ charities.

In this situation, whether or not charities focussed on improving the lives of LGBT+ people are more or less effective than others is irrelevant. The funds are earmarked for that purpose, and so the task of the committee members is to determine which of the charities that fall under their remit are most impactful. 

If discussion relating to how to do so is not welcome and encouraged in EA spaces then there is a risk that such evaluation won't take place at all.

Amber Dawn @ 2022-09-02T16:21 (+1)

This is a good point, thanks!

Robin @ 2022-09-01T22:14 (+4)

Strong agree, and part of this is just that EAs should be more modest about how much their assessments of sector impact outperform other people's. In the long term, weird second-order social impacts of interventions matter a lot more than the direct impact. For example, the (disputed) effects of abortion on crime rates https://www.sciencedirect.com/science/article/pii/S0047272721001043?via%3Dihub and female employment/social engagement https://link.springer.com/article/10.1007/s12122-004-1028-3 may create social spirals that continue long after the medical harm of the pregnancy and therefore considerably bump abortion rights up the virtual list of longtermist goals, but these effects are very hard to assess in simple models.

Amber Dawn @ 2022-09-02T16:22 (+1)

Thanks Robin! Very good points.

Karthik Tadepalli @ 2022-09-02T14:40 (+2)

I once had this argument with my friend, who convinced me against this position with the problem of redirecting resources. In theory, some resources are locked into a cause area (e.g. donors are committed to abortion) and some are not (donors are willing to change cause areas). Finding the best giving area within a cause will increase the efficiency of resources that are locked into that cause, but it will also encourage some amount of redirection. IIRC, when GiveDirectly introduced cash transfers for the US, their normal arms actually lost donations despite donations overall being way up during COVID. That's an example that demonstrates the worry that people will direct their money to less important areas if you give them a winning donation opportunity within that area.

Amber Dawn @ 2022-09-02T16:35 (+1)

Hmm yes, that's interesting! I'd be interested to know how much this happens.