Who is working on finding "Cause X"?

By Milan_Griffes @ 2019-04-10T23:09 (+19)

As a community, EA sometimes talks about finding "Cause X" (example 1, example 2).

The search for "Cause X" featured prominently in the billing for last year's EA Global (a).

I understand "Cause X" to mean "new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar."

This afternoon, I realized I don't really know how many people in EA are actively pursuing the "search for Cause X." (I thought of a couple of people, who I'll note in comments to this thread. But my map feels very incomplete.)


Emanuele_Ascani @ 2019-04-14T08:52 (+15)

In my understanding, "Cause X" is something we almost take for granted today, but that people in the future will see as a moral catastrophe (similar to how we see slavery today, versus how people in the past saw it). So it has a bit more nuance than just being a "new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar".

I think there are many candidates that seem to be overlooked by the majority of society. You could also argue that none of these is a real Cause X, since they are still recognised as problems by a large number of people. But this could just be the baseline of "recognition" a neglected moral problem will start from in a world as interconnected as ours. Here's what comes to mind:

Cause areas that I think don't fit the definition above:

But who is working on finding Cause X? I believe you could argue that every organisation devoted to finding new potential cause areas is. You could probably argue that moral philosophers, or even just thoughtful people, have a chance of recognising it. I'm not sure if there is a project or organisation devoted specifically to this task, but judging from the other answers here, probably not.

Milan_Griffes @ 2019-04-14T16:36 (+4)
I believe you could argue that every organisation devoted to finding new potential cause areas is.

What organizations do you have in mind?

Emanuele_Ascani @ 2019-04-14T21:07 (+5)

Open Philanthropy, GiveWell, and Rethink Priorities probably qualify. To clarify: my phrasing didn't mean "devoted exclusively to finding new potential cause areas".

technicalities @ 2019-04-11T19:59 (+14)

One great example is the pain gap / access abyss. The term was only coined around 2017; it got some attention at EA Global London 2017 (?), and then OPIS stepped up. I don't think the OPIS staff were doing a cause-neutral search for this (they were founded in 2016) so much as it was independent convergence.

Khorton @ 2019-04-11T20:30 (+3)

Their website suggests it wasn't independent.

'The primary issue for OPIS is the ethical imperative to reduce suffering. Linked to the effective altruism movement, they choose causes that are most likely to produce the largest impact, determined by what Leighton calls “a clear underlying philosophy which is suffering-focused”.'

badbadnotgood @ 2019-04-12T20:01 (+3)

I may be wrong, but I remember reading an EA profile report and seeing Leighton comment that the report inspired OPIS's move toward working on the problem.

Milan_Griffes @ 2019-04-10T23:10 (+13)

Michael Plant's cause profile on mental health seems like a plausible Cause X.

Denkenberger @ 2019-04-13T06:48 (+12)

I think alternate foods for catastrophes like nuclear winter are a Cause X (disclaimer, co-founder of ALLFED).

Milan_Griffes @ 2019-04-13T17:17 (+3)

Thanks!

Very curious why this was downvoted. (This idea has been floated before, e.g. on the 80,000 Hours podcast, and seems like a plausible Cause X.)

Milan_Griffes @ 2019-04-10T23:11 (+11)

Wild-animal-suffering research seems like a plausible Cause X.

Milan_Griffes @ 2019-04-10T23:11 (+10)

Founders Pledge's cause report on climate change seems like a plausible Cause X.

Denkenberger @ 2019-04-20T22:17 (+9)

I think working on preventing the collapse of civilization, given a loss of electricity/industry due to extreme solar storms, high-altitude electromagnetic pulses, or a narrow-AI computer virus, is a Cause X (disclaimer, co-founder of ALLFED).

Evan_Gaensbauer @ 2019-04-17T01:33 (+8)

I've always thought of "Cause X" as a theme for events like EAG that's meant to prompt thinking in EA, not as something ever intended to be taken seriously and literally in actual EA action. If it was intended to be that, I don't think it ever should have been, and I don't think it should be treated as such either. I don't see how it makes sense to anyone as a practical pursuit.

There have been some cause prioritization efforts that took 'Cause X' seriously. Yet with x-risk reduction present in EA as a top priority, the #1 question has been to verify the validity and soundness of the fundamental assumptions underlying x-risk reduction as the top global priority. That's because, due to its nature, whether x-risk is or isn't the top priority is basically binary, depending on the overall soundness of the fundamental assumptions behind it. For prioritizers willing to work within the premise that those assumptions are all true, cause prioritization has focused on how actors should be working on x-risk reduction.

Since the question was reformulated as "Is x-risk reduction Cause X?", much cause prioritization research has been reduced to research on questions in relevant areas of still-great uncertainty (e.g., population ethics and other moral philosophy, forecasting, etc.). As far as I'm aware, no other cause pri efforts have been predicated on the theme of 'finding Cause X.'

In general, I've never thought it made much sense. Any cause that has gained traction in EA already entails a partial answer to that question, along some common lines that arguably define what EA is.

While they're disparate, all the causes in EA combine some form of practical aggregate consequentialism with global-scale interventions to impact the well-being of as large a population as feasible, within whatever other constraints one is working with. This is true of the initial cause areas EA prioritized: global poverty alleviation; farm animal welfare; and AI alignment. Other causes, like public policy reform, life extension, mental health interventions, wild animal welfare, and other existential risks, all fit with this framework.

It's taken for granted in EA conversations, but there are shared assumptions that go into this common perspective that distinguish EA from other efforts to do good. If someone disagrees with that framework, and has different fundamental assumptions about what is important, then they naturally sort themselves into different kinds of extant movements that better align with their perspective, such as more overtly political movements. In essence, what separates EA from any other movement, in terms of how any of us, and other private individuals, choose in which socially conscious community to spend our own time, is the different assumptions we make in trying to answer the question: 'What is Cause X?'

They're not brought to attention much, but there are sources outlining what the 'fundamental assumptions' of EA are (what are typically called 'EA values'), which I can provide upon request. Within EA, I think pursuing what someone thinks Cause X is takes one of the following forms:

1. If one is confident one's current priority is the best available option one can realistically impact within the EA framework, working on it directly makes sense. Examples include any EA-aligned organization permanently dedicated to work in one or more specific causes, and efforts to support them.

2. If one is confident one's current priority is the best available option, but one needs more evidence to convincingly justify it as a plausible top priority in EA, or doesn't know how individuals can do work to realistically have an impact on the cause, doing research to figure that out makes sense. An example of this kind of work is the research Rethink Priorities is undertaking to identify crucial evidence underpinning fundamental assumptions in causes like wild animal welfare.

3. If one is confident the best available option one will identify is within the EA framework, but has little to no confidence in what those options will be, it makes sense to do very fundamental research that intellectually explores the principles of effective altruism. An example of this kind of work in EA is that of the Global Priorities Institute.

Milan_Griffes @ 2019-04-17T17:50 (+4)
As far as I'm aware, no other cause pri efforts have been predicated on the theme of 'finding Cause X.'

https://www.openphilanthropy.org/research/cause-reports

Milan_Griffes @ 2019-04-17T17:48 (+4)
I don't see how it makes sense to anyone as a practical pursuit.

GiveWell & Open Phil have at times undertaken systematic reviews of plausible cause areas; their general framework for this seems quite practical.

That's because, due to its nature, whether x-risk is or isn't the top priority is basically binary, depending on the overall soundness of the fundamental assumptions behind it.

Pretty strongly disagree with this. I think there's a strong case for x-risk being a priority cause area, but I don't think it dominates all other contenders. (More on this here.)

Evan_Gaensbauer @ 2019-04-19T04:32 (+4)

The concerns you raise in your linked post are actually concerns a lot of the people I have in mind have cited for why they don't currently prioritize AI alignment, existential risk reduction, or the long-term future. Most EAs I've talked to who don't share those priorities say they'd be open to shifting their priorities in that direction in the future, but currently they have unresolved issues with the level of uncertainty and speculation in these fields. Notably, EA is now focusing more and more effort on the sources of those unresolved concerns with existential risk reduction, such as our demonstrated ability to predict the long-term future. That work is only beginning, though.

Evan_Gaensbauer @ 2019-04-19T04:27 (+4)

GiveWell's and Open Phil's work wasn't termed 'Cause X,' but I think a lot of the stuff you're pointing to would have started before 'Cause X' was a common term in EA. They definitely qualify. One thing to note is that GiveWell and Open Phil are much bigger organizations than most in EA, so they are unusually able to pursue these things. So my contention that this kind of research is impractical for most organizations to do still holds up. It may be falsified in the near future, though. Aside from GiveWell and Open Phil, the organizations that can permanently focus on cause prioritization are:

  • institutes at public universities with large endowments, like the Future of Humanity Institute and the Global Priorities Institute at Oxford University.
  • small, private non-profit organizations like Rethink Priorities.

Honestly, I am impressed and pleasantly surprised that organizations like Rethink Priorities can go from a small team to a growing organization in EA. Cause prioritization is such a niche cause, unique to EA, that I didn't know if there was hope for it to keep growing sustainably. So far, the growth of the field has proven sustainable. I hope it keeps up.

Ramiro @ 2019-04-15T22:19 (+6)

This is not a solution/answer, but someone should design a clever way for us to be constantly searching for Cause X. I think a general contest could help, such as an "Effective Thesis Prize" to reward good work aligned with EA goals; perhaps Cause X could be the aim of a contest of its own.

Milan_Griffes @ 2019-04-13T00:29 (+6)

The Qualia Research Institute is a good generator of hypotheses for Cause X candidates. Here's a recent example (a).

Halffull @ 2019-04-12T12:53 (+6)

Rethink Priorities seems to be the obvious organization focused on this.

Milan_Griffes @ 2019-04-12T17:02 (+8)

From their website:

Right now, our research agenda is primarily focused on:

  • prioritization and research work within interventions aimed at nonhuman animals (as research progress here looks uniquely tractable compared to other cause areas)
  • understanding EA movement growth by running the EA Survey and assisting LEAN and SHIC in gathering evidence about EA movement building (as research here looks tractable and neglected)

Sounds like they're currently focused on new animal welfare & community-building interventions, rather than finding an entirely different cause area.

Peter_Hurford @ 2019-04-14T23:29 (+14)

We're also working on understanding invertebrate sentience and wild animal welfare - maybe not "cause X" because other EAs are aware of this cause already, but I think it will help unlock important new interventions.

Additionally, we're doing some analysis of nuclear war scenarios and paths toward non-proliferation. I think this is understudied in EA, though again maybe not "cause X" because EAs are already aware of it.

Lastly, we're also working on examining ballot initiatives and other political methods of achieving EA aims - maybe not cause X because it isn't a new cause area, but I think it will help unlock important new ways of achieving progress on our existing causes.

Milan_Griffes @ 2019-04-15T17:20 (+2)

Thanks!

Is there a public-facing prioritized list of Rethink Priorities projects? (Just curious)

Peter_Hurford @ 2019-04-15T21:03 (+5)

Right now everything I mentioned is in https://forum.effectivealtruism.org/posts/6cgRR6fMyrC4cG3m2/rethink-priorities-plans-for-2019

We're working on writing up an update.

kbog @ 2019-04-11T20:27 (+6)

Between this, some ideas about AI x-risk and progress, and the unique position of the EA community, I'm beginning to think that "move Silicon Valley to cooperate with the US government and defense on AI technology" is Cause X. I intend to post something substantial in the future.

Peter_Hurford @ 2019-04-11T06:52 (+5)

Me.

anonymous_ea @ 2019-04-12T17:27 (+14)

Can you expand on this answer? E.g. how much this is a focus for you, how long you've been doing this, how long you expect to continue doing this, etc.

Peter_Hurford @ 2019-04-14T23:30 (+6)

I'd refer you to the comments of https://forum.effectivealtruism.org/posts/AChFG9AiNKkpr3Z3e/who-is-working-on-finding-cause-x#Jp9J9fKkJKsWkjmcj

anonymous_ea @ 2019-04-15T17:25 (+1)

The link didn't work properly for me. Did you mean the following comment?

We're also working on understanding invertebrate sentience and wild animal welfare - maybe not "cause X" because other EAs are aware of this cause already, but I think it will help unlock important new interventions.
Additionally, we're doing some analysis of nuclear war scenarios and paths toward non-proliferation. I think this is understudied in EA, though again maybe not "cause X" because EAs are already aware of it.
Lastly, we're also working on examining ballot initiatives and other political methods of achieving EA aims - maybe not cause X because it isn't a new cause area, but I think it will help unlock important new ways of achieving progress on our existing causes.

Peter_Hurford @ 2019-04-15T21:02 (+3)

Yep :)

aarongertler @ 2019-04-12T10:18 (+4)

GiveWell is searching for cost-competitive causes in many different areas (see the "investigating opportunities" table).

Milan_Griffes @ 2019-04-12T17:07 (+2)

Good point. Plausibly this is Cause X research (especially if they team up with Mark Lutter & co.); I'll be curious to see how far outside their traditional remit they go.

agdfoster @ 2019-04-15T21:07 (+3)

Arguably it was the philosophers who found the last few. Once the missing moral reasoning was shored up, the cause-area conclusion was pretty deductive.