What are the key ongoing debates in EA?

By richard_ngo @ 2020-03-08T16:12 (+74)

Specifically, I'm interested in cases where people who are heavily involved in effective altruism both disagree about a question, and also currently put non-negligible effort into debating the issue.

One example would be the recent EA Forum post "Growth and the case against randomista development".

Anecdotal or non-public examples welcome.


Ardenlk @ 2020-03-09T09:32 (+72)

I'm excited to read any list you come up with at the end of this!

Some I thought of:

richard_ngo @ 2020-03-15T15:08 (+46)

Thanks for the list! As a follow-up, I'll try to list places online where such debates have occurred for each entry:

1. https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1

2. Toby Ord has estimates in The Precipice. I assume most discussion occurs on specific risks.

3. Lots of discussion on this; summary here: https://forum.effectivealtruism.org/posts/7uJcBNZhinomKtH9p/giving-now-vs-later-a-summary . Also more recently https://forum.effectivealtruism.org/posts/amdReARfSvgf5PpKK/phil-trammell-philanthropy-timing-and-the-hinge-of-history

4. Best discussion of this is probably here: https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like

5. Most stuff on https://longtermrisk.org/ addresses s-risks. In terms of pushback, Carl Shulman wrote http://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html and Toby Ord wrote http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/ (although I don't find either compelling). Also a lot of Simon Knutsson's stuff, e.g. https://www.simonknutsson.com/thoughts-on-ords-why-im-not-a-negative-utilitarian

6a. https://forum.effectivealtruism.org/posts/LxmJJobC6DEneYSWB/effects-of-anti-aging-research-on-the-long-term-future , https://forum.effectivealtruism.org/posts/jYMdWskbrTWFXG6dH/a-general-framework-for-evaluating-aging-research-part-1

6b. https://forum.effectivealtruism.org/posts/W5AGTHm4pTd6TeEP3/should-longtermists-mostly-think-about-animals , https://forum.effectivealtruism.org/posts/ndvcrHfvay7sKjJGn/human-and-animal-interventions-the-long-term-view

6c. https://forum.effectivealtruism.org/posts/xh37hSqw287ufDbQ7/existential-risk-and-economic-growth-1

7. Nothing particularly comes to mind, although I assume there's stuff out there.

8. https://80000hours.org/2020/02/anonymous-answers-effective-altruism-community-and-growth/

9. E.g. here, which also links to more discussions: https://forum.effectivealtruism.org/posts/NLJpMEST6pJhyq99S/notes-could-climate-change-make-earth-uninhabitable-for

Louis_Dixon @ 2020-03-30T11:37 (+1)

Re: 9 - I wrote this back in April 2019. There have been more recent comments from Will in his AMA, and Toby in this EA Global talk (link with timestamp).

Khorton @ 2020-03-08T17:38 (+55)

Should EA be large and welcoming or small and weird? Related: How important is it for EAs to follow regular social norms? How important is diversity and inclusion in the EA community?

To what extent should EA get involved in politics or push for political change?

John_Maxwell @ 2020-03-12T04:24 (+29)

I just want to note that in principle, large & weird or small & welcoming movements are both possible. 60s counterculture was a large & weird movement. Quakers are a small & welcoming movement. (If you want to be small & welcoming, I guess it helps to not advertise yourself very much.)

I think you are right that there's a debate around whether EA should be sanitized for a mass audience (by not betting on pandemics, or whatever). But, e.g., this post mentions that caution around growth could be good because growth is hard to reverse; I don't see advocacy for weirdness as such.

Evan_Gaensbauer @ 2020-03-18T07:02 (+5)

Whether effective altruism should be sanitized seems like an issue separate from how big the movement can or should grow. I'm also not sure questions of sanitization should be reduced to a choice between doing weird things openly and not doing them at all. That framing ignores the possibility that something can be changed to be less 'weird', as has been done with AI alignment or, to a lesser extent, wild animal welfare. Someone could figure out how to make it so that betting on pandemics (or whatever) can be done without it becoming a liability for the reputation of effective altruism.

Linch @ 2020-03-10T08:20 (+13)

See also: https://80000hours.org/2020/02/anonymous-answers-effective-altruism-community-and-growth/

vaidehi_agarwalla @ 2020-03-09T09:56 (+7)

Expanding on those points:

- Should EA be small and elite (i.e. to influence important/powerful actors) or broad and welcoming?
- How many people should earn to give, and how effective is this on the margin? (Maybe not a huge debate, but a lot of uncertainty.)
- How much (if at all) should we grow EA in non-Western countries? (I think there's a fair deal of ignorance on this topic overall.)

Related to D&I: How important is academic diversity in EA? And what blindspots does the EA movement have as a result?

I don't think all of these have always been publicly discussed, but there is definitely a lack of consensus and differing views.

willbradshaw @ 2020-03-09T12:34 (+2)

What does "academic diversity" mean? I could imagine a few possible interpretations.

vaidehi_agarwalla @ 2020-03-09T23:30 (+4)

Getting people from non-STEM backgrounds, specifically non-econ social sciences and humanities.

technicalities @ 2020-03-09T15:00 (+1)

I read it as 'getting some people who aren't economists, philosophers, or computer scientists'. (:

(Speaking as a philosophy+economics grad and a sort-of computer scientist.)

willbradshaw @ 2020-03-09T19:00 (+3)

I think there's quite a large diversity in what people in EA did in undergrad / grad school. There are plenty of medics and a small but nontrivial number of biologists around, for example.

What they wish they'd done at university, or what they're studying now, might be another matter.

Khorton @ 2020-03-08T23:47 (+4)

Along the same lines of community health and movement growth, in what situations should individuals censor their views or expect to be censored by someone else (e.g. a Forum moderator or Facebook group admin)?

Linch @ 2020-03-10T08:18 (+32)

Among long-termist EAs, I think there's a lot of healthy disagreement about the value-loading (what utilitarianism.net calls "theories of welfare") within utilitarianism. I.e., should we aim to maximize positive sentient experiences, should we aim to minimize negative sentient experiences, or should we focus on the complexity of value and assume that the value loading may be very complicated and/or include things like justice, honor, nature, etc.?

My impression is that the Oxford crowd (like Will MacAskill and the FHI people) are most gung ho about the total view and the simplicity needed to say "pleasure good, suffering bad". It helps that past thinkers with this normative position have a solid track record.

I think Brian Tomasik has a lot of followers in continental Europe, and a reasonable fraction of them are in the negative(-leaning) crowd. Their pitch is something like "in most normal non-convoluted circumstances, no amount of pleasure or other positive moral goods can justify a single instance of truly extreme suffering."

My vague understanding is that Bay Area rationalist EAs (especially people in the MIRI camp) generally believe strongly in the complexity of value. A simple version of their pitch might be something like "if you could push a pleasure button to wirehead yourself forever, would you do it? If not, why are you so confident about it being the right recourse for humanity?"

Of the three views, I get the impression that the "Oxford view" gets presented the most for various reasons, including that they are the best at PR, especially in English-speaking countries.

In general, a lot of EAs in all three camps believe something like "morality is hard, man, and we should try to avoid locking in any definitive normative results until after the singularity." This may also entail a period of time (maybe thousands of years) on Earth to think through things, possibly with the help of AGI or other technologies, before we commit to spreading throughout the stars.

I broadly agree with this stance, though I suspect the reflection is going to be mostly used by our better and wiser selves on settling details/nuances within total (mostly hedonic) utilitarianism rather than discover (or select) some majorly different normative theory.

Matthew_Barnett @ 2020-03-13T08:10 (+5)

"I suspect the reflection is going to be mostly used by our better and wiser selves on settling details/nuances within total (mostly hedonic) utilitarianism rather than discover (or select) some majorly different normative theory."

Is this a prediction, or is this what you want? If it's a prediction, I'd love to hear your reasons why you think this would happen.

My own prediction is that this won't happen. But I'd be happy to see some reasons why I am wrong.

MichaelStJules @ 2020-03-08T18:59 (+26)

Normative ethics, especially population ethics, as well as the case for longtermism (which is somewhere between normative and applied ethics, I guess). Even the Global Priorities Institute has research defending asymmetries and against longtermism. Also, hedonism vs preference satisfaction or other values, and the complexity of value.

Consciousness and philosophy of mind, for example on functionalism/computationalism and higher-order theories. This could have important implications for nonhuman animals and artificial sentience. I'm not sure how much debate there is these days, though.

willbradshaw @ 2020-03-09T19:11 (+4)

You mention you're not sure how much debate there is around consciousness these days. Surprisingly, I'd say the same is increasingly true of normative ethics.

There's still a lot of disagreement about value systems, but most people seem to have stopped having that particular argument, at least as regards total vs negative utilitarianism (which I'd say was the biggest such debate going on a few years ago).

algekalipso @ 2020-03-11T01:51 (+25)

Whether avoiding *extreme suffering* such as cluster headaches, migraines, kidney stones, CRPS, etc. is an important, tractable, and neglected cause. I personally think that, due to the long-tails of pleasure and pain and how cheap the interventions would be, focusing our efforts on e.g. enabling cluster headache sufferers to access DMT would prevent *astronomical amounts of suffering* at extremely low cost.

The key bottleneck here might be people's ignorance of just *how bad* these kinds of suffering are. I recommend reading the "long-tails of pleasure and pain" article linked above to get a sense of why this is a reasonable interpretation of the situation.

Stefan_Schubert @ 2020-03-09T10:25 (+21)

Whether we're living at the most influential time in history, and associated issues (such as the probability of an existential catastrophe this century).

willbradshaw @ 2020-03-09T19:07 (+18)

I think quite a few people here are interpreting this question to be one of either

"What is the issue about which I personally disagree with what I perceive to be EA orthodoxy?"

or

"What seemingly-EA-relevant issues am I personally most confused/uncertain about?"

Either of these could be a good question to answer, but not necessarily here (though the second seems like a better substitution than the first).

RomeoStevens @ 2020-03-09T00:57 (+13)

Whether or not EA has ossified in its philosophical positions and organizational ontologies.

OllieBase @ 2020-03-11T10:11 (+8)

Could you spell out what this means? I'd guess that most people (myself included) aren't familiar with ossification and organizational ontologies.

willbradshaw @ 2020-03-11T20:08 (+6)

I suspect this may be evidence in itself that this is not currently a key ongoing debate in EA.

RomeoStevens @ 2020-03-12T06:47 (+7)

Ah, key = popular; I guess I can simplify my vocabulary. I'm being somewhat snarky here, but afaict it satisfies the criterion that significant effort has gone into debating this.

technicalities @ 2020-03-08T17:00 (+13)

I've had a few arguments about the 'worm wars': whether the bet on deworming kids, which was uncertain from the start, is undermined by the new evidence.

My interlocutor is very concerned about model error in cost-benefit analysis and about avoiding side effects ('double effect' in particular), and not just for the usual PR or future-credibility reasons.

Linch @ 2020-03-08T21:53 (+4)

What's the new evidence? I haven't been keeping up with the worm wars since 2017. Is there more conclusive data or studies since?

alexrjl @ 2020-03-12T15:49 (+10)

I looked into worms a bunch for the WASH post I recently made. Miguel and Kremer's study has a currently unpublished 15-year follow-up which, according to GiveWell, has similar results to the 10-year follow-up. Other than that, the evidence of the last couple of years (including a new meta-study in September 2019 from Taylor-Robinson et al.) has continued to point towards there being almost no effects of deworming on weight, height, cognition, school performance, or mortality. This hasn't really caused anyone to update, because this is the same picture as in 2016/17. My WASH piece had almost no response, which might suggest that people just aren't too bothered by worms any more, though it could equally be something unrelated, like style.

I think there's a reasonable case to be made that discussion and interest around worms is dropping, though, as people for whom the "low probability of a big success" reasoning is convincing seem likely either to be long-termists or to have updated towards growth-based interventions.

technicalities @ 2020-03-09T13:28 (+1)

Not sure. 2017 fits the beginning of the discussion though.

Linch @ 2020-03-12T06:29 (+2)

I thought most of the fights around the worm wars were in 2015 [1]? I really haven't been following.

[1] https://chrisblattman.com/2015/07/24/the-10-things-i-learned-in-the-trenches-of-the-worm-wars/

eFish @ 2020-04-02T15:53 (+12)

One such debate is how (un)important doing "AI safety" now is. See, for example, Lukas Gloor's Altruists Should Prioritize Artificial Intelligence (from the Center on Long-Term Risk, previously known as the Foundational Research Institute) and Magnus Vinding's "point-by-point critique" of Gloor's essay in Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique.

Jamie_Harris @ 2020-03-15T19:08 (+10)

"Assuming longtermism, are "broad" or "narrow" approaches to improving the value of the long-term future more promising?"

This is mostly just a broadening of one of Arden's suggestions: "Do anti-aging research, animal welfare work, and/or economic growth speedups have positive very long term benefits in expectation?" Not sure how widely debated this still is, but examples include 1, 2, and 3.

Partly relatedly, I find Sentience Institute's "Summary of Evidence for Foundational Questions in Effective Animal Advocacy" a really helpful resource for keeping track of the most important evidence and arguments on important questions, and I've wondered whether a comparable resource would be helpful for the effective altruism community more widely.

Nathan Young @ 2020-05-28T10:37 (+6)

I think the answers here would be better if they were split up into points. That way we could vote on each separately and the best would come to the top.

Milan_Griffes @ 2020-03-09T05:01 (+5)

Whether or not psychedelics are an EA cause area.

Psychedelics posts on the Forum in 2019:

Khorton @ 2020-03-09T08:50 (+23)

(I don't think this is considered a debate by most people - my read is that less than 5% of people involved with EA consider psychedelics a plausible EA cause area, possibly less than 1%)

John_Maxwell @ 2020-03-12T04:11 (+9)

"View X is a rare/unusual view, and therefore it's not a debate." That seems a little... condescending or something?

How are we ever supposed to learn anything new if we don't debate rare/unusual views?

willbradshaw @ 2020-03-12T10:10 (+18)

I simultaneously have some sympathy for this view and think that people responding to this question by pushing their pet cause areas aren't engaging well with the question as I understand it.

For example, I think that anti-ageing research is probably significantly underrated by EAs in general and would happily push for it in a question like "what cause areas are underrated by EAs?", but would not reference it here (and have not) as a "key ongoing debate in EA", because I recognise that many people who aren't already convinced wouldn't consider it such.

So one criterion I might use would be whether disputants on both sides would consider the debate to be key.

I also agree with point (2) of Khorton's response to this.

willbradshaw @ 2020-03-12T12:53 (+12)

Thinking about this more, I suspect a lot of people would agree that some more general statement, like "What important cause areas is EA missing out on?" is a key ongoing debate, while being sceptical about most specific claimants to that status (because if most people weren't sceptical, EA wouldn't be missing out on that cause area).

Khorton @ 2020-03-12T09:00 (+8)

I think this is two different things:

  1. Yes, I was being a bit condescending, sorry.
  2. I wasn't trying to say what should be a debate; I was trying to lend accuracy to the discussion of what is a key debate in the EA community.
John_Maxwell @ 2020-03-13T06:51 (+6)

Apology accepted, thanks. I agree on point 2.

willbradshaw @ 2020-03-09T12:36 (+5)

I definitely don't think it would generally be considered a key debate.

Milan_Griffes @ 2020-03-09T18:10 (+2)

I think it's closely related to key theoretical debates, e.g. Romeo's answer and Khorton's answer on this thread.

Milan_Griffes @ 2020-03-09T18:12 (+2)

fwiw my read on that is ~15-35%, but we run in different circles

Buck @ 2020-03-10T04:13 (+14)

I'm interested in betting about whether 20% of EAs think psychedelics are a plausible top EA cause area. Eg we could sample 20 EAs from some group and ask them. Perhaps we could ask random attendees from last year's EAG. Or we could do a poll in EA Hangout.

Linch @ 2020-03-10T08:03 (+19)

We may need to operationalize "top EA cause area" more precisely, but I would concur with Buck, and would also bet money, that fewer than 20% of a reasonable random sample of EAs would answer a question like "in 2025, will psychedelics normalization be a top 5 priority for EAs?" in the affirmative.