Talking about longtermism isn't very important

By cb @ 2025-10-20T13:15 (+36)

Epistemic status: quickly written, rehashing (reheating?) old, old takes. Also, written in a grumpier voice than I’d endorse (I got rained on this morning).

Some essays on longtermism came out recently! Perhaps you noticed. I overall think these essays were just fine[1], and that we should all talk less about longtermism.

In which I talk about longtermism

(In what follows, I’ll take “longtermism” as shorthand for: “the effects of our actions on the long-term future should be a key moral priority.”)

Critics often have two[2] broad kinds of objections to longtermism:

  1. It's too revisionary or radical in its implications
  2. It’s not action-guiding; it’s irrelevant in its implications

Here I’ll say a bit more on (2). Specifically, I’m going to argue that (A) longtermism isn’t necessary to motivate most high-priority work, and that (B) for the work longtermism might be necessary to motivate, talking about object-level features of the world[3] is more useful than debating the abstract framework. Given this, I think we should all talk less about longtermism.

Longtermism doesn’t distinctively motivate much work

Okay, so what does longtermism distinctively motivate? Some notes.

Longtermists act like normal people, mostly[4]

What about work that seemingly does need longtermism?

As I've discussed above, work on reducing existential risk does not need longtermism to motivate it. However, Better Futures-style work aimed at improving the value of the far future seems to me like it probably does need longtermism in order to count as a key moral priority.[5]

I think that even here, it's more useful to talk about specific features of the world rather than to continue debating whether longtermism is true in general. Concretely, I'm most excited about work that tries to identify the actions one should take if one finds better-futures-style reasoning very compelling, bearing in mind the difficulty of predicting or influencing the future. (See this comment for similar thoughts.)

Some concrete recommendations

So what should we do instead of debating longtermism? Some ideas.

  1. ^

     Here’s a footnote in a cranky voice (apologies). I was pretty underwhelmed with the Essays on Longtermism collection. I broadly agree with Oscar that the articles were either (1) reprints of classic essays which had some relationship to longtermism, or (2) new work, which mostly didn't seem to succeed at being novel and plausibly true and important.

    I guess more accurately, I thought the essay collection was just fine, looked good by academic standards, was probably a decent idea ex ante, and that there is nothing very interesting to say about the essays. So mostly I'm like "hm, I don't really get why there was an EAF contest to write about them".

    I think the collection can still have some academic value, e.g. by:

    • Making it higher status (and better for your academic career) to discuss longtermism-related ideas
    • Collecting some classic foundational essays (and again, making it easier to cite them in academic work)
    • Broadening the base of support for longtermism, or assessing how robust longtermism is to different moral views (e.g. deontological perspectives, contractualism)

    My overall gripe is: longtermism doesn't seem very important. I think it would have been better to collect essays on a particular intervention longtermists are often interested in, rather than about an axiological claim which (I argue) doesn't really matter for prioritisation.

  2. ^

     Setting aside objections of the form “it’s false, but for some reason other than being too revisionary”.

  3. ^

     E.g., discussing reasons to think this problem in particular must be dealt with now, rather than delegated to future, wiser people to solve; arguing why some actions will likely have persistent, predictable, and robustly good effects.

  4. ^

     They look just like me and you! Your friends, colleagues, and neighbours may even be longtermists…

  5. ^

     I'm not making the claim that BF-style work definitely will need longtermism to be motivated. My impression is that lots of the interventions recommended by this work are still quite abstract and general, and I think it's possible that as we drill down into the details and look more for actions with predictable, persistent, robustly good effects, the kinds of actions that a BF-style longtermist will recommend might look very similar to the kinds of actions that non-longtermists recommend. (E.g.: strengthening institutions, reducing the risks of concentration of power, generally preserving optionality beyond just non-extinction optionality.) However, my current guess is that there will be some things that BF-style researchers are excited about, for which you basically do need to be a longtermist in order to consider them key moral priorities.


Elliott Thornley (EJT) @ 2025-10-20T14:06 (+19)

Whether longtermism is a crux will depend on what we mean by 'long,' but I think concern for future people is a crux for x-risk reduction. If future people don't matter, then working on global health or animal welfare is the more effective way to improve the world. The more optimistic of the calculations that Carl and I do suggest that, by funding x-risk reduction, we can save a present person's life for about $9,000 in expectation. But we could save about 2 present people if we spent that money on malaria prevention, or we could mitigate the suffering of about 12.6 million shrimp if we donated to SWP.
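
For concreteness, here is the arithmetic implied by those figures as a quick back-of-the-envelope sketch. The inputs are just the numbers quoted in the comment above; the derived per-unit costs are my own working, not claims from the comment itself.

```python
# Back-of-the-envelope comparison using only the figures quoted above.
budget = 9_000                  # USD: quoted expected cost to save one present life via x-risk funding

lives_via_xrisk = 1             # by construction of the $9,000 figure
lives_via_malaria = 2           # quoted: ~2 present lives via malaria prevention for the same money
shrimp_via_swp = 12_600_000     # quoted: ~12.6 million shrimp helped via SWP for the same money

print(f"Implied cost per life via malaria prevention: ${budget / lives_via_malaria:,.0f}")  # ~$4,500
print(f"Implied cost per shrimp helped via SWP: ${budget / shrimp_via_swp:.5f}")            # ~$0.00071
```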

Matrice Jacobine @ 2025-11-08T15:52 (+3)

This seems clearly wrong. If you believe that it would take a literal Manhattan project for AI safety ($26 billion adjusting for inflation) to reduce existential risk by a mere 1% and only care about the current 8 billion people dying, then you can save a present person's life for $325, swamping any GiveWell-recommended charity.
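
Spelling out that calculation as a minimal sketch, using only the assumptions stated in the comment:

```python
# All inputs below are the comment's assumptions, not established estimates.
project_cost = 26e9            # USD: "Manhattan project for AI safety", inflation-adjusted
risk_reduction = 0.01          # assumed 1% absolute reduction in existential risk
current_population = 8e9       # present people whose deaths are being counted

expected_lives_saved = risk_reduction * current_population  # 80 million lives in expectation
cost_per_life = project_cost / expected_lives_saved
print(f"Cost per expected life saved: ${cost_per_life:,.0f}")  # $325
```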

cb @ 2025-10-20T14:27 (+3)

Whether longtermism is a crux will depend on what we mean by 'long'

Yep, I was being imprecise. I think the most plausible (and actually believed-in) alternative to longtermism isn't "no care at all for future people", but "some >0 discount rate", and I think xrisk reduction will tend to look good under small >0 discount rates.

I do also agree that there are some combinations of social discount rate and cost-effectiveness of longtermism, such that xrisk reduction isn't competitive with other ways of saving lives. I don't yet think this is clearly the case, even given the numbers in your paper — afaik the amount of existential risk reduction you predicted was pretty vibes-based, so I don't really take the cost-effectiveness calculation it produces seriously. (And I haven't done the math myself on discount rates and cost-effectiveness.)

Even if xrisk reduction doesn't look competitive with e.g. donating to AMF, I think it would be pretty reasonable for some people to spend more time thinking about it to figure out if they could identify more cost-effective interventions. (And especially if they seemed like poor fits for E2G or direct work.)

Elliott Thornley (EJT) @ 2025-10-20T20:15 (+4)

Makes sense! Unfortunately any x-risk cost-effectiveness calculation has to be a little vibes-based because one of the factors is 'By how much would this intervention reduce x-risk?', and there's little evidence to guide these estimates.

calebp @ 2025-10-20T20:24 (+14)

A few scattered points that make me think this post is directionally wrong, whilst also feeling meh about the forum competition and essays:

cb @ 2025-10-21T17:08 (+3)

Thanks for commenting!

I've tried to spell out my position more clearly, so we can see if/where we disagree. I think:

  • Most discussion of longtermism, on the level of generality/abstraction of "is longtermism true?", "does X moral viewpoint support longtermism?", "should longtermists care about cause area X?" is not particularly useful, and is currently oversupplied.
  • Similarly, discussions on the level of abstraction of "acausal trade is a thing longtermists should think about" are rarely useful.
  • I agree that concrete discussions aimed at "should we take action on X" are fairly useful. I'm a bit worried that anchoring too hard on longtermism lends itself to discussing philosophy, and especially discussing philosophy on the level of "what axiological claims are true", which I think is an unproductive frame. (And even if you're very interested in the philosophical "meat" of longtermism, I claim all the action is in "ok but how much should this affect our actions, and which actions?", which is mostly a question about the world and our epistemics, not about ethics.)
  • "though I'd be like 50x more excited about Forethought + Redwood running a similar competition on things they think are important that are still very philosophy-ish/high level." —this is helpful to know! I would not be excited about this, so we disagree at least here :)
  • "The track record of talking about longtermism seems very strong" —yeah, agree longtermism has had motivational force for many people, and also does strengthen the case for lots of e.g. AI safety work. I don't know how much weight to put on this; it seems kinda plausible to me that talking about longtermism might've alienated a bunch of less philosophy-inclined but still hardcore, kickass people who would've done useful altruistic work on AIS, etc. (Tbc, that's not my mainline guess; I just think it's more like 10-40% likely than e.g. 1-4%.)
  • “I feel like this post is more about "is convincing people to be longtermists important" or should we just care about x-risk/AI/bio/etc.” This is fair! I think it’s ~both, and also, I wrote it poorly. (Writing from being grumpy about the essay contest was probably a poor frame.) I am also trying to make a (hotter?) claim about how useful thinking in these abstract frames is, as well as a point on (for want of a better word) PR/reputation/messaging. (And I’m more interested in the first point.)
calebp @ 2025-10-22T12:28 (+4)

Yeah, I think we have a substantive disagreement. My impression before and after reading your list above is that you think that being convinced of longtermism is not very important for doing work that is stellar according to "longtermism", and that it's relatively easy to convince people that x-risk/AIS/whatever is important.

I agree with the literal claim, but think that empirically longtermists represent the bulk of people who concern themselves with thinking clearly about how wild the future could be. I don't think all longtermists do this, but longtermism empirically seems to provide a strong motivation for trying to think about how wild the future could be at all.[1]
I also believe that thinking clearly about how wild the future could be is an important and often counterfactual trait for doing AIS work that I expect to actually be useful (though it's obviously not necessary in every case). Lots of work in the name of AIS is done by non-longtermists (which is great), but at the object level, I often feel their work could have been much more impactful if they tried to think more concretely about wild AI scenarios. I know that longtermism is not about AI, and most longtermists are not actually working on AI.

So, for me the dominant question is whether more longtermism writing increases or decreases the supply of people trying to think clearly about the future. Overall, I'm like ... weakly increases (?), and there aren't many other leveraged interventions for getting people to think about the future.

I would be much more excited about competitions like:
1. Write branches of the AI 2027 forecast from wherever you disagree (which could be at the start).
2. Argue for features of a pre-IE society that can navigate the IE well, and roadmap how we might get more of that feature or think about critical R&D challenges for navigating an IE well.

etc. 


Also, somewhat unrelated to the above, but I suspect that where "philosophy" starts for me might be lower abstraction than where it starts for you. I would include things like Paul writing about what a good successor would look like, Ryan writing about why rogue AI may not kill literally everyone, etc., as "philosophy", though I'm not arguing that either of those specific discussions is particularly important.

P.S. fwiw I don't think the writing style in this post was particularly poor, or that you came across as grumpy
 

  1. ^

    I guess there are some non-longtermist bay area people trying to do this, but I feel like most of them don't then take very thoughtful or altruistic actions.

OscarD🔸 @ 2025-10-20T18:38 (+7)

I would like to separate out two issues:

  1. Is longtermism a crux for our decisions?
  2. Should we spend a lot of time talking about longtermist philosophy?

On 1, I think it is more crux-y than you do, probably (and especially that it will be in the future). I think currently, there are some big 'market' inefficiencies where even short-termists don't care as much as idealised versions of their utility functions would. If short-termist institutions start acting more instrumentally rationally, lots of the low-hanging fruit of x-risk reduction interventions will be taken, and longtermists will need to focus specifically on the weirder things that are more specific to our views. E.g. ensuring the future is large, and that we don't spread wild animal suffering to the stars, etc. So actually maybe I agree that for now lots of longtermists should focus on x-risks while there are still lots of relatively cheap wins, but I expect this to be a pretty short-lived thing (maybe a few decades?) and that after that longtermism will have a more distinct set of recommendations.

On 2, I also don't want to spend much more time on longtermist philosophy since I am already so convinced of longtermism that I expect another critique like all the ones we have already had won't move me much. And I agree better-futures style work (especially empirically grounded work) seems more promising.

cb @ 2025-10-20T19:10 (+5)

Thanks for commenting!

> So actually maybe I agree that for now lots of longtermists should focus on x-risks while there are still lots of relatively cheap wins, but I expect this to be a pretty short-lived thing (maybe a few decades?) and that after that longtermism will have a more distinct set of recommendations.


Yeah, this seems reasonable to me. Max Nadeau also pointed out something similar to me (longtermism is clearly not a crux for supporting GCR work, but also clearly important for how e.g. OP relatively prioritises x-risk reduction work vs mere GCR reduction work). I should have been clearer that I agree "not necessary for xrisk" doesn't mean "not relevant", and I'm more intending to answer "no" to your (2) than "no" to your (1).

(We might still relatively disagree over your (1) and what your (2) should entail — for example, I'd guess I'm a bit more worried about predicting the effects of our actions than you, and more pessimistic about "general abstract thinking from a longtermist POV" than you are.)

Arepo @ 2025-10-21T02:14 (+4)

You don't need a very high credence in e.g. AI x risk for it to be the most likely reason you and your family die

 

I think this is misleading, especially if you agree with the classic notion of x-risk as excluding events from which recovery is possible. My credence distribution over event fatality rates is heavily skewed toward lower fatality rates, so I would expect far more deaths under the curve between 10% and 99% fatality than between 99% and 100%, and probably more area to the left even under a substantially more even partition of outcomes.
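
To illustrate the shape of that argument, here is a toy numerical sketch with a made-up credence distribution (the functional form is purely hypothetical, not Arepo's actual numbers): when credence concentrates at lower fatality rates, almost all expected deaths come from sub-extinction outcomes.

```python
# Toy illustration with a hypothetical credence distribution over fatality rates;
# the specific functional form is made up for the example.
import numpy as np

f = np.linspace(0.0, 1.0, 100_001)   # fatality rate of a hypothetical catastrophic event
weights = (1.0 - f) ** 3             # hypothetical credence weights, concentrated at lower rates
weights /= weights.sum()             # normalise into a probability mass function

expected_deaths = f * weights        # each rate's contribution to expected deaths (per capita)

def share(lo, hi):
    """Fraction of total expected deaths coming from fatality rates in [lo, hi]."""
    band = (f >= lo) & (f <= hi)
    return expected_deaths[band].sum() / expected_deaths.sum()

print(f"Share of expected deaths from 10-99% fatality events:  {share(0.10, 0.99):.3f}")  # ~0.92
print(f"Share of expected deaths from 99-100% fatality events: {share(0.99, 1.00):.2e}")  # ~5e-08
```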

Jack_S🔸 @ 2025-10-21T08:13 (+3)

"Longtermism isn't necessary to think that x-risk (of at least some varieties) is a top priority problem."

I don't think it's a niche viewpoint in EA to think that, mainly because of farmed and wild animal suffering, the short term future is net-negative in expectation, but the long-term future could be incredibly good. This means that some variety of longtermism is essential in order to not embrace x-risk in our lifetimes as desirable.