Are longtermist ideas getting harder to find?
By OscarD🔸 @ 2025-10-18T23:43 (+46)
Imagine you are a junior advisor to the boss of some major EA longtermism-sympathetic org (OpenPhil, CEA, 80K, etc). You are tasked with reading the Essays on Longtermism compilation, and collating any novel insights that could significantly change what we should be doing. That is, we want essays that make "big, if true" claims, and present interesting arguments for them. To be clear, this is a high bar! Novelty is hard, and action-relevance is even harder for orgs that have already thought a lot about these issues.
I claim that there is a surprising dearth of such insights in the Essays (Section 1 and Appendix). I then consider why this might be the case (Section 2), and what this could mean for global priorities research generally (Section 3).
1. Where are all the new, important insights?
Disclaimers first:
- I found several of the essays interesting, and by more normal measures of academic success, I expect many of them to rate very well.
- I read some of every essay, and all of some essays, but not the book cover to cover.
- I don't want to be mean to the authors or editors; I think doing this book could well have looked great ex ante, and even though I am a bit disappointed ex post, maybe that is just me.
I claim that the essays generally fell into one of several buckets:
- Great ideas rehashed: Some essays, e.g. The Case for Strong Longtermism, are updated versions of previously published ideas, and it makes sense to include them here, but they aren't novel.
- Different worldviews: Some essays made empirical or moral assumptions sufficiently different from mine (and from what I think of as the standard longtermist package: being somewhat consequentialist, not too risk-averse, expecting radical AI progress this century, expecting that non-biological moral patients are possible, expecting space colonisation to be key, etc) that their conclusions don't feel that relevant.
- Academic curiosities: Some authors seem to share my standard longtermist package views, but do not concern themselves much with practicalities, or they try (but in my opinion fail) to make their work more practically relevant. My bar here is permissive: even if something isn't directly actionable, it is enough if it seems important enough to warrant more dedicated research aimed at actionable insights.
The first category requires no further explanation, but the latter two are subjective judgement calls, and I would be interested to hear where readers (or the original essay authors, if they see this!) disagree. Since there are many essays, even though I only critically review each briefly, I have put this in the Appendix.
To check I am not just marking inordinately harshly, here are some insights that I think would meet my bar of being novel and action-guiding:
- The original idea of longtermism. It seems so obvious in retrospect, but the idea never occurred to me until I started reading Beckstead and MacAskill and so forth.
- The idea of existential risk, especially thinking about why advanced AI poses such a risk.
- Early work on the possibility and vast importance of digital minds.
- Patient philanthropy, the hinge of history, and the most important century.
- Acausal trade, evidential decision theory, large worlds, etc. It is unclear how action-guiding these are, but they seem like crucial ideas that I am glad some people are now thinking about.
- Cluelessness, "unawareness", bracketing, etc.
- S-risks.
- The vulnerable world hypothesis.
- Space colonisation dynamics and governance.
- Trajectory changes, eutopia, moral progress, and creating Better Futures.
- The time of perils debates.
- Maybe moral trade.
- Maybe XPT and some forecaster-y things.
- [What other ideas do you think should make this list?]
For anyone who disagrees with me, and thinks that some of the essays should be sent to Alexander Berger et al. to inform EA longtermist strategy, the rest of my post won't be very relevant. Next, I discuss why there might be a lack of actionable, novel ideas in the book.
2. Hypotheses
Here are some possible explanations:
- Books like this are not meant to come up with novel, actionable insights. Instead, they are designed to build academic credibility for a new field of longtermist inquiry, and make it more acceptable for young scholars to build mainstream academic careers focusing on longtermist questions.
- This seems like the best explanation to me.
- Academics are stodgy and slow and boring, and the interesting new ideas will be published quickly in blog/forum posts and working papers.
- This also seems like a big part of what is going on. Indeed, lots of the best essays in the volume are publishing ideas that were first developed as e.g. arXiv papers and EA Forum posts.
- New ideas are getting harder to find. Crucial considerations in particular are hard to come by, and early EA thinking (and, relatedly, FHI's work) plucked all the low-hanging fruit.
- This is the big-if-true hypothesis. Given this is only one collection of essays, which may not even be aiming at generating important, novel ideas, it provides only a fairly small piece of evidence for the hypothesis. To assess it properly we would need to analyse a larger tranche of writings over time and across venues, and check whether earlier pieces are generally regarded as bigger breakthroughs. Something like this analysis of the disruptiveness of scientific breakthroughs over time, but for longtermist philosophy (a rough sketch of such a measure is below).
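For concreteness, here is a minimal sketch of the kind of measure such an analysis could use: the CD (consolidation-disruption) index of Funk and Owen-Smith, which underlies the well-known finding that scientific papers and patents have become less disruptive over time. This is purely illustrative: the toy citation-graph format and helper names are my own assumptions, and the hard part in practice would be assembling citation data (across books, working papers, and forum posts) for longtermist writings at all.

```python
from collections import defaultdict

def cd_index(focal, citations, focal_refs):
    """CD (consolidation-disruption) index of a focal work.

    citations:  dict mapping each work id -> set of work ids it cites
    focal_refs: the set of works the focal piece itself cites (its predecessors)

    Returns a value in [-1, 1]: near +1 if later work cites the focal piece
    while ignoring its predecessors (disruptive), near -1 if later work only
    cites it alongside its predecessors (consolidating).
    """
    # Works that cite the focal piece or any of its predecessors.
    # (A real analysis would also restrict to works published after the
    # focal piece, within some fixed window.)
    successors = {w for w, cited in citations.items()
                  if w != focal and (focal in cited or cited & focal_refs)}
    if not successors:
        return None  # undefined without any forward citations
    score = 0
    for w in successors:
        f_i = 1 if focal in citations[w] else 0       # cites the focal work
        b_i = 1 if citations[w] & focal_refs else 0   # cites its predecessors
        score += -2 * f_i * b_i + f_i
    return score / len(successors)

def disruptiveness_by_year(works, citations):
    """works: iterable of (work_id, year, set_of_references) tuples.
    Returns the mean CD index of works published in each year."""
    by_year = defaultdict(list)
    for work_id, year, refs in works:
        cd = cd_index(work_id, citations, refs)
        if cd is not None:
            by_year[year].append(cd)
    return {year: sum(vals) / len(vals) for year, vals in sorted(by_year.items())}
```

A downward trend in this measure over publication year would then be one (noisy) proxy for whether the earliest longtermist pieces really were the most disruptive.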
3. Recommendations
If the first two hypotheses – that such books are not likely to produce novel, useful longtermist insights – are true, this would be a reason not to want to fund future book projects like this one, but little else would need to change. However, if the third hypothesis is significantly true, namely that longtermist ideas are getting harder to find, this may have bigger implications:
- Fund less longtermist philosophy research, since we are hitting more quickly diminishing returns.
- This seems directionally correct, but may be too hasty.
- Fund longtermist philosophy researchers who share our basic assumptions and want to produce action-guiding work, or give researchers a tightly scoped question that we already know is important and actionable.
- This seems right to me; I am not very optimistic about funding random philosophers to work on longtermism. For more practical work, hiring researchers who don't share our basic worldview is sensible, but we will need to define the scope of the research carefully.
- Don't kill the golden goose. Even if all-time-great contributions are becoming rarer, we don't know where the next one will come from, and we don't want to force people to do more practical, goal-directed research lest it stifle creativity.
- Until and unless we have stronger evidence that longtermist insights are drying up, this seems right. And even if at some future date we have not come up with a new crucial consideration for decades and fundamental longtermist philosophy research seems unproductive, the importance of uncovering new crucial considerations may be such that funding some of this work should remain part of a longtermist portfolio for a long time indeed.
- Global priorities researchers should think carefully about what projects need to be done before we have AIs that are better than humans at philosophy and economics, and what can be left until after.
- This is only tangentially related, but especially as transformative AI seems closer, prioritisation of research topics should focus more on what AIs won't be able to automate for longer, or on what we need to get done before handing over more research to AIs. For instance, infinite ethics seems like something we can mostly punt to future AI philosophers, while the ethics of creating, modifying, and deleting potential digital minds could be more urgent.
Summing up, my main conclusion is a negative one: I wouldn't recommend these essays as key reading for longtermist decision-makers. But beyond this, we have some quite weak evidence about what sort of longtermist research to do and fund going forward.[1]
Appendix: critical reviews of the essays
Against the typical advice of focusing narrowly and deeply, I will (briefly) discuss each of the essays. I tend to give longer reviews either to chapters I thought were more promising, or where I have a more specific and substantive critique.
- Introduction
- NA
- The Case for Strong Longtermism
- Bucket 1.
- Longtermism and Neutrality about More Lives
- Bucket 2: I find the assumption of neutrality quite implausible, and so don't have much to gain from this essay.
- Prudential Longtermism
- Bucket 2: I care about ethical longtermism, not prudential longtermism. So while life-extension would probably be great, it doesn't fundamentally change the longtermist picture for me.
- Would a World Without Us Be Worse? Clues from Population Axiology
- Bucket 3: I admit I somewhat bounced off this chapter – I think something at least close to totalism is likely correct (or, more modestly, that creating many happy lives seems of overwhelming importance). To be action-guiding, e.g. to reject longtermism or X-risk reduction efforts, we would need to be very confident that totalism is false, I think. So I didn't gain much from this essay, but I liked its systematicity and rigour, and I wouldn't be shocked if future philosophers look back on this as an important contribution. So it gets an honourable mention.
- Longtermism in an Infinite World
- Bucket 3: Borderline. I thought this was a good essay, but overall I think infinite ethics issues can mostly be punted to the future. Perhaps the initial work on infinite ethics was sufficiently novel to outweigh weak action-relevance, but this chapter seemed quite incremental.
- Longtermism and the Complaints of Future People
- Bucket 2: I have very little sympathy for anti-aggregationist ethics.
- Against a Moral Duty to Make the Future Go Best
- Bucket 2: Similarly, beneficence towards future people is a core part of my worldview, and I don't find objections to it compelling.
- Authenticity, Meaning, and Alienation: Reasons to Care Less about Far-Future People
- Bucket 2: I care about actually doing good, not (just) about feeling an authentic warm glow.
- What Are the Prospects of Forecasting the Far Future?
- Bucket 3: I agree with this chapter that the empirical evidence is mostly silent on our ability to make very long-run forecasts. But this is to be expected, and shouldn't prevent us from trying to act to influence the far future.
- Taking the Long View: Paleobiological Perspectives on Longtermism
- Bucket 3: Again, I agree that if humans go extinct, civilisation-building life may never re-evolve on Earth. But I donât find this paleobiological perspective particularly informative for present-day prioritisation.
- Coping with Myopia
- Bucket 2: This essay is squarely focused on the future of biological humans and nonhumans under climate change. I did not find its insights particularly relevant for space-based digital mind civilisations.
- Shaping Humanity's Longterm Trajectory
- Bucket 3: I thought this chapter had more merit, and generally I like simple mathematical models. But the connection with real-world decisions felt lacking to me: I didn't come away with much idea of what we should do differently. And maybe this wasn't the point; instead the essay just sharpens our conceptual toolkit. So this is worth an honourable mention.
- Longtermism and Cultural Evolution
- Bucket 2: I felt this essay didn't engage enough with the possibility of radical changes – such as digital minds, AI constitutions, and space colonisation – for insights from "normal" cultural evolution to be that relevant on my basic longtermist picture.
- The Hinge of History and the Choice between Patient and Urgent Longtermism
- Bucket 3: I thought this essay came closest to meeting my bar for importance and novelty; definitely an honourable mention. If the hinge of history is not now, plausibly we should be investing a far larger fraction of our resources (patient philanthropy). I agree with Häggström that we should not put great weight on anthropic arguments, given the nascent state of the field. I also agree that the rapid economic and technological growth of the current era may be enough to outweigh even a very strong prior against any particular agent finding themselves at the HoH. So while a useful contribution, I think the essay has neither the originality of e.g. Karnofsky and MacAskill's past work, nor the practicality and methodological rigour of Trammell's treatment of patient philanthropy.
- How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role
- Bucket 1.
- Longtermist Myopia
- Bucket 3: Some interesting ideas, but it doesn't really challenge or refine core longtermist views. I agree that in practice, actors with zero pure time discount rates should sometimes act according to a positive discount rate, but this didn't feel either novel or particularly action-guiding.
- Minimal and Expansive Longtermism
- Bucket 3: Interesting paper, honourable mention. Taking the dimensions on which we may be more or less expansive in order:
- I am expansive when it comes to the scope of interventions longtermists should consider, rather than just focusing on X-risks. I think Better Futures makes a strong case for this, and this section was good but not especially novel.
- It is true that almost all decisions (the authors mention e.g. "what to have for breakfast") don't matter non-trivially for the long term, and so on this dimension I think I am a minimalist. But I don't think this matters much? The important non-personal decisions,[2] where we should consider the longterm future, are what matters. Perhaps I misunderstood the authors' motivation here.
- Similarly, whether we ought, ideally, to spend 2% or 50% of world GDP on longtermist causes seems of little practical importance to me. In the unlikely event that we are ever spending 2% of GDP on longtermist interventions, we can reassess then.
- So I broadly agree with the essay, but don't think it meets the novelty-actionability bar.
- What Would a Longtermist Society Look Like?
- Bucket 3: I think there is not much to be learned from such extremisation analyses, where we imagine what would happen if everyone were a longtermist and use this to learn something about what to do when only a tiny fraction of people are.
- Is Extinction Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models
- Bucket 2: For my liking, this did not engage sufficiently with ideas of digital minds, space colonisation, and cultural or biological evolution towards high-fertility memes. I place very little confidence in such standard population models when considering the very far future, and therefore expect that extinction actually is special in terms of having a lasting impact.
- Depopulation and Longtermism
- Bucket 2: I agree with the authors about roughly everything, and think that if advanced AI were not coming this century, depopulation would be among the most important global problems. And if (which seems very unlikely) AI progress stalls for decades without a broader societal collapse, we should reconsider, and perhaps prioritise boosting fertility rates as a cause area. But for now, this seems quite far down the priority list for longtermists, and there will be time later to try to fix things if needed (of course, it would be better to start now, but we have limited attention and resources).
- Existential Risk from Power-Seeking AI
- Bucket 1.
- Deceit and Power: Machine Learning and Misalignment
- Bucket 1.
- The Ethics, Economics, and Demographics of Delaying Aging
- Bucket 2: On "medium-termist" grounds, and ignoring AI, anti-ageing may indeed be a great thing to work on. But if we are longtermists and expect that the average future person will be living in a technologically mature civilisation, accelerating the time at which ageing is solved on our tech tree is good, but very marginal in the scheme of things. And if you take AI advancement seriously, this is a classic case of something that should be punted for aligned AIs to solve.
- Longtermism and Animals
- Bucket 2: To some extent, I agree with this essay that animals should feature more prominently in longtermist thinking. But I think the authors are too quick to reject the argument from digital minds. In particular, they write:
It is also possible that the future will not be dominated by either humans or non-human animals but digital beings – sentient AIs. In the end, there is a lot of uncertainty here and unless we are quite sure of these alternative outcomes, we still have reason to believe that there will be very high numbers of animals in the future.
But this seems to get things backwards. In worlds where digital minds are common, there will be vastly more moral patients than in worlds with only biological humans and animals. So we would need to be exceedingly confident that digital minds will not be common to think that biological creatures will account for more than a tiny fraction of future moral patients. So while it is true that "there will be very high numbers of animals in the future" (in expectation), the important quantity is the proportion of future minds who will be biological animals, and this seems to be very small in expectation (the toy calculation below illustrates this). That said, these ideas probably warrant some more consideration, so this essay also gets an honourable mention.
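To make this concrete, here is a toy calculation. The credence and population sizes are entirely made-up illustrative assumptions, not estimates; the point is just that even a modest credence in futures where digital minds are common makes biological animals a vanishing share of expected future minds.

```python
# Two stylised futures, with purely illustrative numbers.
p_digital = 0.3              # assumed credence that digital minds become common
animals_either_way = 1e15    # biological animals (in expectation) in both futures
digital_if_common = 1e24     # digital minds if they become common (arbitrary)

expected_animals = animals_either_way
expected_minds = animals_either_way + p_digital * digital_if_common

animal_share_of_expected_minds = expected_animals / expected_minds
print(f"{animal_share_of_expected_minds:.1e}")  # ~3.3e-09

# So "very high numbers of animals in the future" is true in expectation,
# while animals remain a negligible fraction of expected future minds.
```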
- Longtermist Political Philosophy: An Agenda for Future Research
- Bucket 2: While it is interesting to think about other philosophical traditions, I am mainly interested in fairly consequentialist views where the most important goal is to maximise future welfare. This makes these other views, which prioritise e.g. justice as an intrinsic goal, less useful to me. Additionally, normative questions for individuals deciding where to work and donate are the most action-relevant, given we are in no position to structure society and politics along longtermist principles.
- Retrospective Accountability: A Mechanism for Representing Future Generations
- Bucket 3: I liked this essay, and thought the infinite chain of futures assemblies, each evaluating their predecessors, was a clever idea that I would like to see more work on making happen. So definitely an honourable mention. But I had two main concerns. Firstly, practically, I am worried that there will be intergenerational solidarity between futures assembly members, and an implicit norm will develop that you should assign rather high rewards to your predecessors, relying on your successors to do likewise. Game-theoretically, this seems like the optimal strategy (using evidential decision theory, where the deliberations of the present assembly provide evidence about what future assemblies will think). Secondly, the chances of something like this actually being set up in the next few decades seem remote, so I am sceptical that much progress could be made or that this would be cost-effective to advocate for. But I would love to be proven wrong.
- Longtermism and Social Risk-Taking
- Bucket 3: I don't have a fancy critique; the setup just didn't seem that relevant to real-world decisions to me. Moreover, I think in most decision situations normal (not risk-adjusted) expected value reasoning is sufficient.[3]
- The Short-Termism of 'Hard' Economics
- Bucket 3: I liked this chapter; honourable mention. And I expect reforming the academic econ profession in the direction they propose would be net beneficial. But I have little reason to believe, and they made no argument to support, that EA time or money should be spent trying to make this happen. In particular, the shorter your AI timelines are, the less cost-effective it is to try to reform economics, as institutional change is generally slow.
- The Intuitive Appeal of Legal Protection for Future Generations
- Bucket 3: An interesting enough paper, but not that relevant to most longtermist EAs. Counterfactually, even if all the law professors and everyday people surveyed had said they found longtermism unintuitive and uncompelling, I don't think that would update me much at all.
- Temporal Distance Reduces Ingroup Favoritism
- Bucket 3: Fine, but pretty unremarkable incremental results. The authors investigated (among other things) whether longer time horizons make people more impartial about where they would like to donate. Possibly more interesting, I think, would be investigating the opposite direction: whether considering overseas charities gives people longer time horizons. Still, fundamentally, I don't care that much about the moral intuitions of MTurkers.
- ^
Thanks to Catherine Brewer for helpful comments on my draft.
- ^
"Non-personal" to exclude e.g. who to marry, which I think we aren't required to analyse through a longtermist lens.
- ^
While reading this chapter, I had an interesting idea: since the value of information is so high for a civilisation that expects to last a very long time and is deciding which hard-to-reverse policy to pursue, running detailed civilisation simulations might be attractive, to learn which policy works better in expectation.
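To gesture at why the value of information could be so high here, a toy value-of-information calculation (with arbitrary made-up payoffs and probabilities of my own) is below: learning which state of the world we are in before committing to a hard-to-reverse policy raises expected value, and for a civilisation expecting to persist a very long time those per-unit gains get multiplied enormously.

```python
# Two hard-to-reverse policies, two equally likely states of the world.
# Payoffs are arbitrary illustrative units of long-run value.
p_state_a = 0.5
value = {"policy_1": {"A": 100, "B": 0},
         "policy_2": {"A": 40,  "B": 60}}

def expected_value(policy):
    return p_state_a * value[policy]["A"] + (1 - p_state_a) * value[policy]["B"]

# Without further information: commit to the policy that is best in expectation.
ev_without_info = max(expected_value(p) for p in value)  # 50

# With perfect information (e.g. a detailed simulation revealing the state):
# pick the best policy in each state, then take the expectation.
ev_with_info = (p_state_a * max(v["A"] for v in value.values())
                + (1 - p_state_a) * max(v["B"] for v in value.values()))  # 80

value_of_information = ev_with_info - ev_without_info  # 30
print(value_of_information)
```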
cb @ 2025-10-19T15:05 (+10)
Glad you shared this!
Expanding a bit on a comment I left on the google doc version of this: I broadly agree with your conclusion (longtermist ideas are harder to find now than in ~2017), but I don't think this essay collection was a significant update towards that conclusion. As you mention as a hypothesis, my guess is that these essay collections mostly exist to legitimise discussing longtermism as part of serious academic research, rather than to disseminate important, plausible, and novel arguments. Coming up with an important, plausible, and novel argument which also meets the standards of academic publishing seems much harder than just making some publishable argument, so I didn't really change my views on whether longtermist ideas are getting harder to find because of this collection's relative lack of them. (With all the caveats you mentioned above, plus: I enjoyed many of the reprints, and think lots of incrementalist research can be very valuable – it's just not the topic you're discussing.)
I'm not sure how much we disagree, but I wanted to comment anyway, in case other people disagree with me and change my mind!
Relatedly, I think what I'll call the "fundamental ideas" – of longtermism, AI existential risk, etc – are mildly overrated relative to further arguments about the state of the world right now, which make these action-guiding. For example, I think longtermism is a useful label to attach to a moral view, but you need further claims about reasons not to worry about cluelessness in at least some cases, and also potentially some claims about hinginess, for it to be very action-relevant. A second example: the "second species" worry about AIXR is very obvious, and only relevant given that we're in a world where we're plausibly close to developing TAI soon and, imo, because current AI development is weird and poorly understood; evidence from the real world is a potential defeater for this analogy.
I think you're using "longtermist ideas" to also point at this category of work (fleshing out/adding the additional necessary arguments to big abstract ideas), but I do think there's a common interpretation where "we need more longtermist ideas" translates to "we need more philosophy types to sit around and think at very high levels of abstraction". Relative to this, I'm more into work that gets into the weeds a bit more.
OscarD🔸 @ 2025-10-20T15:50 (+2)
Good point, yes I think empirical findings that have a large bearing on what longtermists should be doing would also count for me, and yes perhaps it is still easier to come up with important new considerations in empirical work.
David_Moss @ 2025-10-19T10:33 (+10)
An alternative hypothesis is that less time is being devoted to these kinds of questions (see here and here).
This potentially has somewhat complex effects, i.e. it's not just that you get fewer novel insights with 100 hours spent thinking than 200 hours spent thinking, but that you get more novel insights from 100 hours spent thinking when doing so against a backdrop of lots of other people thinking and generating ideas in an active intellectual culture.
To be clear, I don't think this totally explains the observation. I also think that it's true, to some extent, that the lowest hanging fruit has been picked, and that this kind of volume probably isn't optimising for weird new ideas.
Perhaps related to the second point, I also think it may be the case that relatively more recent work in this area has been 'paradigmatic' rather than 'pre-paradigmatic' or 'crisis stage', which likely generates fewer exciting new insights.
Jakob Lohmar @ 2025-10-20T10:09 (+3)
That's an interesting take! I have lots of thoughts on this (maybe I will add other comments later), but here is the most general one: one thing is to create new ideas, another is to assess their plausibility. You seem to focus a lot on the former -- most of your examples of valuable insights are new ideas rather than objections or critical appraisals. But testing and critically discussing ideas is valuable too. Without such work, there would be an overabundance of ideas with no separation between the good and bad ones. I think the value of many essays in this volume stems from doing this kind of work. They address an already existing promising idea - longtermism - and assess its plausibility and importance.
OscarD🔸 @ 2025-10-20T15:47 (+3)
That's a good point - responding to existing ideas does seem less exciting and original, but I agree it is still valuable, and perhaps under-rewarded given it is less exciting.
Jakob Lohmar @ 2025-10-20T16:03 (+1)
and perhaps under-rewarded given it is less exciting.
...especially so in academia! I'd say that in philosophy mediocre new ideas are more publishable than good objections.