Overview of Longtermism (Its ideas, writings, individuals, institutions, and history)

By James Brobin @ 2026-04-07T17:56 (+7)

In this document, I compile a wide range of information about longtermism, including its ideas, writings, individuals, institutions, and history.

In the appendix, I also include summaries of some major works.

This document is about longtermism, defined as the view that we have a moral obligation to work to shape the far future in a positive way.

I do not include much information related to the intersection of longtermism and AI. I also have not included any information related to suffering risks, an adjacent area that addresses similar questions. Finally, I have not read everything there is to read about longtermism, so this document is necessarily incomplete.

Institutions

Longtermism has (or had) three major institutions: the Future of Humanity Institute (FHI), the Global Priorities Institute (GPI), and Forethought.

FHI and GPI were both institutions at the University of Oxford. FHI was founded by Nick Bostrom and did extensive work defining and then developing the field of existential risk. GPI was founded by effective altruists to lend more credibility to, and to produce more high-quality, global priorities research. GPI led the way on work that is particularly longtermist in flavor.

The Forethought Foundation for Global Priorities Research was founded by effective altruists with the mission of doing global priorities research with a focus on longtermism. It temporarily shut down between 2024 and 2025. When it reopened, it renamed itself Forethought and changed its mission to focus on humanity’s transition to a post-AGI world. Many of its staff previously worked at either GPI or FHI, and it is the only remaining institution that releases work related to longtermism. It is unclear whether the Forethought Foundation for Global Priorities Research and Forethought are technically the same organization or whether the latter is just a rough continuation of the former.

Individuals

Longtermism has roughly seven major figures who have made multiple significant contributions to the field. These individuals include:

I’m not entirely sure where each of these individuals worked. If someone is not included in this list, that is not meant to imply that they do not belong on it.

Writings

In this section, I include major works, important works, books, resources, and research agendas related to longtermism. Which writings are most important is quite subjective, so I don’t mean to imply that a work is unimportant if it is not included.

Major Works

“Astronomical Waste” by Nick Bostrom (2003)

“Existential Risk Prevention as Global Priority” by Nick Bostrom (2013)

“On The Overwhelming Importance of Shaping The Far Future” by Nick Beckstead (2013)

The Precipice by Toby Ord (2020)

“The Most Important Century” by Holden Karnofsky (2021)

What We Owe The Future by William MacAskill (2022)

“The Case For Strong Longtermism” by Hilary Greaves and William MacAskill (2019, 2021, 2025)

“Better Futures” by William MacAskill (2025)

Other Important Works

“How many lives does the future hold?” by Toby Newberry (2021)

“Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox” by Stuart Armstrong and Anders Sandberg (2013)

“AGI and Lock-In” by Lukas Finnveden, Jess Riedel, and Carl Shulman (2022, 2025)

“Viatopia” by William MacAskill (2026)

Books

The Precipice by Toby Ord (2020)

The Long View: Essays on Policy, Philanthropy, and the Long-term Future by Natalie Cargill and Tyler M. John (eds.) (2021)

What We Owe The Future by William MacAskill (2022)

Essays on Longtermism: Present Action for the Distant Future by Hilary Greaves, Jacob Barrett, and David Thorstad (eds.) (2025)

Overviews (taken from William MacAskill’s website):

The website Longtermism

“Longtermism” by Wikipedia

“Longtermism” by Forethought

“Longtermism: a call to protect future generations” by 80,000 Hours

Research agendas:

History of Longtermism

This section is a brief history of longtermism based on its academic writings. As such, it is necessarily going to miss some relevant details. A much better history could be given by someone who has been working in the field for a long time, but I have not seen any such histories written.

Longtermism, as far as I can tell, can be traced back to the development of the field of nuclear ethics after the invention of the nuclear bomb in the 1940s. Writers in nuclear ethics argued that one serious danger of nuclear weapons was that they could pose an existential risk to humanity. Then, in 1996, John Leslie wrote the book The End of the World, in which he expanded thinking on existential risk beyond nuclear weapons alone. This book went on to inspire Nick Bostrom’s concern about existential risk and to form the basis of the field as we know it today.

In my view, longtermism has had roughly three waves, which partially overlap.

The first wave was defined by early work on existential risk and on establishing it as a field. This work was done mostly by Nick Bostrom and his organization FHI, and its time period was roughly 2003 to 2013. Two representative works are Bostrom’s “Astronomical Waste” and “Existential Risk Prevention as Global Priority.”

The second wave was defined by more developed thinking around existential risk as well as a greater focus on the ethics of caring about future generations. This work was done by both FHI and GPI, and its time period was from roughly 2013 to 2024. Two representative works are “On The Overwhelming Importance of Shaping The Far Future” by Nick Beckstead (2013) and The Precipice by Toby Ord (2020).

The third (and most recent) wave is defined by an increased focus on how to positively shape the far future beyond existential risk reduction, with a particularly strong focus on AGI. This work can be thought of as starting in 2022 with the release of Forethought’s “AGI and Lock-In.” It was done mostly by Forethought, although other organizations also contributed. Two representative works are What We Owe The Future by William MacAskill (2022) and “Better Futures” by William MacAskill (2025).

Core Argument

The core argument for longtermism is as follows:

  1. Future people matter just as much as we do.
  2. Humanity’s future could be vast in duration.
  3. We can help these people.

This philosophy usually assumes a Bayesian view of the world and makes these arguments on the basis of expected value.
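To make the expected-value framing concrete, here is a toy calculation. All figures in it (the 10,000 beneficiaries, the one-in-a-million probability, the 10^16 future lives) are illustrative assumptions of mine, not estimates from any of the works above:

```python
# Toy expected-value comparison. All numbers are illustrative assumptions,
# not estimates taken from the longtermist literature.

def expected_value(probability: float, value: float) -> float:
    """Expected value of an uncertain outcome: probability times payoff."""
    return probability * value

# Assumption: a near-term intervention helps 10,000 people with certainty.
near_term = expected_value(1.0, 10_000)

# Assumption: a long-term intervention has a one-in-a-million chance of
# averting an extinction event that would forfeit 10^16 future lives.
long_term = expected_value(1e-6, 1e16)

print(near_term)  # 10000.0
print(long_term)  # 10000000000.0

# Because the potential future population is so vast, even a tiny
# probability of affecting it can dominate the expected-value comparison.
assert long_term > near_term
```

This same arithmetic is what drives the “small probabilities of extremely large outcomes” objection: the conclusion is sensitive to how seriously one takes expected value at very low probabilities.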

Related Concepts

The following is a list of concepts related to longtermism. It is not meant to be complete and is mostly based on ideas from What We Owe The Future:

Cause Areas

Longtermists have proposed many cause areas for positively influencing the far future. The following list includes many of these ideas, drawn primarily from William MacAskill and Toby Ord:

  1. Ensuring humanity’s survival
    1. Reducing extinction risks
      1. Biorisks
        1. Engineered pandemics
        2. “Natural” pandemics
        3. Mirror bacteria
      2. Great power war
        1. World peace
        2. Nuclear weapons
      3. AI
        1. AI safety
        2. Preventing gradual disempowerment
      4. Climate change
      5. Environmental damage
      6. Dystopic scenarios
    2. Reducing the risk of irreversible collapse
      1. Climate change
      2. Fossil fuel depletion
    3. Reducing the risk of stagnation
      1. Increasing population growth
      2. Promoting technological progress
  2. Trajectory changes
    1. Keeping our options open
      1. “Preventing post-AGI autocracy”
      2. “Space governance”
      3. Explicitly temporary commitments
      4. Working towards viatopia/“the long reflection” (Not mentioned in linked article)
    2. Steering our trajectory
      1. AI governance
      2. “AI value-alignment”
      3. Rights of digital beings
      4. “Space governance”
      5. “Collective decision-making”
      6. “Preventing sub-extinction catastrophes”
      7. Spreading positive values (Not mentioned in linked article)
      8. Reducing suffering risks (Not mentioned in linked article)
    3. Other/Both
      1. “Deliberative AI”
      2. “Empower responsible actors”
  3. Other
    1. Further research into longtermism
    2. Movement building for longtermism

For more examples of cause areas, see the EA Forum Wiki. Also, check out the post I wrote categorizing them in a different way.

Common Counterarguments

Some common counterarguments include:

  1. We have less of a moral obligation to future humans than humans who are alive today.
  2. We cannot meaningfully predict how the future will go.
  3. Longtermism relies on small probabilities of extremely large outcomes.
  4. We should expect our actions’ effects to “wash out” over sufficiently long time periods.

Appendix

Summaries of Some Major Works

Major Works:

“Astronomical Waste” by Nick Bostrom (2003)

What We Owe The Future by William MacAskill (2022)

“The Case For Strong Longtermism” by Hilary Greaves and William MacAskill (2019, 2021, 2025)

“Better Futures” by William MacAskill (2025)

Other Important Works:

“How many lives does the future hold?” by Toby Newberry (2021)

“AGI and Lock-In” by Lukas Finnveden, Jess Riedel, and Carl Shulman (2022, 2025)