A Critical Analysis of "The Case for Strong Longtermism"

By Zeren @ 2025-07-26T11:50 (+2)

There is a shorter version, posted as a quick take, if you would like to go straight to the core part of my critique.

Abstract

This essay evaluates the argument for Axiological Strong Longtermism (ASL) developed in the working paper “The Case for Strong Longtermism” by Greaves and MacAskill. While the authors argue that we can significantly improve the far future in expectation, I question the strength of this claim. I examine concerns about tractability, drawing on Judea Pearl's theory of causal graphs to highlight the limits of long-range interventions. I then argue that the ASL argument only holds under the assumption that non-extinction persistent states exist, a plausibility I critically assess. Finally, I raise concerns about the reliability of expected value models in reflexive systems, where agents respond to the models themselves, and offer plausible scenarios to illustrate this issue. I conclude that while the long-term future is morally important, the current case for ASL does not rest on grounds stable enough to justify the sweeping conclusions drawn by its authors.
 

1. Introduction

The Case for Strong Longtermism (CSL), a working paper by Hilary Greaves and William MacAskill (Greaves & MacAskill, 2021), presents a bold argument in moral philosophy, saying that far-future effects are the most important determinant of the value of our options. The authors defend a position they call Axiological Strong Longtermism (ASL), which relies heavily on expected value reasoning to argue that longtermist interventions should dominate our moral priorities. 

In this essay, I focus on their axiological claim and leave the deontic claim aside, since the axiological claim is the fundamental one on which the deontic claim is built. I first summarize their ASL thesis, then critically assess the assumptions and reasoning behind it.

While I acknowledge the appeal of considering far-future consequences, I argue that key epistemic weaknesses, particularly regarding the reliability of the models used in expected value calculations, undermine the strength of their argument. My evaluation begins by stating the premises of the ASL argument, one of which I object to. I start by adding a further tractability objection, then I challenge the plausibility of certain types of persistent states. Finally, I draw attention to reliability issues in the models of the world used as inputs to expected value calculations, focusing specifically on the context of the ASL thesis. I do this by highlighting the social implications of deprioritizing near-term improvement goals.

For clarity, to address the ASL thesis on its own grounds, I will not engage in a critique of utilitarianism or expected utility theory. Instead, I will focus on the arguments presented in The Case for Strong Longtermism (CSL).

At times I will refer to expected value, as the CSL paper does, as ex ante value. Unless clarified otherwise, I use it in the sense defined in the paper: the probability-weighted average of the possible values an option might result in.
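For concreteness, the definition just quoted can be written as a simple probability-weighted sum; the notation below is my own minimal formalization of the paper's wording, not notation taken from CSL.

```latex
% Expected (ex ante) value of an option a, following the paper's definition:
% the probability-weighted average of the values a might result in.
\mathrm{EV}(a) \;=\; \sum_{i} p_i \, v_i
% p_i : the agent's credence (subjective probability) that a leads to outcome i
% v_i : the value of outcome i
```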

 

2. Summary 

In The Case for Strong Longtermism, Hilary Greaves and William MacAskill argue that in the most significant decision situations (mainly, how to spend money as a society and as an individual, and how to choose a career as an individual), every option that is near-best overall is near-best for the far future, and every option that is near-best overall delivers much larger benefits in the far future than in the near future. They call this thesis Axiological Strong Longtermism (ASL). Here, time separability is assumed, and the following definitions are provided for each part: “the far future” means everything from some time t onwards, where t is a surprisingly long time from the point of decision; “the near future” means the time from the point of decision until time t.

They claim ASL holds not only under expected total utilitarianism but also under a wide class of outcome-based axiologies. The intermediate claims they lay out are meant to support the thesis at least under expected total utilitarianism, and most are meant also to support it under different variations of utilitarian axiologies.

Since it is not required for their central claims, the authors do not go into specifics about how the world should be modeled as input to the expected value formula, and they do not provide details about the formula itself, aside from stating that they use the standard Bayesian approach, in which expected value is determined by subjective probability, also referred to as credence.

Analogous to the axiological version, they argue for Deontic Strong Longtermism (DSL), which states that in these most significant decision situations, what we ought to do is seize the opportunities to affect the far future.

We can summarize the core premises leading to ASL as follows:

(a)  The projected count of future lives in our civilization is vast — a number many times greater than the sum of near-future lives.

(b)  We can significantly improve the far future, in expectation.
 

Vast size of future populations

The authors argue that humanity's far future could contain an extremely large number of lives. If we do not go extinct soon, we might survive for billions of years, either on Earth or on other planets, including those in other star systems. Future technology could let us support much larger populations, and if digital minds become possible, the number of conscious beings could increase even more. Even if we think these futures are unlikely, the extremely large number of possible lives makes the expected count of future lives very high.
 

Persistent states

They acknowledge the natural doubts about our ability to affect the far future and directly address these worries. Central to the authors' claim that we can affect the far future is the idea of persistent states: a subset of all the fine-grained states the world can be in at a given moment, which remain unchanged for an extremely long time once instantiated.

Extinction is a perfect example of a persistent state. If humanity goes extinct, we will not come back. Two widely recognized and more easily quantified threats of extinction come from asteroids and pandemics. It is estimated that spending $1.2 billion to find all the remaining asteroids larger than 10 kilometers could reduce the chance of human extinction in the next 100 years by about one in 300 billion (Newberry, 2021). Global pandemics are another possibility that may lead to extinction; prior to the global COVID-19 pandemic, the risk was estimated to lie in the range from 1 in 600,000 to 1 in 50 within the following 100 years (Millett and Snyder-Beattie, 2017). There are ongoing efforts to mitigate these risks; however, the authors claim that, considering the stakes of extinction, there is still significant opportunity to do more.
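To make the expected value arithmetic behind such figures explicit, the following back-of-envelope sketch uses the asteroid numbers quoted above; the figure for expected future lives is an illustrative assumption of my own, not a number from the paper.

```python
# Back-of-envelope expected-value sketch using the asteroid figures cited in
# CSL (Newberry, 2021). The future-lives figure is an illustrative assumption.

spend_usd = 1.2e9            # cost of finding the remaining asteroids > 10 km
risk_reduction = 1 / 300e9   # absolute reduction in 100-year extinction risk
assumed_future_lives = 1e18  # illustrative guess, not a figure from the paper

expected_lives_saved = risk_reduction * assumed_future_lives
lives_per_dollar = expected_lives_saved / spend_usd

print(f"expected future lives saved: {expected_lives_saved:,.0f}")
print(f"expected lives saved per dollar: {lives_per_dollar:.4f}")
```

Even a minuscule reduction in extinction risk, multiplied by a sufficiently vast assumed future, yields a large expected benefit per dollar. This multiplication is the structure of reasoning the rest of this essay examines.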

Another example is what artificial superintelligence (ASI) may bring about. Once our civilization has developed ASI, it is plausible that this could lead to the emergence of persistent structures of power. On one hand, this may result in undesirable forms of hostile AI takeover or world domination by one group; on the other, with informed decisions prior to and during the development of ASI, it may have positive outcomes that would persist.

Greaves and MacAskill admit that research in this area lacks the quantitative evidence needed to guide cost-effectiveness estimates; instead, they reference a survey of leading AI researchers that puts the probability of existential catastrophe from ASI at 1-10%.

While arguing for our ability to affect the far future by providing examples of plausible persistent states the world might end up locked into, they suppose that the world is not yet in any such state and might settle into one in the foreseeable future.

In addition to direct actions like reducing extinction and ASI-related risks, the authors also present what they call meta-options: strategies that aim to increase our chances of positively influencing the far future. One such strategy is investing in research to better understand which interventions may be more cost-effective for improving the far future. Even if we are currently unsure which actions are best, they suppose it is highly likely that further investigation will help us identify high-impact options. Funding this kind of research today could lead to much better decision-making in the future. Another meta-option is to save resources now and use them later, once better opportunities arise.
 

Epistemic worries

The authors then discuss worries which are more directly epistemic (under the section Cluelessness, likely a reference to Greaves' 2016 paper of the same title).

Firstly, under the heading of simple cluelessness, they dismiss the worry that counterproductivity is as likely as success, saying it would be overly pessimistic to hold this view.

The second worry they address is what they call conscious unawareness: knowing that one is unaware of many relevant considerations while making a decision. They list three things that might be left out of the model: 1) fine-grainings of the states considered, 2) possible states of nature, and 3) options better than those included. For the first two, they argue that the lack of such information can be accounted for within the Bayesian framework they use; even if the presence of such information might change the expected values, the ex ante rationality of the framework itself is not undermined. The last is dismissed because their argument requires only a lower bound on attainable far-future expected benefits.

In addition, the authors claim that even with the use of imprecise and subjective probabilities, the expected impact of interventions such as those to prevent AI risk will come out on top in comparison with interventions aimed at the near future. They claim that no plausible degree of imprecision or arbitrariness can undermine strong longtermism, considering the very large scale of the benefits at stake.

In response to ambiguity aversion, Greaves and MacAskill argue that it is more appropriate for an altruistic agent making the decision to be ambiguity averse with respect to the state of the world, rather than with respect to the difference one personally makes to that state, and that we cannot clearly say that actions seeking to improve the far future increase ambiguity with respect to the state of the world.

Greaves and MacAskill argue that lacking precise probabilities does not justify ignoring the far future. Even imprecise credences can guide action when the potential value is vast. They suggest that dismissing longtermism due to ambiguity would itself be an arbitrary and overly risk-averse stance.

Addressing the fanaticism objection, which claims that we are trusting tiny probabilities of huge outcomes, they respond that the longtermist actions they discuss often involve non-tiny probabilities, such as 1% or more, so these are not fanatical in the usual sense. They also warn that if we stop using expected value because of fanaticism, we might end up being too cautious even when huge stakes are involved. In the end, they think expected value is still the best method we have. However, the authors agree it is a serious concern and point toward non-fanatical decision theories as a direction for further research.
 

Robustness

The authors argue that the case for axiological strong longtermism does not depend only on total utilitarianism but also holds under various other moral views.

  1. If you are risk-averse, you might care more about reducing existential risk because preventing large harms becomes even more important.
  2. If you hold moral views that prioritize the worst-off (prioritarianism), then preventing future collapse or suffering (e.g. dystopian futures) may carry greater moral weight.
  3. Even if you argue for person-affecting approaches to population ethics, which tend to resist a totalist view and thus leave you neutral towards the persistent state of extinction, affecting how good or bad future lives will be still matters when you consider the risks from ASI.

Overall, they argue that as long as the moral theory allows future people to matter and lets us compare large-scale outcomes, ASL remains valid. 

 

3. Evaluation  

As mentioned in the Introduction, this essay focuses on Axiological Strong Longtermism (ASL). 

Here again, the two claims I have identified as the core premises of ASL:

(a)  The projected count of future lives in our civilization is vast. 

(b)  We can significantly improve the far future, in expectation.
 

I have no disagreement to put forward with the first premise (a). It appears plausible that the probability of a vast number of lives in the future is significant, following the paper's arguments, which I have summarized above.

In what follows, I examine the second premise (b) and argue that it may not be as well-supported as the authors suggest. The defense of the second premise (b) is built on three fronts: tractability, cluelessness, and fanaticism, which the authors identify as the directions from which potential counterarguments to their claims can come.

In the first part of my critical assessment, I group the worries addressed in the paper about the models used for expected value calculations into two categories:

(I) Worries due to the nature of long causal chains:

These would be the doubts about tractability.

(II) Worries from epistemic gap:

These would be imprecision and arbitrariness of probabilities, conscious unawareness and simple cluelessness.

In addition to the tractability criticism discussed in CSL, I would like to refer to Judea Pearl's theory of causal graphs. Pearl's reasoning about interventions requires well-specified causal models, often in the form of causal graphs. Even if we assume there are no confounding variables, long causal chains still create serious problems for expected value reasoning. When an action affects the far future through many steps, the influence becomes weaker and harder to track. At each step, random factors or unknown influences can change what happens next, and the effect of the original action can get diluted. These variables are not confounders, since they do not influence both the beginning and end points directly, but they are often unobserved or not well understood. This means that even if we define a probability like P(outcome | do(action)), it may not be stable or reliable unless we know the full structure of how the action leads to the outcome. In real-world longtermist cases, we usually do not have this kind of detailed knowledge.
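To illustrate the dilution worry, here is a small toy simulation of my own construction (not a model from CSL or from Pearl's work): an intervention sets the first node of a causal chain, and each subsequent node keeps only part of the previous node's value while mixing in independent, unobserved noise. The retention rate and chain lengths are arbitrary illustrative assumptions.

```python
import random

def propagate(do_action: bool, chain_length: int, retention: float = 0.9) -> float:
    """Propagate an intervention's effect through a noisy causal chain.

    Each step keeps `retention` of the previous node's value and mixes in
    independent, unobserved noise (a toy assumption, purely illustrative).
    """
    x = 1.0 if do_action else 0.0  # the intervention fixes the first node
    for _ in range(chain_length):
        x = retention * x + (1.0 - retention) * random.gauss(0.0, 1.0)
    return x

def mean_outcome(do_action: bool, chain_length: int, trials: int = 20_000) -> float:
    return sum(propagate(do_action, chain_length) for _ in range(trials)) / trials

if __name__ == "__main__":
    for n in (1, 10, 50, 200):
        effect = mean_outcome(True, n) - mean_outcome(False, n)
        print(f"chain length {n:>3}: estimated effect of do(action) = {effect:+.4f}")
```

The estimated effect of the intervention shrinks geometrically with the length of the chain while the unmodeled noise does not, so the longer the chain, the less P(outcome | do(action)) is driven by the action itself.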

This dilution also shows that tractability worries are not independent of those listed under the epistemic gap.

However, the authors' defensive argument for tractability, based on persistent states, still holds. Interventions, for example those mitigating artificial superintelligence (ASI) risks, need not aim at impacts in the far future; in practice, they are analogous to near-future interventions. Once we are able to influence which persistent state the world settles into in the near future, this effect will persist for a very long time.

At this stage, I want to point out that the robustness argument holds only for non-extinction persistent states, for which only one concrete example, positively shaping the development of artificial superintelligence (ASI), is given. Non-totalist views in population ethics, such as negative utilitarianism, would argue against increasing expected value by simply adding more future people.

This leaves the core argument of ASL, with its claim of robustness, resting on rather narrow support: the persistent states that may be instantiated by ASI, and the possibility of identifying other non-extinction persistent states. Recall also that the probability of existential catastrophe from ASI is based on surveys of AI experts.

Nevertheless, if we take an approach similar to the authors', we can say that the vast size of the future would still result in very large ex ante benefits; therefore, ASL's core argument still holds for such persistent states as they claim exist in the real world.

In their definition of persistent states, the authors state that the expected time for which the world remains in such states is extremely long. Having established the existence of non-extinction persistent states as crucial to the robust ASL argument, I now aim to challenge the likelihood of the purported durations of these states.

I claim that a humanity living under dire conditions, in a persistently stagnant state without any meaningful progress for a prolonged period, will either 1) eventually collapse and go extinct within a duration shorter than what we can call extremely long, or 2) through adaptability recalibrate its perceptions of well-being so that its state can no longer be considered one of low utility. Empirical evidence (e.g., Diener et al., 1999; Easterlin, 2001; Kahneman & Deaton, 2010) demonstrates that humans adapt to their circumstances, meaning future generations may not perceive their conditions to be as dreadful as contemporary observers do.

In the second part of my critical assessment, I step back to highlight a further issue with the models used for expected value calculations in the context of Axiological Strong Longtermism (ASL). Greaves and MacAskill suggest that claiming counterproductivity is as likely as success would reflect implausible pessimism. However, this dismissal seems too quick. Given the reflexive instability of predictive models in real-world settings, especially when agents respond to the model itself, the risk of counterproductive outcomes appears plausible. Even a simple causal model may become unstable under reflexive conditions, when agents capable of influencing the modeled world become aware of the model and adjust their behavior in response. This issue is well documented in economics, particularly in the Lucas critique, which shows that causal models may become unreliable once policy changes alter agent behavior (Lucas, 1976). Models used for expected value calculations are not exempt from this feedback loop, which undermines their predictive reliability.
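As a toy sketch of this instability, in the spirit of the Lucas critique rather than of any model discussed in CSL: a planner fits a simple relationship in a world with no policy in place, predicts the effect of a new policy from that fit, and agents who are aware of the policy then adjust their behavior. All functions and numbers below are illustrative assumptions of my own.

```python
# Toy Lucas-critique-style sketch: a model fitted before a policy is announced
# stops predicting, and can even invert, once agents respond to the policy.
# Every function and constant here is an illustrative assumption.

def outcome(effort: float, pressure: float) -> float:
    # The "true" world: agents' effort drives the outcome, while the policy
    # pressure itself carries a hidden cost that partly offsets it.
    return effort - 0.8 * pressure

baseline_effort = 1.0
observed = outcome(baseline_effort, pressure=0.0)  # what the planner sees pre-policy

def naive_prediction(pressure: float) -> float:
    # The planner's model: outcome equals the pre-policy observation plus the
    # applied pressure, ignoring any behavioral response to the policy.
    return observed + pressure

def responsive_effort(pressure: float) -> float:
    # Agents aware of the policy reduce their own effort in response.
    return baseline_effort - 0.5 * pressure

for p in (0.0, 0.5, 1.0):
    realized = outcome(responsive_effort(p), p)
    print(f"pressure {p:.1f}: predicted {naive_prediction(p):+.2f}, realized {realized:+.2f}")
```

Once the agents' response is taken into account, the intervention the fitted model scored as clearly positive can come out neutral or even negative, which is precisely the counterproductivity possibility that the simple cluelessness discussion sets aside.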

A concrete example that illustrates this modeling issue arises from the exploitability of Axiological Strong Longtermism as developed in the CSL paper. The authors argue for focusing on improving the far future without prioritizing any of the specific causes mentioned in the paper. The vagueness inherent in many longtermist intervention areas, combined with the acceptance of imprecise probabilities, creates significant room for exploitation. History offers numerous examples of authority being misused through appeals to the greater good as a tool for manipulation and control. This raises a concern about the potential misuse of such reasoning.

Another concrete scenario, which may unfold either independently or, more likely, in connection with the first, concerns the consequences of abandoning or severely deprioritizing near future goals. If present-day human cooperation across cultures and borders loses its ethical foundation, individuals may increasingly shift their concern toward smaller groups with closer ties. Instead of fostering solidarity with the broader human community, this could accelerate social fragmentation and reduce global coordination, further undermining the longtermist aims.
 

4. Conclusion

In this essay, I have discussed the argument for Axiological Strong Longtermism (ASL) as developed in the CSL paper. While the authors make a compelling case that the far future holds enormous potential value, I have argued that the second core premise, stating that we can significantly improve the far future in expectation, relies on fragile foundations. Objections based on tractability over long periods of time at first appear answerable through Greaves and MacAskill's claims about persistent states; however, I have suggested that such states appear more speculative and less durable upon closer scrutiny. Finally, I have tried to show that models used for expected value calculations can become unstable as agents that are part of the modeled world act in awareness of the model.

The moral importance of future generations should not be dismissed, and in the CSL paper, Greaves and MacAskill rightly highlight the neglectedness of long-term goals. However, the current case for Axiological Strong Longtermism lacks a sufficiently stable and reliable foundation to support the sweeping conclusions the authors draw. Importantly, there may be considerable overlap between near-term and long-term goals, such as strengthening institutions and democratic structures, which deserves greater attention. Striking a more deliberate balance between long-term and near-term aims may provide a more grounded path forward. 

 

References

Diener, E., Suh, E. M., Lucas, R. E., & Smith, H. L. (1999). Subjective well-being: Three decades of progress. Psychological Bulletin, 125(2), 276–302.

Easterlin, R. A. (2001). Income and happiness: Towards a unified theory. The Economic Journal, 111(473), 465–484.

Greaves, H. (2016). Cluelessness. Proceedings of the Aristotelian Society, 116(3), 311–339.

Greaves, H., & MacAskill, W. (2021). The case for strong longtermism. GPI Working Paper No. 5-2021, Global Priorities Institute, University of Oxford.

Kahneman, D., & Deaton, A. (2010). High income improves evaluation of life but not emotional well-being. Proceedings of the National Academy of Sciences, 107(38), 16489–16493.

Lucas, R. E. (1976). Econometric policy evaluation: A critique. Carnegie-Rochester Conference Series on Public Policy, 1, 19–46.

Millett, P., & Snyder-Beattie, A. (2017). Existential risk and cost-effective biosecurity. Health Security, 15(4), 373–383.

Newberry, T. (2021). How cost-effective are efforts to detect near-Earth-objects? Global Priorities Institute Technical Report, University of Oxford.

Zeren @ 2025-08-21T05:42 (+1)

Since I wrote this essay, I have come to realize that my critique based on the reflexive instability of predictive models may in fact be a general criticism directed at utilitarianism, and therefore violates my promise to argue against the CSL paper's claims on its own terms, accepting utilitarianism. In addition, my criticism of the plausibility of the extreme durations of non-extinction persistent states may not matter much either: even less plausible scenarios may be significant due to the high utility (in expectation) that is at stake.

Currently, it appears difficult to criticize CSL’s claims without directing one’s criticism at utilitarianism itself.