Zeren’s quick takes
By Zeren @ 2023-05-19T12:53 (+1)
Zeren @ 2025-07-27T23:58 (+1)
This is the core part of my WIP critique of the working paper "The Case for Strong Longtermism" by Greaves and MacAskill.
The moral importance of future generations should not be dismissed, and in their paper The Case for Strong Longtermism, Greaves and MacAskill rightly highlight the neglectedness of long-term goals. Nevertheless, I will try to show that the current case for Axiological Strong Longtermism lacks a sufficiently stable and reliable foundation to support the sweeping conclusions the authors draw. Importantly, there may be considerable overlap between near-term and long-term goals, such as strengthening institutions and democratic structures, and this overlap deserves greater attention. Striking a more deliberate balance between long-term and near-term aims may provide a more grounded path forward.
At first glance, objections from tractability over long time horizons appear answerable by the authors' claims about persistent states; in the following part of this post, however, I suggest that such states look more speculative and less durable on closer scrutiny. I then try to show that the models used for expected value calculations can become unstable when agents who are part of the modeled world act in awareness of the model.
In their definition of persistent states, the authors state that the expected time for which the world remains in such states is extremely long. Below, I challenge the plausibility of the purported durations of these states.
I claim that humanity, living under oppressive conditions and in a persistently stagnant state without any meaningful progress for a prolonged period, will either 1) eventually collapse and go extinct within a span shorter than anything we could call extremely long, or 2) adaptively recalibrate its perceptions of well-being, so that its state can no longer be considered low-utility. Empirical evidence (e.g., Diener et al., 1999; Easterlin, 2001; Kahneman & Deaton, 2010) shows that humans adapt to their circumstances, meaning future generations may not perceive their conditions to be as dreadful as contemporary observers do.
In the second part of my critical assessment, I step back to highlight a further issue with the models used for expected value calculations in the context of Axiological Strong Longtermism (ASL). Greaves and MacAskill suggest that claiming counterproductivity is as likely as success would reflect implausible pessimism. However, this dismissal seems too quick. Given the reflexive instability of predictive models in real-world settings, especially when agents respond to the model itself, the risk of counterproductive outcomes appears plausible. Even a simple causal model may become unstable under reflexive conditions, when agents capable of influencing the modeled world become aware of the model and adjust their behavior in response. This issue is well documented in economics, particularly in the Lucas critique, which shows that causal models may become unreliable once policy changes alter agent behavior (Lucas, 1976). Models used for expected value calculations are not exempt from this feedback loop, which undermines their predictive reliability.
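The reflexivity worry can be made concrete with a toy simulation (my own illustration, not from the paper): a naive forecaster predicts an outcome from its historical frequency, but agents see each forecast and act against it, so the forecast is systematically undone. The scenario, variable names, and response rule below are all hypothetical, chosen only to exhibit the feedback loop.

```python
# Toy "self-defeating forecast" (hypothetical illustration).
# A naive model forecasts congestion as the running mean of past outcomes.
# Agents observe the forecast and avoid the route whenever congestion is
# predicted, flipping the outcome against the forecast. As a result the
# model's error never shrinks, even though the world is fully deterministic.

def simulate(steps=50):
    history = [1.0]   # past congestion outcomes (1.0 = congested)
    errors = []
    for _ in range(steps):
        forecast = sum(history) / len(history)   # naive running-mean model
        # Reflexive response: if congestion is forecast, agents stay away.
        outcome = 0.0 if forecast >= 0.5 else 1.0
        errors.append(abs(forecast - outcome))
        history.append(outcome)
    return errors

errors = simulate()
# The forecast hovers near 0.5 and the error near 0.5: awareness of the
# model keeps undoing its predictions.
print(sum(errors[-10:]) / 10)
```

The same deterministic world would be perfectly predictable to a model the agents could not see; it is the agents' awareness of the model, not noise, that destroys its reliability.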
A concrete example that illustrates this modeling issue arises from the exploitability of Axiological Strong Longtermism as developed in the CSL paper. The authors argue for focusing on improving the far future without prioritizing any of the specific causes mentioned in the paper. The vagueness inherent in many longtermist intervention areas, combined with the acceptance of imprecise probabilities, creates significant room for exploitation. History offers numerous examples of authorities misusing appeals to the greater good as tools for manipulation and control. This raises a concern about the potential misuse of such reasoning.
Another concrete scenario, which may unfold either independently or, more likely, in connection with the first, concerns the consequences of abandoning or severely deprioritizing near-future goals. If present-day human cooperation across cultures and borders loses its ethical foundation, individuals may increasingly shift their concern toward smaller groups with closer ties. Instead of fostering solidarity with the broader human community, this could accelerate social fragmentation and reduce global coordination, further undermining longtermist aims themselves.