Incommensurability and Intransitivity in Longtermism: A Pluralist Reframe (with a note on why art matters)
By Ben Yeoh @ 2025-10-03T11:39 (+5)
Summary: ideas on intransitivity, pluralism, and incommensurable values as they relate to longtermism; also art and institutions.
This essay argues that two under-engaged strands of moral philosophy—Larry Temkin’s work on intransitivity, essentially comparative value, and pluralism, and Ruth Chang’s account of values being “on a par”—call into question some of the core assumptions of standard longtermism. Much longtermist reasoning relies on reducing all possible futures to a single, transitive welfare scale and then maximizing expected value across astronomical stakes. Temkin’s spectrum cases and rejection of moral monism destabilize the assumption that such a scale exists or can be coherently applied. Chang’s claim that many options are neither better, worse, nor equal but genuinely incommensurable likewise undermines the idea that long-term choices can always be represented in a single metric. Taken together, these perspectives highlight weaknesses in purely consequentialist and aggregative longtermism, and point toward pluralist, non-aggregationist, and non-consequentialist approaches.
I apply the Chang and Temkin challenges to specific essays in Essays on Longtermism. I provide a probabilistic map of positions in the value theory debate (including a 20% chance that Temkin’s strong intransitivity is correct). Finally, I extend the discussion to institutional design (Tyler John’s “Futures Assembly” proposal) and to the constitutive role of art, suggesting how longtermism might take cultural value more seriously.
Introduction: hoping to emphasise some under-represented points.
This essay develops several under-represented philosophical considerations for longtermism by drawing on Larry Temkin and Ruth Chang. The ideas are well known in moral philosophy, but I argue that taking them more seriously than is customary in longtermist debates changes how we should evaluate several arguments in Essays on Longtermism.
Although Temkin’s Rethinking the Good (2012) is cited in the volume, his spectrum cases, his account of essentially comparative value, and his pluralist critique of moral monism are not, in my reading, worked through in depth; they tend to appear in bibliographies or footnotes. Engaging them head-on matters because in precisely the population-ethics contexts longtermism cares about, transitivity and single-scale aggregation are least secure. Ruth Chang’s view that some options are neither better, worse, nor equal but on a par—genuinely incommensurable—adds a complementary challenge: attempts to govern long-horizon choices with a single commensurable maximand may be incomplete even when transitivity is preserved.
If these perspectives are granted even limited scope, then longtermism benefits from a more pluralist decision posture: side-constraints, satisficing floors for near-term duties, and bounded diversification toward tail-risk reduction, rather than EV-dominance across the board. Several chapters already lean this way (e.g., Curran; Unruh; Riedener), as does the deontological discussion in Greaves & MacAskill.
What follows: (i) brief statements of Temkin’s and Chang’s key claims as they bear on longtermism; (ii) a probabilistic map of live positions in value theory; (iii) targeted applications to specific essays in Essays on Longtermism; and (iv) two constructive extensions—one on institutional design (Tyler John’s “Futures Assembly”), and one on the constitutive role of art, suggesting how cultural value should figure in a plural-objective longtermism.
Notes. Any interpretive errors are mine; readers should consult the originals. I discuss related themes with both authors on my podcast Ben Yeoh Chats, and I used ChatGPT to help organize and check structure; remaining mistakes are my own and quite possible.
Ruth Chang: Incommensurability and “On a Par”
Ruth Chang’s work offers a distinctive addition to longtermist reasoning. At its core is the claim that many values are incommensurable: they cannot be fully captured on a single quantitative scale. In her well-known account, some options are neither better, worse, nor equal, but stand “on a par.” This introduces a fourth relation into value comparisons, beyond the traditional trichotomy.
Importantly, “on a par” does not mean indecision or vagueness. Rather, it means that both options can be rationally choiceworthy, but in different evaluative dimensions. Recognizing this expands our conceptual vocabulary: not all comparisons must collapse into “better” or “worse.”
Chang further argues that when choices are “on a par,” they invite normative agency. Instead of being dictated by a single maximand, agents (individuals or societies) can commit to particular values—justice, fairness, survival, dignity—and these commitments can rationally determine action. This framework acknowledges the plurality of moral values without reducing them to one scale.
Implications for Longtermism
- Challenge to Single-Value Maximization:
Some strands of longtermism assume that all futures can be ranked along one commensurable scale—expected total well-being, or something close to it. Chang’s view undermines this assumption: two futures may be “on a par” rather than reducible to one metric.
- Fanaticism and Tiny Probabilities:
Some longtermist reasoning relies on multiplying very small probabilities of astronomical payoffs. Chang’s framework allows us to resist the claim that such futures must swamp near-term goods. If the goods are “on a par,” we can treat them as fundamentally different kinds of value, not as quantities on one scale.
- Pluralism and Responsibility:
Parity supports a pluralist approach to longtermism. Our responsibilities to future generations may not be fully captured by maximizing welfare. They may also involve stewardship, justice, fairness, or other irreducible duties.
Note that parity and intransitivity are structurally similar challenges: both break the clean comparability assumed in expansive longtermism.
Relevance to this set of essays on longtermism
Chang’s work is not cited in this volume. Several essays acknowledge issues of incomparability or aggregation, but none develop the idea of parity as a fourth relation. For example, Greaves & Tarsney’s reliance on expected-value dominance, Steele’s assumptions about comparability in neutrality arguments, and Ord’s trajectory-shaping case all presuppose that futures can be placed on a single metric. Parity offers a principled reason to resist that move without collapsing into indecision. Likewise, Riedener’s appeal to authenticity and Curran’s anti-aggregationist critique gain additional structure when viewed through the lens of parity.
In other words, Chang offers a richer vocabulary for exactly the kinds of dilemmas longtermism faces, and this would be a fruitful area for further research. See the later section for details on specific essays.
Example: Global Poverty vs. Existential Risk
Here is a thought experiment which might help show what this idea adds. Consider two interventions:
- Eliminate Global Poverty (Option A): Tangible, certain benefits within a generation—hundreds of millions lifted out of extreme poverty.
- Reduce Existential Risk (Option B): A low-probability intervention (say, 0.1% chance) of preventing human extinction, with a potential payoff of trillions of future lives.
Some longtermist reasoning might argue B dominates: expected-value arithmetic multiplies the tiny probability by the vast payoff, outweighing the certain gains of A.
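The expected-value arithmetic behind that claim can be made explicit. In this sketch the payoff figure is my own illustrative assumption (the essay only says “trillions of future lives”), chosen so the multiplication comes out the way the argument needs:

```python
# Hedged sketch of the EV comparison in the thought experiment.
# All numbers are illustrative assumptions, not real estimates.
lives_a = 500_000_000     # Option A: hundreds of millions lifted out of poverty
p_success_b = 0.001       # Option B: 0.1% chance of preventing extinction
future_lives_b = 100e12   # assumed payoff if B succeeds: ~100 trillion lives

ev_a = lives_a                      # near-certain benefit, so EV ~ the benefit
ev_b = p_success_b * future_lives_b # 1e11 "expected" future lives

print(ev_b > ev_a)  # True: on a single scale, B swamps A
```

Chang’s point, developed below, is that this comparison is only compelling if A and B really do sit on one scale; if they are on a par, the arithmetic does not settle the choice.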
Bringing Chang’s ideas into the debate, A and B may be “on a par.”
- Option A embodies justice, dignity, and immediate relief of suffering.
- Option B embodies survival and the safeguarding of humanity’s future.
- These are different evaluative dimensions, not reducible to a single metric.
If so, then it is rational to choose either, depending on which values we commit to. Preferring A does not make one short-sighted, nor does preferring B make one fanatical. What matters is the articulation of values that guide the choice.
Contra Chang
Of course, Chang’s view is not uncontroversial. The idea that some options are “on a par” rather than better, worse, or equal is novel and not universally accepted. Critics argue that it risks introducing instability into moral reasoning, or that what appears to be parity may instead reflect vagueness, incomplete information, or rough comparability within a traditional framework (see Handfield 2015).
Nonetheless, even if Chang is mistaken in her precise formulation, taking her idea seriously broadens the scope of longtermist debate. It highlights that value comparisons may be more complex than standard models assume, and incorporating such possibilities deepens both the breadth and the philosophical depth of longtermist thinking.
Conclusion: “on a par” offers food for thought
Chang’s framework destabilizes a core assumption of standard longtermism: that all possible futures can be reduced to one commensurable scale and then maximized. By introducing the idea of parity, she shows that long-term choices may involve irreducible values, and that rational agency can mean committing to plural goods rather than collapsing them into arithmetic. For longtermism, this points toward a richer framework: one where pluralist, non-consequentialist commitments stand alongside, and sometimes outweigh, expected-value calculations.
Larry Temkin: Intransitivity, Comparative Value, and Pluralism
Larry Temkin’s work poses a series of interconnected challenges to the neat, maximizing frameworks often used in longtermist reasoning.
Across several decades of work—most prominently in Rethinking the Good (2012)—he develops arguments about intransitivity, essentially comparative value, rough comparability, and pluralism, all of which bear directly on the assumptions that longtermism typically makes. His conclusions are unsettling: in large-scale distributive and population contexts, we may not be able to order outcomes on a single, transitive scale, and our reasons for action may resist collapse into a single metric.
1. Intransitivity of “Better Than”
Definition. A relation is transitive if: whenever A is better than B, and B is better than C, then A must be better than C. Transitivity is the linchpin of standard maximizing and expected-value reasoning: it allows us to rank outcomes consistently.
Temkin’s claim. In realistic population and distributive cases, “better than (all-things-considered)” need not be transitive. Through spectrum arguments—developed from Parfit’s Mere Addition Paradox—he shows that tiny quality losses per person, spread across enormous numbers of people, can generate cycles: A ≻ B, B ≻ C, but C ≻ A.
Worked example (very simplified).
| Option | Population | Quality of Life per Person | Intuitive Judgment |
|--------|------------|----------------------------|--------------------|
| A | 1 billion | Very high | Clearly good |
| B | 2 billion | Slightly lower | Seems better overall |
| C | 3 billion | Lower still | Seems better than B, but worse than A |
A ≺ B (two billion good lives beat one billion excellent lives).
B ≺ C (three billion lives at nearly the same level beat two billion).
Yet C ≺ A (quality in C is low enough that, all things considered, A seems better than C).
This cycle violates transitivity.
Implication. If transitivity fails in precisely the contexts longtermism cares about—massive numbers of future lives—then the project of ranking all possible futures on a single welfare scale may collapse. (Intransitivity and the Mere Addition Paradox, 1987; A Small Taste of Rethinking the Good, 2014.)
(This is very simplified. In the podcast he talks through an example, and his book gives a detailed academic treatment. See the appendix for a longer explanation of the example; as this post is already long, I've moved it to the end.)
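To make the cycle concrete, here is a toy model. The comparison rule is my own illustrative device, not Temkin’s formalism: in pairwise comparisons, a small per-person quality difference lets total welfare decide, while a large cumulative quality gap lets quality decide outright.

```python
# Toy model of Temkin-style pairwise judgments (an illustrative rule,
# not Temkin's own formalism). Populations in billions; quality per person.
options = {
    "A": (1, 100),
    "B": (2, 90),
    "C": (3, 75),
}

def prefer(x, y, quality_gap_threshold=20):
    """Return the preferred option under an essentially-comparative rule."""
    (pop_x, q_x), (pop_y, q_y) = options[x], options[y]
    if abs(q_x - q_y) < quality_gap_threshold:
        # Small per-person difference: more good lives win (total welfare).
        return x if pop_x * q_x > pop_y * q_y else y
    # Large quality gap: the higher-quality outcome wins outright.
    return x if q_x > q_y else y

print(prefer("A", "B"))  # B: 2bn at 90 beats 1bn at 100 on totals
print(prefer("B", "C"))  # C: 3bn at 75 beats 2bn at 90 on totals
print(prefer("A", "C"))  # A: the 25-point quality gap now dominates
```

The point is structural: because the rule is essentially comparative (which consideration governs depends on the pair being compared), no transitive ordering of A, B, and C exists.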
2. Essentially Comparative Value
Temkin also argues that some values are essentially comparative: the value of an outcome can depend on what it is compared with, rather than being fixed absolutely. This undermines the attempt to assign stable, context-independent numbers to outcomes for expected-utility calculations. If true, then moving from a local choice set (today’s policies) to a cosmic choice set (all possible futures) may actually change the ordering of options. For longtermism, this means that “just maximize one number over all futures” may be a category mistake.
3. Rough Comparability and Incompleteness
Many options are only roughly better or outright incomparable. Forcing a total, transitive order creates paradoxes; relaxing transitivity need not make agents irrational or “money-pumpable,” provided we accept roughness and adopt reasonable decision procedures. For longtermism, this matters because the relevant options—biosecurity regimes, AI policy choices, near-term aid, existential risk reduction—may not all be linearly rankable. Acknowledging incompleteness pushes towards robustness and plural decision methods, not strict EV maximization.
4. Population Ethics Roots
Temkin’s earliest papers showed that intransitivity arises precisely in the territory longtermism occupies: huge populations, small per-person changes, and trade-offs between headcount and average quality. This means that claims like “more (good) lives is always better” are philosophically fragile. Longtermist arguments that lean on astronomical numbers of future people inherit these paradoxes.
5. Pluralism vs. Monism
In Rethinking the Good, Temkin resists collapsing all moral reasons into a single welfare measure. Considerations of fairness, priority to the worst-off, desert, and relational duties resist clean aggregation. For longtermism, this undermines the view that there is one scalar maximand—expected total well-being—that normatively rules. Instead, we may need a mixed-objective approach, balancing several irreducible values.
6. Empirical Caution
In Being Good in a World of Need (2019), Temkin emphasizes how unintended consequences, market distortions, and governance failures can flip the sign of interventions, even in near-term aid. If we struggle to know the net effects of present interventions, our epistemic position in far-future contexts is even shakier. For longtermism, this suggests prioritizing interventions with clearer feedback loops, reversibility, and institutional safeguards—rather than placing all bets on speculative astronomical payoffs.
7. Example: Tiny Probabilities and Astronomical Payoffs
Scenario. Imagine we must choose between:
- Option A: A program that certainly saves 10 million lives over the next decade.
- Option B: A research effort with a 0.0001% chance of preventing an existential catastrophe, potentially saving trillions of future lives.
Standard longtermist reasoning: EV arithmetic implies Option B dominates (trillions × 0.0001% > 10 million).
Temkin’s critique:
- Intransitivity: The “better than” relation across such options may cycle or fail altogether.
- Essentially comparative value: The value of saving 10 million lives may shift depending on the set of comparisons; adding speculative cosmic outcomes may distort rather than clarify the ordering.
- Pluralism: Justice, certainty, and present obligations may legitimately constrain chasing Option B, even if its EV is higher.
- Empirical caution: The likelihood of unintended consequences or misallocation in Option B is high, given weak feedback.
Implication: It is not irrational to choose Option A over B; doing so may reflect the plural values and epistemic humility that Temkin emphasizes.
Note: Temkin’s intransitivity arguments and Chang’s parity thesis are complementary: both challenge the idea that a single, transitive metric can capture long-run choices.
Conclusion of the Temkin Section: challenges to certain longtermist reasoning
Temkin’s work challenges the deepest assumptions of longtermist reasoning. If “better than” is intransitive in population ethics, if values are essentially comparative, if many options are incomparable, and if pluralism rather than monism is true, then the longtermist project of ranking all possible futures on a single welfare scale becomes untenable. This does not mean abandoning longtermism; it means reframing it.
Temkin points toward a more cautious, pluralist, and humility-forward longtermism—one that values robustness, recognizes side-constraints, and resists the lure of tiny probabilities multiplied by astronomical stakes.
Notes on how this relates, and a somewhat original synthesis.
Note 1. Where are the potential weak points in longtermist arguments if Temkin and Chang are correct? And where are the essays already compatible with Temkin and Chang's thinking?
The longtermism in much of these essays leans on a small set of structural commitments: that there’s a single commensurable “value” scale to maximize; that “better than” is transitive and complete across options; that we can aggregate tiny benefits/risks across vast numbers; that value is time-separable (so we can integrate it over long trajectories); and (sometimes) that there is a duty to (near-)maximize far-future value. Chang’s parity thesis and Temkin’s intransitivity/essentially-comparative value/pluralism jointly strain each of these. Below I flag the most relevant chapters.
1) One commensurable scale + EV comparisons as the default
Which essays rely on this:
- Greaves & Tarsney, “Minimal and Expansive Longtermism.” They explicitly characterize the expansive view as one where “possible effects on the far future are the main determinant of expected value comparisons in nearly all decision situations” and treat improving “the expected value of the far future” as the unifying aim across interventions.
- Their set-up also distinguishes minimal from expansive longtermism in terms of how many interventions raise the expected value of the far future and how pervasively EV dominates choice.
How Chang and Temkin challenge:
- Chang (parity/incommensurability): If many long-run options (e.g., present duties to the vulnerable vs. speculative x-risk bets) are neither better, worse, nor equal but on a par, then there is no globally faithful single cardinal scale on which EV comparisons can always decide. “Maximize one number” over all futures misdescribes what rational choice demands in parity cases (it calls for plural commitments, not scalar optimization).
- Temkin (essentially comparative value, rough comparability): If the value of an outcome depends on the comparison set, or options are only roughly comparable, then the EV ranking can shift with context, undermining the idea that there is one context-independent welfare number to maximize.
Practical upshot. Claims that expansive longtermism makes EV decisive “in nearly all decision situations” overstate what rational choice can deliver if parity and comparative dependence are common.
2) Transitivity of “better than” in population comparisons
Which essays rely on this:
- Steele, “Longtermism and Neutrality about More Lives.” Steele treats transitivity as a constraint on the “better-than” relation when exploring neutrality and its “greedy” implications at large scale. The discussion explicitly notes that relaxing transitivity/context-sensitizing choice-worthiness is possible but proceeds under transitivity to assess neutrality’s implications for longtermism.
How Temkin challenges:
- Temkin (intransitivity via spectra): Exactly in the domain longtermism cares about—huge populations with tiny quality shifts per person—Temkin’s spectrum arguments yield A ≻ B, B ≻ C, but C ≻ A. If transitivity fails precisely where neutrality/“more lives” arguments operate, the clean longtermist move from neutrality to “more future people → higher ranking” is fragile. Steele’s results depend on holding transitivity fixed; Temkin says that’s the bit in doubt.
Practical upshot. Attempts to infer strong longtermist conclusions from neutrality while keeping transitivity inherit Temkin’s challenge: the relevant comparisons may not permit a transitive total order at all.
3) Aggregation across persons (summing tiny benefits/risks to astronomical headcounts)
Which essays lean on this (and where there is already pushback, from Curran):
- Greaves & Tarsney (minimal vs. expansive) frame debates in terms of raising expected far-future value across many decision contexts—i.e., aggregating small changes affecting many.
- Curran, “Longtermism and the Complaints of Future People,” develops an explicitly anti-aggregationist objection to deontic longtermism. Curran’s line: you cannot simply sum distinct persons’ “complaints” to justify otherwise impermissible trade-offs.
How Chang and Temkin challenge:
- Temkin: Aggregation over vast numbers coupled with transitivity is exactly what generates the paradoxes/intransitivities; this makes fanaticism (tiny p × huge N) decision-dominant only if we ignore the structural costs he highlights.
- Chang: If near-certain present duties and speculative tail bets are on a par, the size of the payoff doesn’t automatically swamp the competing reason; parity blocks the simple “multiply and sum” mentality.
Practical upshot. The book’s strongest EV-style arguments lose force once either anti-aggregationist deontic constraints or parity/intransitivity are admitted.
4) Time-separability and zero pure time preference
Essays which assume this:
- Greaves & Tarsney analyze how widely far-future EV should dominate; their “expected value comparisons” frame presumes a separable metric over time that can be improved “in expectation.”
- Ord, “Shaping Humanity’s Longterm Trajectory,” presents the canonical trajectory-shaping rationale (existential risk alters the long-run “integral” of value; our actions can durably change the curve). Even in the opening, longtermism is presented as taking the vast future seriously and focusing on actions that lastingly alter the far future—implicitly relying on value that can be accumulated and compared across time.
How Chang and Temkin challenge:
- Chang: If values in play (justice to current victims, stewardship, integrity) are incommensurable or on a par with aggregate future welfare, separability (and thus smooth temporal aggregation) fails: you can’t just integrate everything into one number without losing normatively relevant structure.
- Temkin (essentially comparative): If the choice set changes the ordering, then moving from a local policy set to the cosmic set of futures can reorder options, undermining the idea that there’s one fixed, context-independent quantity to integrate over time.
Practical upshot. The familiar picture—“reduce x-risk to raise the time-integral of value”—needs plural-objective supplementation to be decision-worthy under parity and comparative dependence.
5) A deontic demand to (near-)maximize the far future
Where the book explores it—and where it is resisted from within.
- The volume canvasses axiological strong longtermism and its deontic counterpart.
- Unruh, “Against a Moral Duty to Make the Future Go Best,” rejects the duty to (near-)maximize, arguing beneficence is an imperfect duty; special obligations and non-maximizing reasons can rightfully constrain far-future maximization.
How Chang and Temkin add:
- Chang: The imperfect-duty picture fits parity: multiple non-dominant, incommensurable values can rationally guide choice without a master maximand.
- Temkin: Pluralism about value (fairness, priority, relational duties) is a positive thesis: moral reasons do not collapse into one additive metric. So even if the far future carries great value, side-constraints and plural goods can bound maximization.
Practical upshot. Unruh’s internal critique gets theoretical backing from Chang/Temkin: there’s no general rational requirement to chase the highest EV-to-the-cosmos option if duties/values are plural and sometimes incomparable.
6) Fanaticism and the “astronomical stakes” lever
Where the book engages this:
- Riedener, “Authenticity, Meaning, and Alienation,” lays out the standard astronomical-numbers longtermist case (enormous headcounts; tiny probabilities still yield huge EV) as the foil for his argument that such motivation risks inauthenticity/alienation.
- Askell & Neth, “Longtermist Myopia,” diagnose tensions between future fanaticism and near-term moral reasons, aiming to show practical decision-making will often be more near-term than critics fear.
How Chang and Temkin challenge:
- Temkin: Fanaticism is an artifact of assuming a single transitive EV scale that tolerates multiplying tiny probabilities by astronomical payoffs. Under intransitivity/rough comparability, those “dominance” claims lose their mandatory force.
- Chang: If present-directed duties and far-future windfalls are on a par, lexical EV dominance is not rationally compulsory. Riedener’s authenticity/meaning considerations are a live, non-aggregative reason—exactly the kind parity legitimizes.
Practical upshot. Some essays (Riedener; Askell–Neth) align with a Temkin-Chang-friendly recalibration: avoid tail-dominated gambles, accept plural reasons for near-term focus, and treat astronomical-EV arguments as advisory, not decisive. This is an area of fruitful extra research.
If Chang’s parity and Temkin’s intransitivity/essentially comparative value are even partly right, then several linchpins of the OUP case—single-scale EV dominance, transitive population rankings, large-N aggregation, and a duty to maximize the far future—do not hold in general. The chapters that already relax those commitments from within (Curran, Unruh, Riedener) may be worth exploring further.
Note 2. What are the chances Temkin is correct? And what are implications?
- If Temkin is correct about intransitivity, then some of the core assumptions behind longtermist reasoning are destabilised. Much of longtermist argument presupposes a stable ordering of outcomes across very large scales. If “better than” can generate cycles or fail to be transitive, then the neat comparisons longtermism relies on become much less secure.
- Temkin is taken very seriously as a philosopher of note. Not all philosophers agree with his position, but I would judge perhaps ~20% of other serious philosophers to be broadly in his camp of accepting some genuine intransitivity, perhaps more depending on the field (especially in population ethics).
- On the opposite side are the “transitivity traditionalists” (e.g. Shelly Kagan, Yew-Kwang Ng, Peter Vallentyne, James Griffin). This group takes transitivity to be non-negotiable and typically treats Temkin’s spectrum arguments as misapplied intuitions or cases of vagueness. I would judge them as perhaps 20–30% of the field.
- The middle ground: “Revisionists about Comparability.”
- They revise the structure of “better than” so it is not just a simple strict ordering.
- Examples: John Broome (rough equality), Ruth Chang (“on a par”), Derek Parfit (indeterminacy), Hilary Greaves (indefiniteness).
- This camp also includes views I’d call non-standard comparability, where options remain comparable, but not always in a clean, transitive way — expanding the logical space beyond “better/worse/equal.”
- I would judge perhaps ~50% of philosophers fall into this camp.
Note: These percentages are heuristic estimates, intended to make the philosophical landscape legible, not results of formal surveys. (I am not an academic philosopher!)
Conclusion: This mapping suggests there is a significant live possibility that Temkin’s view is correct and that certain longtermist ideas may rest on shaky comparative foundations. At the very least, the middle-ground revisionist camp (e.g. Chang, Broome, Parfit, Greaves) must be taken seriously — their work indicates that even if transitivity survives, longtermism cannot simply assume standard comparability without argument.
Note 3. How much of this essay is original.
Reflective Note on Novelty of Synthesis
In reviewing these essays - tentatively - I believe the following elements of my synthesis may be somewhat novel, useful, and unusual. While the original ideas (Temkin, Chang etc.) have been stated plenty elsewhere, I have not seen them drawn together like this. These notes are tentative.
- Explicit tripartite map of positions.
I separate the field into three main camps — Temkin-style intransitivists (~20%), revisionists about comparability (~50%), and transitivity traditionalists (~30%). While philosophers often describe these positions qualitatively, setting them out as a structured map is clearer and more systematic than most expositions.
- Weighting and probabilities.
Assigning approximate percentages to each camp treats the field probabilistically, borrowing from decision-analysis style reasoning. Philosophers rarely quantify like this, but such estimates make the landscape more legible and allow us to weigh the plausibility of different risks to longtermist reasoning.
- Integrating Ruth Chang’s “on a par” into population ethics and longtermism.
Chang’s work is highly influential in the philosophy of value but is not usually brought into direct conversation with longtermist debates or with these Essays on Longtermism. Connecting “on a par” to the issues raised by Temkin is, I believe, a somewhat original step worth stressing.
- Direct application to longtermism’s vulnerability.
Much longtermist reasoning assumes stable, transitive comparability of outcomes. By highlighting that if Temkin (or even Chang’s comparability revisionism) is correct, certain longtermist ideas may rest on shaky comparative foundations, I make a link that is not often foregrounded in the existing literature. I also provide a tentative non-zero estimate of >20% that Temkin is correct that weakens much longtermist reasoning.
Taken together, these elements suggest that while the raw arguments themselves are not new, the way they are combined here — with probabilistic weighting, clear mapping, integration of Chang, and explicit application to longtermism — is a somewhat original contribution that could sharpen how these debates are framed.
A note on Tyler John’s essay and art.
Perhaps this deserves a separate blog, but for comprehensiveness I am leaving my comment and observation here.
John tackles what he sees as a core problem.
Political short-termism: Modern democracies overweight near-term interests and systematically underrepresent future generations.
The root cause: lack of accountability mechanisms tying current policymakers to the welfare of future people.
John’s proposal: Retrospective Accountability.
Since we cannot give votes to the unborn, John suggests incentives that tie today’s decisions to future judgments. His idea is a Futures Assembly:
- Selection: Citizens chosen by lottery (sortition).
- Mandate: Debate and recommend policies with explicit focus on the long term.
- Retrospective evaluation: Thirty years later, an independent review assesses outcomes. Pensions rise if decisions are judged to have helped future people, fall if not.
Challenges: How do we measure long-term impact? Who judges after decades? Might members game the metrics or focus on the near term? And would existing systems tolerate such a body?
Significance for longtermism: John reframes longtermism as institutional design, not just ethics. The Futures Assembly hybridizes deliberative democracy (citizens’ assembly), incentives (pension adjustments), and longtermist aims (explicit future focus).
What Po.lis offers
Po.lis is an open-source platform for scalable public deliberation.
Distinctive features:
- Opinion clustering: uses statistical models to group participants into clusters of agreement/disagreement, visually mapping where society converges or diverges.
- Consensus surfacing: highlights statements supported across clusters, which can guide coalition-building.
- Scalability: enables thousands of voices to be aggregated efficiently, beyond the size of a typical citizens’ assembly.
- Transparency and feedback: participants see where they align relative to others, building legitimacy and reducing polarization.
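A minimal sketch of the clustering and consensus-surfacing described above (Po.lis itself uses dimensionality reduction plus k-means over the full vote matrix; this hand-rolled nearest-centroid pass, with made-up votes, is only meant to illustrate the mechanics):

```python
# Toy opinion-clustering sketch with invented data, not Po.lis's actual code.
# Rows are participants, columns are statements: 1 agree, -1 disagree, 0 pass.
votes = [
    [ 1,  1, -1, -1, 1],   # participants 0-2 form one opinion group
    [ 1,  1, -1, -1, 1],
    [ 1,  1, -1,  0, 1],
    [-1, -1,  1,  1, 1],   # participants 3-5 form the opposing group
    [-1, -1,  1,  1, 1],
    [-1,  0,  1,  1, 1],
]

def dist(u, v):
    """Squared Euclidean distance between two vote rows."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

# Seed one centroid per visibly distinct participant (a stand-in for proper
# k-means initialisation) and assign everyone to the nearest centroid.
centroids = [votes[0], votes[3]]
clusters = [min(range(2), key=lambda k: dist(row, centroids[k]))
            for row in votes]

def cluster_mean(k, j):
    """Average vote on statement j within cluster k."""
    col = [votes[i][j] for i in range(len(votes)) if clusters[i] == k]
    return sum(col) / len(col)

# Consensus surfacing: statements every cluster endorses on average.
consensus = [j for j in range(len(votes[0]))
             if all(cluster_mean(k, j) > 0.5 for k in range(2))]

print(clusters)   # [0, 0, 0, 1, 1, 1]
print(consensus)  # [4] -- the one statement both groups endorse
```

Even this toy version shows the two outputs John’s proposal could use: a map of opinion groups among the citizenry, and the cross-cluster consensus statements that survive disagreement.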
Connection to John’s proposal
- Enhancement of epistemic quality:
  - Futures Assembly members could use Po.lis to map the distribution of long-term concerns among the broader citizenry.
  - This would counteract the risk of assembly members projecting their own idiosyncratic values.
- Retrospective evaluation:
  - The evaluators, decades later, could also use Po.lis or similar platforms to gauge whether citizens believe long-term interests were advanced.
- Legitimacy and buy-in:
  - Adding a technological layer like Po.lis makes the Futures Assembly less elitist, more democratic, and more transparent.
  - It helps avoid the critique that retrospective accountability depends on a small set of opaque evaluators.
In sum, an additional benefit of John’s retrospective accountability model could be the integration of emerging technological mechanisms of large-scale deliberation, such as the Po.lis platform. Po.lis allows thousands of participants to submit and vote on statements, automatically clustering opinion into patterns of agreement and division. Applied to the Futures Assembly, such tools could surface what citizens today judge to be most salient long-term risks and opportunities, and later help evaluators understand how public values evolve over decades. By providing a structured map of consensus and contention, Po.lis would enhance both the epistemic quality and democratic legitimacy of the Futures Assembly’s work. Of course, such technologies carry their own limitations—digital participation biases, or a tendency toward short-term framings—but when combined with retrospective accountability they may help address the perennial challenge of clustering and representing future-oriented concerns in a pluralistic society.
Last note on art and long-termism
Reading John on institutional design, and on what may or may not last into the future, made me reflect on art, perhaps in part because I am a playwright and artist amongst other things. Longtermism has not taken art much into account, nor, from my reading, do these essays draw on the philosophy of art. I offer some reflections.
Longtermist reasoning often treats the future as a very large container for welfare. But some goods look constitutive of a life worth living rather than fungible inputs—art being a paradigmatic example. On Ruth Chang’s view, commitments to some constitutive goods may be on a par with aggregate welfare improvements: neither better nor worse nor equal on a single scale, but normatively decisive when we commit to them. Temkin’s pluralism likewise resists collapsing such goods into a monistic maximand. If so, EV-dominant longtermism misdescribes choices where survival trades off against meaning.
1. The value of art and creativity
Human beings, across time and culture, consistently invest extraordinary effort and resources into art. From cave paintings to contemporary symphonies, from indigenous dance to digital media, creative expression is one of the most persistent features of human life. Measuring its value is difficult. Economists can estimate the art market; psychologists can show effects on well-being; philosophers can argue about intrinsic aesthetic goods. But none of these capture the whole. What is clear is that art matters — it is deeply embedded in how people make meaning, how cultures transmit identity, and how communities bind themselves together. If longtermism is concerned with what makes the future worth preserving, then art and creativity are part of that package. To ignore them risks reducing the future to survival stripped of its richness.
2. Art’s long survival
Unlike many political institutions or economic systems, works of art often endure beyond their original context.
- Cave art in Chauvet and Lascaux (France): tens of thousands of years old, still legible and moving, while all the social institutions of their makers are gone.
- The Nazca Lines (Peru): vast geoglyphs, etched between 500 BCE and 500 CE, that remain striking long after the societies that made them faded.
- The Uffington White Horse and the Cerne Abbas Giant (Britain): chalk figures cut into hillsides, dating back over a millennium (and possibly longer), have been maintained through ritualistic care that outlasted monarchies and governments.
- Medieval cathedrals, classical sculptures, epic poems, and musical scores: cultural legacies that anchor identity across centuries.
Artworks endure not only physically but emotionally: people still feel awe before Paleolithic handprints, just as they do before Renaissance frescoes. This persistence makes art one of humanity’s most reliable time capsules. One can argue that politics and states rise and fall; artistic expression endures.
3. A Longtermist Art Institute
If art is both inherently valuable and unusually durable, longtermists should consider institutions that take art seriously. Such an institute could serve two functions:
i. Valuing art over the long term.
Just as we invest in archives, seed banks, and scientific knowledge preservation, we might deliberately preserve and create art with an eye to millennia. The Long Now Foundation points in this direction with its 10,000-Year Clock, the Rosetta Disk, and Longplayer: works that are part engineering, part aesthetics, designed to endure and provoke. A dedicated longtermist art institute could sponsor similar projects — artworks conceived with built-in longevity, redundancy, and legibility, capable of surviving shifts in language, culture, and technology. It could fund rituals and institutions that maintain and refresh existing works, recognizing that stewardship is as important as creation.
ii. Igniting imagination for longtermism itself.
Art does more than endure; it inspires. Numbers and probability tables rarely stir the heart. Stories, images, and sounds do. An institute for longtermist art could commission works that dramatize deep time, future risks, or visions of flourishing worlds. Exhibitions, performances, and public art could make the abstract tangible. This is what the Long Now Foundation already experiments with: using art and culture to widen our time horizon. By investing in art, longtermists could generate enthusiasm, creativity, and new audiences for ideas that otherwise remain abstract or forbidding.
Conclusion on art longtermism
Longtermism often focuses on technical risk reduction, policy, and philanthropy. These are vital. But the human future is not only a question of survival; it is also a question of meaning. Art, one of our most durable creations, provides a bridge across time, a signal that we valued beauty, expression, and imagination. Longtermists should therefore invest not only in science and governance but in cultural institutions that preserve, create, and communicate art with long horizons in mind. A Longtermist Art Institute would signal that the future we are working to secure is not barren or merely safe but worth inhabiting — rich with creativity, inspiration, and the same timeless human drive to make and to share.
There is space for more than what Long Now offers, although I think Long Now offers a fascinating perspective. Perhaps this is not the expected response to a set of academic essays, yet I think it is a meaningful avenue for longtermist thinkers to explore.
Finally, I’d like to offer some reflections on what the philosophy of art might give longtermists.
1. The Value of Art
- Intrinsic value vs. instrumental value
- Philosophers debate whether art is valuable in itself (as an aesthetic good) or valuable for its effects (e.g. pleasure, moral education, social cohesion).
- Longtermism can lean on this: even if survival and well-being are secured, without art the future would lack an important intrinsic dimension of value.
- “Art for art’s sake” (19th century slogan, but with Kantian roots): suggests that some things are valuable simply because they are beautiful or imaginative, regardless of practical use.
2. Art and Human Flourishing
- Aristotle & flourishing (eudaimonia): Art and tragedy provide catharsis, education, and emotional attunement — part of living a good human life.
- Martha Nussbaum: literature cultivates empathy and moral imagination, essential for political life. Longtermists can see art as training the moral imagination across generations.
3. Art as Meaning-Making
- Heidegger: art “sets truth to work”; it discloses worlds, not just objects. For longtermists, art may help disclose possible futures — making otherwise abstract futures experientially real.
- Susanne Langer: art is a symbol system for expressing what is otherwise inexpressible (feelings, intuitions, metaphysical questions). That’s a resource for grappling with uncertainty about the very long term.
4. Durability and Legacy
- Walter Benjamin (“The Work of Art in the Age of Mechanical Reproduction”): art has an “aura” tied to uniqueness and tradition. Reproducibility changes how we relate to art, but also allows durability and distribution. Longtermists might exploit this tension — balancing unique monuments (like Long Now’s Clock) with mass-distributed artifacts (digital or ceramic archives).
- Arthur Danto: art is defined by being placed in an interpretive “artworld.” Longtermism has to consider whether our cultural frameworks will still exist to interpret works — or whether we should design art to be legible outside our current “artworld.”
Philosophy of art gives three key arguments longtermists can adopt:
- Constitutive value: Art is part of what makes human life good (future lives without art would be impoverished).
- Epistemic/expressive role: Art helps us perceive and imagine long time horizons, risks, and possibilities in ways analytic philosophy or statistics cannot.
- Preservation and transmission: Art has shown unusual durability, but it needs institutions (rituals, archives, interpretive communities) to carry meaning forward.
Philosophy of art reminds us that art is not an optional luxury but a constitutive good: it shapes how we perceive meaning, disclose worlds, and live flourishing lives. Thinkers from Aristotle to Heidegger, Nussbaum, and Dewey stress art’s role in moral imagination, empathy, and future-orientation. For longtermism, this means that securing art across time is not secondary but central to preserving value. Moreover, debates in aesthetics about preservation, interpretation, and cultural practices highlight the institutional scaffolding needed to ensure art remains legible and alive for future generations. Taken together, the philosophy of art suggests that art is both a good worth preserving in itself and a medium through which longtermist ideas can be rendered emotionally and imaginatively compelling.
If longtermism is about making the future not only safe but worth living in, then art must be part of the picture.
Appendix note, counterarguments and references
Counterarguments and Replies
1. Vagueness Objection
Counter: What looks like parity or intransitivity is just vagueness or incomplete information; with better precision, transitivity is restored.
Reply: Chang and Temkin distinguish vagueness from genuine structure. If more detail or precision does not resolve the comparison, then we may indeed face irreducible parity or intransitivity, not just epistemic fog.
2. Irrationality / Money-Pump Concern
Counter: If “better than” is intransitive, agents could be cycled and exploited (money-pumped). That shows intransitivity is irrational.
Reply: Temkin and others show that intransitive but reasonable preferences can be insulated by decision procedures (e.g., satisficing floors, constraints, regret minimization). Cycles in value relations don’t automatically yield cycles in choice if agents adopt rational protections.
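The worry, and one such protection, can be made concrete with a toy simulation. The preference cycle and the "never revisit a past holding" rule below are illustrative stand-ins, not Temkin's own proposals: a naive agent with cyclic preferences is pumped indefinitely, while the same preferences plus a simple history constraint bound the loss.

```python
# Toy money pump: cyclic pairwise preferences A > B > C > A, and an
# exploiter who always offers the option the agent currently prefers,
# charging a small fee per trade.
better = {"B": "A", "C": "B", "A": "C"}  # better[x] is strictly preferred to x

def naive_agent(holding, offer, history):
    # Accepts any strictly preferred offer, ignoring history.
    return better[holding] == offer

def protected_agent(holding, offer, history):
    # Same preferences, but never trades back into a previously held
    # option -- one simple "rational protection" against cycling.
    return better[holding] == offer and offer not in history

def run(agent, rounds=9, fee=1):
    holding, money, history = "C", 0, {"C"}
    for _ in range(rounds):
        offer = better[holding]
        if agent(holding, offer, history):
            holding, money = offer, money - fee
            history.add(offer)
    return money

print(run(naive_agent))      # -9: pumped every round
print(run(protected_agent))  # -2: two trades, then the agent refuses
```

The point survives the toy framing: a cycle in the value relation only produces a cycle in behaviour if the agent's choice procedure mechanically follows pairwise preference.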
3. Paralysis / Indecision Worry
Counter: If many options are incomparable or “on a par,” longtermism risks paralysis — no action could be justified.
Reply: Chang’s framework makes clear that parity cases invite normative agency and commitment: we can still act by affirming plural values (justice, stewardship, fairness) rather than collapsing into indecision. Parity broadens rational action, it doesn’t block it.
4. Conservatism Objection
Counter: Pluralism and parity just license status quo bias — rejecting expected-value maximization lets us avoid bold long-term bets.
Reply: On the contrary, parity justifies bounded diversification: still investing in long-horizon risks, but without letting tiny probabilities swamp all other goods. It balances ambition with robustness, not conservatism with paralysis.
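One hypothetical way to make "bounded diversification" concrete: zero out probabilities below a chosen threshold before allocating, so tiny-probability astronomical payoffs cannot swamp the portfolio. The options, numbers, and the 1e-6 cutoff below are illustrative assumptions of mine, not a worked-out decision theory.

```python
# A toy contrast between pure expected-value maximization and a
# "bounded diversification" rule that ignores probabilities below a
# small threshold before allocating across the remaining options.
options = {
    "pandemic preparedness": (0.01, 10_000),
    "near-term poverty relief": (0.9, 100),
    "speculative astronomical bet": (1e-9, 1e12),
}

def expected_value(p, v):
    return p * v

# Pure EV maximization: the tiny-probability astronomical payoff wins.
ev_pick = max(options, key=lambda k: expected_value(*options[k]))

# Bounded rule: drop sub-threshold probabilities, then weight the rest
# by expected value rather than betting everything on one option.
THRESHOLD = 1e-6
kept = {k: expected_value(p, v) for k, (p, v) in options.items() if p >= THRESHOLD}
portfolio = {k: ev / sum(kept.values()) for k, ev in kept.items()}

print(ev_pick)
print(portfolio)
```

The bounded rule still invests in long-horizon risk (pandemic preparedness dominates the portfolio) but refuses to let a one-in-a-billion bet crowd out every other good, which is the sense in which parity-friendly diversification is ambitious rather than conservative.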
Note: If this essay does win any prize, I’d like to offer in advance that the money goes directly to charity. To be discussed on what is tax and bureaucratically efficient.
AI, mostly via ChatGPT 5, has helped shape this essay. It has also helped me find references and sources. Errors may well remain in my thinking and in the essay, as I am nowhere close to formal training in philosophy.
Temkin example (see podcast)
1. The setup: forcing people to imagine it happening to themselves or their loved ones
Temkin explicitly frames the thought experiment so that the audience imagines it happening either to themselves or to someone they love. This removes the “academic detachment” and forces a visceral, intuitive judgement.
2. The first scenario (“Door A”): 15 mosquito bites per month + 2 years of extreme torture
- The baseline is a very long life with a nuisance: 15 mosquito bites per month forever.
- Added to this is a single episode: two years of the most excruciating pain imaginable (graphic examples of torture).
- After the two years, a magic pill erases your memory of the torture, but you still endured it.
- This is meant to be a qualitatively catastrophic experience added to a minor ongoing annoyance.
3. The second scenario (“Door B”): 16 mosquito bites per month, no torture
- Same very long life.
- Instead of torture, you simply have one extra mosquito bite per month.
- This is intended as a trivial worsening of the baseline nuisance.
4. Audience responses to the two questions
- When asked which they’d pick, almost everyone says they would take Door B (the 16 bites) rather than endure two years of torture plus 15 bites.
- This is robust across hundreds of audiences worldwide.
- People also almost universally endorse, in other contexts, “slightly more intense pain for a much shorter duration” as preferable to “slightly less intense pain but vastly longer.”
5. The generalisation: the “stepwise” choice problem
- Temkin then shows the same logic across a series of comparisons:
- Two years of very intense torture vs. four years of slightly less intense torture. People say two years is better.
- Four years vs. eight years of somewhat less intense torture. People say four years is better.
- Eight years vs. sixteen years of still milder torture. People say eight years is better.
- Continue stepping down the intensity and increasing duration until the “pain” becomes trivial (like mosquito bites).
- At each step, everyone agrees A is better than B.
6. The paradox of transitivity
- If “better than” is a transitive relation, then chaining A > B, B > C, C > D, and so on all the way down, A must be better than the final outcome.
- But the first scenario (2 years of torture + 15 bites) is judged worse than the final scenario (16 mosquito bites a month and no torture).
- This violates transitivity: our pairwise judgements lead to a conclusion we reject.
7. Why Temkin thinks this isn’t a cognitive mistake
- Psychologists like Tversky have shown people often make intransitive choices because of cognitive biases.
- Temkin argues that here, our intuitions aren’t mistakes: we’re applying different principles to different comparisons.
- When pains are similar in kind (“aggregation cases”), we add up the quantity/duration (“additive aggregation”).
- When one pain is qualitatively catastrophic and the other trivial (“anti-aggregation cases”), we reject additive aggregation—no number of mosquito bites could outweigh two years of torture.
8. His philosophical point
- Our values aren’t governed by a single, consistent additive principle.
- Different “comparison contexts” activate different moral principles (aggregation vs. anti-aggregation).
- That’s why intransitivity arises: the principles shift as the scenario changes.
- This challenges the neat transitivity axiom underpinning utilitarian and effective altruist cost-benefit reasoning.
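The structure of the paradox can be checked mechanically. The sketch below encodes the two principles as a toy comparison rule; the intensities, durations, and thresholds are illustrative stand-ins for Temkin's qualitative categories, not his numbers.

```python
# Illustrative spectrum: (intensity, duration in years), stepping from
# intense torture down to a lifetime of mosquito-bite-level nuisance.
spectrum = [(100, 2), (60, 8), (40, 16), (20, 40), (1, 10_000)]
CATASTROPHIC, TRIVIAL = 50, 5  # hypothetical intensity thresholds

def worse(a, b):
    """True if experience a is judged worse than experience b."""
    (ia, da), (ib, db) = a, b
    # Anti-aggregation: a qualitatively catastrophic pain is worse than
    # any trivial one, however long the trivial one lasts.
    if ia >= CATASTROPHIC and ib <= TRIVIAL:
        return True
    if ib >= CATASTROPHIC and ia <= TRIVIAL:
        return False
    # Pains similar in kind: additive aggregation (intensity * duration).
    return ia * da > ib * db

# Pairwise down the chain: each longer, milder neighbour is judged worse,
# i.e. the shorter, intenser option is "better" at every step.
chain = [worse(spectrum[i + 1], spectrum[i]) for i in range(len(spectrum) - 1)]
print(chain)  # [True, True, True, True]

# Transitivity would then require the first option to beat the last, but
# the direct endpoint comparison judges the torture worse than the bites.
print(worse(spectrum[0], spectrum[-1]))  # True
```

Because adjacent options are always similar in kind, the additive principle governs every step of the chain, while the anti-aggregation principle governs the endpoint comparison; the intransitivity falls out of that shift in governing principle, exactly as the section above describes.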
References
Selective references.
Temkin
- Temkin, Larry S. 1987. “Intransitivity and the Mere Addition Paradox.” Philosophy & Public Affairs 16(2): 138–187.
- Temkin, Larry S. 1996. “A Continuum Argument for Intransitivity.” Philosophy & Public Affairs 25(3): 175–210. doi:10.1111/j.1088-4963.1996.tb00039.x.
- Temkin, Larry S. 1993. Inequality. New York: Oxford University Press.
- Temkin, Larry S. 2012. Rethinking the Good: Moral Ideals and the Nature of Practical Reasoning. New York: Oxford University Press. doi:10.1093/acprof:oso/9780199759446.001.0001.
Chang
- Chang, Ruth, ed. 1997. Incommensurability, Incomparability, and Practical Reason. Cambridge, MA: Harvard University Press.
- Chang, Ruth. 2002. Making Comparisons Count. New York: Routledge. doi:10.4324/9781315054391.
- Chang, Ruth. 2002. “The Possibility of Parity.” Ethics 112(4): 659–688. doi:10.1086/339673.
- Chang, Ruth. 2005. “Parity, Interval Value, and Choice.” Ethics 115(2): 331–350. doi:10.1086/426307.
- Chang, Ruth. 2013. “Grounding Practical Normativity: Going Hybrid.” Philosophical Studies 164(1): 163–187. doi:10.1007/s11098-013-0092-z.
- Chang, Ruth. 2016. “Parity, Imprecise Comparability and the Repugnant Conclusion.” Theoria 82(2): 182–214. doi:10.1111/theo.12096.
- Chang, Ruth. 2014. “How to Make Hard Choices.” TED Talk.
Handfield
- Handfield, Toby. 2016. “Essentially Comparative Value Does Not Threaten Transitivity.” Thought 5(1): 3–12. doi:10.1002/tht3.188. (Online first in 2015.)
Ben Yeoh Chats podcasts (Larry Temkin; Ruth Chang).
Edits v1: formatting fixes, and added the Temkin example at the end for a fuller explanation.