Human Empowerment versus the Longtermist Imperium?

By Jackson Wagner @ 2025-10-21T10:24 (+18)

Intro

Longtermism is perceived by some of its critics as having an extremist, potentially totalitarian nature; of seeking to slow down progress in a way that contributes to societal stagnation and rigidity.  Unfortunately, longtermism’s loudest critics have not been very coherent or detailed in their critiques, offering only vague paeans to techno-optimism, ominous prophecies about the antichrist, and other assorted vibes.  Thus, it falls to us (in particular, to me) to steelman their critiques for them.

In this essay, I attempt:

Finally, note that this essay was written following a “hits-based” philosophy.  Intuitively, the gap between a hard-hitting, piping-hot take and a lackluster dud of a critique might seem like a 5x - 10x difference in subjective quality.  However, I expect that in reality, the difference in impact between my most and least successful critiques of longtermism could be very large (eg, perhaps 10,000x or more).  Thus, in an attempt to maximize my odds of winning the contest (that is, improving the wellbeing of trillions of digital minds in my future light-cone), I have written a somewhat meandering essay that jumps around between a number of related critiques.  With luck, my best ideas will be able to more than repay the time spent considering all the others.
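(For the curious, the logic of hits-based writing falls out of heavy-tailed impact distributions.  Here is a minimal Python sketch -- the log-normal distribution and its parameters are my own arbitrary assumptions, purely for illustration -- showing how, with enough spread, the single best critique tends to account for most of the total impact.)

```python
import numpy as np

# Toy sketch of hits-based logic. Assumption (not from the essay):
# each critique's impact is an independent draw from a heavy-tailed
# log-normal distribution with a large sigma.
rng = np.random.default_rng(0)
n_critiques = 8
impacts = rng.lognormal(mean=0.0, sigma=3.0, size=n_critiques)

# With heavy tails, best/worst ratios in the thousands are routine,
# and the best single draw dominates the sum.
print(f"best / worst ratio: {impacts.max() / impacts.min():,.0f}x")
print(f"best critique's share of total impact: {impacts.max() / impacts.sum():.0%}")
```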

The Long Arm of “Cultural Longtermism”

Longtermism as the latest incarnation of inexorably rising societal risk-aversion

Longtermism isn’t the first time that people have come up with the idea of a grand civilizational project to collectively mitigate catastrophic risks.  When authors are imagining “longtermist states” or “longtermist political philosophy” I think it would be helpful to situate longtermism in the context of other grand international projects, both to learn from their successes and attempt to avoid their failures.

Of course, through history there have been innumerable political alliance systems (like the "Metternich System" preserving conservative monarchies in the 1800s, or the competing alliance networks of the US and USSR during the Cold War, etc).  And there have been many individual social movements and institutions that the EA movement has compared itself to, such as the Fabian Society or Mohism.

But in some ways, I think the closest prior analogues to the grand aspirations of modern longtermism -- successful movements organized against perceived existential risks, which fundamentally altered the course of civilization and remain defining forces in our world today -- are the following: first, the post-WW2 liberal international order, organized against the perceived existential risks of nuclear war and right-wing militarism; and second, the environmentalist and anti-nuclear movements of the 1970s, organized against the risk of environmental collapse.

Longtermism and its discontents

Politically right-wing critics of longtermism often (somewhat rightly IMO) see the idea of longtermism as an extension of one or both of these prior movements.  This might not seem so bad -- liberalism, after all, is perhaps the most successful ideology of all time, having overseen an unprecedented era of human thriving.  But critics perceive a dark side to longtermism, and to the “proto-longtermist” anti-x-risk movements that preceded it:

These criticisms might seem far-fetched -- let’s be real, who could be opposed to spending just 1% more on mitigating existential risks, such an obviously important, neglected, and tractable cause?  But critics perceive a creeping ratchet, a never-ending series of demands (after all, who could be opposed to spending just 1% more on nuclear-reactor safety?  And then 1% more after that?) that slowly eat away at human freedom.  In the limit, critics fear that longtermism would create the very existential risk (in the form of permanent technological stagnation enforced by coordinated global governance) that it was originally designed to stop.

Personally, I think it’s reasonable to place the phenomenon of longtermism in a historical context where, just as individuals spend a higher and higher percentage of their income on healthcare as they get richer, societies likely tend to become more and more risk averse as they build wealth.  This has many positive effects (liberalism, environmental conservation, and mitigation of existential risk are good things!), and is probably positive on net, but also has some significant downsides.  One might call this trend a “rising tide of risk aversion” or perhaps “cultural longtermism”.  Critics are opposed to (parts of) this rising tide, such as increasingly strict regulations across many areas of technology and society, and they see longtermism as a force that, whatever its specific effects on appropriately minimizing legitimate risks, also has the general effect of encouraging / legitimizing broader cultural risk aversion and its accompanying risks of stagnation and globalized centralization.

I think that individual longtermists should be more conscious of their place in this broader societal trend.  In some places, increased societal risk aversion is exactly the right move.  But in other places, as critics contend, the rising tide of “cultural longtermism” could be causing huge distortions and derangements that hamper civilizational progress, and might even amount to a substantial existential risk in itself.  Longtermists should take care to balance these risks!  And, more specifically, although “swimming with the tide” often makes for good arguments about the tractability / political feasibility of one cause versus another (say, x-risk mitigation versus research into experimental transhumanist goods), the same logic is an underappreciated argument for the neglectedness and importance of causes that “swim against” the cultural tide.

Are we already living in a world of oppressively burdensome longtermism?  

In Chapter 19 of Essays on Longtermism, Owen Cotton-Barratt and Rose Hadshar imagine a spectrum from today’s world (where, they imply, only a very small percent of GDP is spent on longtermist goods), to a “partially longtermist society” that devotes 2% - 10% of its resources towards these goals, to an “implausibly strict longtermist society” and/or “strict longtermist state” whose ruling class engineers all of society to maximize the amount of resources devoted to longtermism (potentially directing “somewhere between 10% and 85%” of its resources to that end).  In Chapter 18, Hilary Greaves & Christian Tarsney also contrast a “minimal” 2% allocation with a more extreme scenario where 71% of GDP is directed to longtermist goods.  Both essays conclude that their most extreme scenarios are politically infeasible -- the population of such a “strict longtermist state” would never accept such extreme sacrifices for the sake of the distant future.

But are such extreme sacrifices really so implausible?  Indeed, might we already be making such sacrifices, without even noticing?

The Strict Culturally-Longtermist State

It’s true that, in the grand scheme of things, the world isn’t spending much on AI alignment research or pandemic prevention.  But how much are we already paying for risk-mitigation policies across society?  Consider a few of the usual suspects: ever-more-onerous clinical-trial requirements that slow medical progress; the regulatory strangulation of nuclear power; NIMBY constraints that make housing scarce and expensive; entire areas of research that have become effectively taboo.

Tally all these up, and, relative to an idealized pro-growth counterfactual world, we might already be living in a world where we’re sacrificing 50%+ of our counterfactual wealth at the altar of societal risk-aversion!
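(To see how a number that large could arise without anyone voting for it, here is a minimal compound-growth sketch.  The growth rates are illustrative assumptions, not estimates; the point is just that a one-percentage-point annual drag, sustained for decades, quietly compounds into roughly half of counterfactual wealth.)

```python
# Back-of-the-envelope sketch with assumed (not estimated) numbers:
# compare 70 years of compounding at a counterfactual pro-growth
# rate versus the same period with a modest regulatory drag.
years = 70
counterfactual_growth = 0.03   # hypothetical unfettered growth rate
dragged_growth = 0.02          # hypothetical growth under pervasive risk-aversion

counterfactual_wealth = (1 + counterfactual_growth) ** years
actual_wealth = (1 + dragged_growth) ** years
shortfall = 1 - actual_wealth / counterfactual_wealth

print(f"wealth sacrificed vs counterfactual: {shortfall:.0%}")  # roughly 50%
```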

Admittedly, no modern longtermist ever asked for clinical trials to be made more onerous, or for housing to become more difficult to build.  Nor do most of these costs even seem plausibly related to reducing the originally-targeted risks of nuclear war, environmental collapse, and right-wing militarism!  But (as the proponents of public-choice theory endlessly remind us) such unintended consequences are a fact of life when trying to make policy in a complex world full of uncertainty and competing influences!

Invisible Graveyards Rule Everything Around Me

Aside from the fact that the various economic & health losses above are all unintended consequences, they have one other thing in common.  The losses themselves are largely hidden; they are invisible graveyards.

In Chapters 18 and 19 of “Essays on Longtermism”, the idea of a “strict longtermist state” is dismissed on the grounds that the population would refuse to make such direct and obvious sacrifices as being overworked and seeing 70% - 80% of their income taxed and directed towards distant longtermist goals.  But this strikes me as an unrealistic and naive portrayal of how a strict longtermist state would really be architected.  Just as tax increases are more politically palatable when levied indirectly (as tariffs, corporate / VAT taxes, seigniorage, etc., rather than directly on citizens as income / payroll / wealth / sales taxes), I imagine that the biggest costs imposed by a strict longtermist state on its citizens would be levied indirectly -- more and more “invisible graveyards” of foregone progress, foregone growth, restricted technologies, curtailed human freedom, and so forth.

This argument suggests that a strict longtermist state is more politically feasible than it might appear, and thus lends legitimacy to fears that such a state may arise.

The Elusive Counterfactual

On the other hand!  So far I’ve been comparing the “invisible graveyards” created by risk-aversion, to an idealized future where unfettered technological progress leads to a whiz-bang utopia of abundance and human liberty.  But perhaps if the forces of liberalism and anti-nuclear activism had been weaker, we would not in fact be sitting in a nuclear-powered space station around the moon right now.  Maybe we would be sitting in an irradiated crater in the aftermath of a global thermonuclear war.

Pictured: the magnificent Concorde, supersonic martyr of the lost glorious retrofutu-- oops, wait, that’s actually a prototype nuclear bomber forged amid the gnashing madness and suicidal paranoia of the Cold War…

Obviously a devastated wasteland would have very low GDP, flipping the sign on everything I said about the costs of societal risk-aversion.

The difficulty of figuring out the right counterfactual, added on top of the “base rate tennis” of trying to figure out which unintended costs should count as consequences of longtermist-style thinking, and the fact (mentioned by Greaves & Tarsney) that many longtermist interventions have “co-benefits” for present generations, makes it impossible to precisely work out how much humanity is currently sacrificing for (or already profiting from) long-term x-risk reduction efforts.  So, while the longtermist can often fairly plead “look at this specific project -- it’s clearly underfunded, clearly worthwhile, unlikely to have dramatic unintended side-effects!”, and may further argue that the overall cost-benefit balance of societal risk aversion has been positive, critics may just as reasonably argue that the overall effects have been negative.  Ultimately, adjudicating this debate is a problem that can only be left to future generations.

The Streetlight Effect Warps Modern Longtermism Towards an Exclusive Focus On X-risk Mitigation

“For tractability, let us restrict the question to investment in existential safety” -- Greaves & Tarsney, Chapter 18

“Enough with this slander by association!!”, I hear you cry out. “Begone with your low-decoupling insistence that the important work of mitigating narrow, specific x-risks necessarily requires smothering society under a blanket of across-the-board safetyism!  Where is your critique of what we, actual practicing longtermists, are actually doing today?!”

Indeed, I hear you.  So -- while I think the cultural-longtermism critique has real merit -- let’s move on.

There are lots of reasons why x-risk mitigation is the centerpiece of applied longtermism today.  Here are a few:

Of course, other things equal, it is better to work on problems that are amenable to rigorous analysis than problems where clear analysis is harder.  But other things aren’t equal, and at this point, I fear that x-risk’s amenability-to-analysis is fueling an increasingly unhealthy lopsidedness within longtermism.

EAs are all too familiar with how streetlight effects can distort global priorities

Here are some facts which are sadly familiar to many longtermists:

Some of the discrepancy here comes from value differences between ordinary people and longtermists (with longtermists placing special emphasis on extinction risks), but even from a non-longtermist perspective, these seem like severe misallocations of societal resources.  Why does this happen?

I think what’s going on is that a variety of “streetlight effects” steer effort towards predictable, known, quantifiable threats and away from more “speculative” (though larger) risks:

All of those bullet points are essentially tractability arguments for why you should work on solving climate change instead of AI risk.  Why bother with AI at all?  It’s so intractable!

Yet despite all this, I believe AI risk is the far better thing to work on, for reasons of importance and neglectedness.

Nevertheless, the streetlight effects are calling from inside the house

I believe there’s a similar dynamic going on within longtermism:

It’s not that nobody has ever thought of these risks, or that nobody is working on them. Many people are; that's great! But the fact that they’re less amenable to analysis still leaves them systematically neglected.

In my view, these unbalanced priorities don’t just mean that longtermism risks doing slightly less good than an idealized version of itself could do.  In the same way that slight changes in differential technological progress can lead to quite different path-dependent futures, I think that individually small streetlight effects can compound (as in the global-warming-versus-AI case) into quite large differences at the group level.  If we badly misprioritize flourishing futures, indirect interventions, and non-extinction x-risks, we risk leaving incredibly valuable actions on the table.

It’s also possible that unbalanced priorities in longtermism could risk actively causing harm.

Total Hedonic Utilitarianism and the Denial of Death

“Sure, he talks a good game about freedom when out of power, but once he’s in – bam! Everyone’s enslaved in the human-flourishing mines.” -- Slate Star Codex blog tagline

Yes, yes, of course maximizing utilitarianism gives off totalitarian vibes

If I were a total hack, here’s an easy critique I could make, an all-too-common extension of the cultural-longtermist critique:

“Wow, get this -- Greaves & Tarsney contrast ‘minimal’ versus ‘expansive’ visions of longtermism.  But even their ‘minimal’ version isn’t nearly minimal enough to avoid being oppressively burdensome and creepily controlling!  Look at the sweeping aims that even the seemingly narrow goal of x-risk-mitigation requires!”

But actually, Greaves & Tarsney make the fair point that it isn’t longtermism in particular that creates these totalitarian vibes.  The real problem is just the fundamental nature of consequentialism.  Maximizing consequentialism is enough to make anything seem totalitarian!  As they write:

“If the implications of an axiological longtermist thesis together with maximizing consequentialism strike one as overly demanding, then (absent some other reason for doubting the axiological longtermist claim) the natural response is to reject maximizing consequentialism, not to revise one’s axiology or one’s empirical beliefs.”

(For more on the perils of consequentialism, see Joe Carlsmith’s critique of Yudkowsky’s concept of the fragility of value, and Holden Karnofsky’s post about EA & maximization.)

Of course, one response to this defense would be to note that in practice, consequentialism seems like a pretty large part of longtermism, so the question of whether totalitarian vibes are coming from the consequentialist part of longtermism, or the other parts of longtermism, seems irrelevant when all the parts usually come bundled together.

But I actually believe there are notable totalitarian, anti-human vibes that come from elsewhere, and deserve special consideration.  

The “total hedonic” part of total hedonic utilitarianism creates its own, separate problems.

If you talk to longtermists, most of them will say that they’re not strict total hedonic utilitarians.  Am I then about to attack a strawman?  

I don’t think so -- despite everyone disavowing it, total hedonic utilitarianism seems to pop up all the time in EA analyses (including not just longtermism, but also animal welfare, global development, and more), often as an unstated background assumption.  I suspect this happens because total hedonic utilitarianism makes it easier to analyze problems, by providing a starting framework & a simplifying assumption.  The streetlight effect strikes again!

Now, of course, “all models are wrong, some models are useful”.  Simplifying assumptions are often necessary!  But the ubiquitous background assumption of total hedonic utilitarianism is, IMO, corrosive to individual liberty and human empowerment.

Hedonic utilitarianism dissolves the value of the unique human individual into atomized, interchangeable qualia-moments

In the standard way of longtermist reckoning, 10 people living 40 happy years is the same number of QALYs, all else equal, as 5 people living 80 happy years (10 × 40 = 400 = 5 × 80).

But personally, I would like to live 80 years and not 40 years!

One can of course stipulate that this is already accounted for in the calculation -- the disutility of my outrage at being cut down in my prime could be exactly offset, in the thought experiment, via the provision of other goods.  Or perhaps there are other ways of trying to resolve the dilemma.[1]

But regardless of the defensive gymnastics that can be performed here, in practice, foregrounding interchangeable units of qualia means dissolving value downwards -- from units of unique human individuals to infinitesimal atom-like moments of qualia-experience (akin to what some call “empty individualism”).

Furthermore, the vast diversity of potential qualia-experiences (the landscape of which we can hardly begin to imagine) is then, for analytical convenience, projected down onto a one-dimensional spectrum from positive affective valence to negative affective valence.  This second simplifying step (which again, few would completely endorse, but many implicitly rely on) further contributes to undermining individuality, by compressing the complex landscape of human aesthetic values (see eg “The Nietzschean Challenge to Effective Altruism”, or simply introspect on your own motivations, values, and feelings) into a homogeneous hedonic mush more amenable to mathematical reasoning.

Thus, in the standard version of total hedonic utilitarianism, there is no difference between individuals, no inherent notion of fundamental human rights or freedoms (perhaps instead you should content yourself with a kind of standard UBI of positively-valenced experience?), no distinguishing characteristics whatsoever -- just a kind of Rawlsian tendency towards communistic redistribution rather than traditional property-ownership and inequality.  It is only a simplifying assumption, of course; few would mistake it for ultimate reality.  But nevertheless, the structure of the model threatens to reach out (in the form of flawed / misinterpreted analyses) and begin to instantiate its troublesome biases in reality.

Hedonic utilitarianism subsumes the value of the human individual into the glory of the immortal leviathan

Counterintuitively, while the deaths of individuals become irrelevant, the survival of overall civilization (ie, of the state) becomes paramount in the longtermist framework.  This is the whole logic of mitigating x-risks -- the collective life of human civilization in aggregate is the highest value.  Consequently, power and moral value are also agglomerated upwards, from the individual to the state, for which individuals are like mere cells making up its immortal body.

A critic would point out that this subsumption of the individual to the needs of the state is the hallmark of totalitarian communism and fascism, and thus perhaps flags longtermism as intrinsically suspicious, despite its adherents’ good intentions and professed objective of maximizing human thriving.

An interesting illustration of this point might be the notably lukewarm attitude of longtermism towards the idea of slowing human aging.  It’s odd, considering that Bostrom’s “Fable of the Dragon-Tyrant” is probably his most-read work (which other of his essays has a lovingly-animated video adaptation with millions of views?), and Yudkowsky’s stridently anti-death HPMOR and Sequences were a formative experience for many longtermists.  Within “Essays on Longtermism”, Kevin Kuruc and David Manley’s Chapter 24 does a good job recounting the standard rationalist case for why the badness of death is underrated, and pairs this with some informative calculations about the large economic benefits that increased healthy lifespans would bring.  Yet anti-aging often seems to be nowhere on the wider longtermist or EA list of priorities -- nothing like it is mentioned among 80,000 Hours’ problem profiles.  One gets the sense that many longtermists are privately enthusiastic about the development of advanced medical technology as one of the most prized “longtermist goods”, alongside various other, even more exotic proposals for human enhancement.  But people don’t talk about this publicly.

In his 1973 book The Denial of Death, anthropologist Ernest Becker claims that "human civilization is a defense mechanism against the knowledge of our mortality" and that people manage their "death anxiety" by pouring their efforts into an "immortal project" which "enables the individual to imagine at least some vestige of meaning continuing beyond their own lifespan".  In this secular modern age, when heroic cultural narratives and religious delusions no longer do the job, and when building literal giant pyramids in the desert for the glorification of the state has fallen out of style, what better immortal project than "longtermism" with which to harness individuals' energy?  What else could provide better relief from men’s death-anxiety than the promise of binding their mortal efforts to the sublime eternity of the far-distant galaxies?

Towards A More Human Longtermism?

Surely not everything can be totalitarian…

By now I have accused longtermism of being totalitarian in about twelve different ways.  But this is absurd; longtermism is one of the most intelligent and well-meaning movements out there, dear to my heart, repository of many of my hopes for a brighter future!  And, as noted earlier, some of these totalitarian vibes seem intrinsic to the notion of consequentialism, which in turn seems intrinsic to the notion of trying to do practically anything in life.  What is going on??

Pictured: ideologies attempting to figure out which one of them isn’t secretly a totalitarian menace

Longtermism is indeed helpfully and appropriately wary of "lock-in", hopeful for a future of human flourishing, yet our frameworks nudge us toward assuming fragility of value, taking the perspective of abstract social controllers, and performing atomized analysis that disregards human individuality.

I think one problem (not just for longtermism, but for all of society) is that our notions of things like “empowerment”, “liberty”, “democratic-ness”, “legitimacy”, and so forth are just too confused and contradictory to sort this all out, similar to how AI safety researchers often complain that our concepts of “alignment” or “agency” are confused and ill-defined.

Some Confused Hopes for a Flourishing Human Future

Taking inspiration from the analogous dilemma in AI, I see several possible approaches to dealing with this problem:

This is perhaps too fanciful a metaphor, but it is perhaps worth bearing in mind some sociological analogue to “embedded agency”.  That is, perhaps it would be wrong to think that a “strict longtermist state” could dispassionately shepherd its package of “values” (x-risk mitigation, etc) into the future in a way disconnected from its functioning in the present.  Many of society’s most important values are tied up in the design & functioning of the state itself; the workings of government and other institutions, of culture and the economy, are both an expression of a society’s values and a method of ensuring the stability of those values.  Not to sound too hippie, but rather than imagining a “strict longtermist state” seizing control of the future and trying to white-knuckle its way toward utopian “lock-in”, an “embedded agency” perspective might help us conceptualize the path to utopia more as a dynamic process of co-evolution and interaction between different societal forces.

Another potential reframing, especially when considering various proposed governance interventions (such as discussed in Chapters 26, 27, and 30), could be to move away from visions of political “control” over important outcomes (which seems almost self-defeating as a concept -- if the control is still in the realm of the political, it is still under contention, thus not fully under control…), and instead seek something more like “developing social technology for removing certain issues from the realm of the political”.  For example:

As stated at the beginning of this essay, I also think it would be worthwhile for longtermism to be particularly on guard about the potential downsides of a societal trend towards increasing stasis and safetyism.  Some longtermist interventions, including x-risk mitigation, are mostly “swimming with the tide” of increasing risk-aversion.  But we should be careful not to treat the reduced tractability of causes that “swim against the tide” as a knockdown argument against them -- that reduced tractability often comes packaged with increased neglectedness, and perhaps also importance.

As H. Orri Stefansson writes in Chapter 28, “Longtermism and Social Risk-Taking”, longtermism magnifies the value-of-information that can be gathered from policy experiments, and this consideration (other things equal) ought to increase the amount of policy experimentation that society undertakes.  I think this vision, of pursuing value-of-information through experimentation, competition, diversity, and dynamism in society, offers a helpful lens with which to counterbalance the default, centralizing view of the social “planner” referenced so often in the chapters of “Essays on Longtermism” (a view wherein something like a network of independent charter cities might be regarded with suspicion, almost as a potential “cancer” on the international community’s ability to coordinate).  A toy model of this value-of-information logic follows below.
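(All of the distributions and numbers in this sketch are my own assumptions, not Stefansson’s; the point it illustrates is simply that the payoff from experimenting to find better policies scales directly with the length of the time horizon one cares about.)

```python
import numpy as np

# Toy VOI model, with assumed numbers. Each candidate policy has an
# unknown per-year quality; experimentation reveals which is best.
# The information's value scales with the horizon over which the
# chosen policy stays in effect -- so longtermism magnifies it.
rng = np.random.default_rng(0)

def value_of_experimentation(n_policies, horizon_years, n_trials=100_000):
    qualities = rng.normal(size=(n_trials, n_policies))  # unknown true qualities
    status_quo = qualities[:, 0]         # without experiments: keep policy #0
    best_found = qualities.max(axis=1)   # with experiments: adopt the best
    return horizon_years * (best_found - status_quo).mean()

for horizon in (10, 1000):
    voi = value_of_experimentation(n_policies=5, horizon_years=horizon)
    print(f"horizon {horizon:>4} years: VOI of experimenting ≈ {voi:,.0f}")
```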

In addition to these broad cultural vibes, there are also likely a wide variety of direct, specific interventions against stable totalitarianism that longtermism could explore more thoroughly.  Attempts to limit the impact of AI-enabled propaganda, intelligence-gathering, and censorship all seem like fertile ground for promising interventions.  Forward-looking attempts to map out the offense/defense balance of various specific totalitarianism-enabling technologies (perhaps super-persuasion AI, advanced lie-detection technology, or MRI-based mind-reading tech), and then to establish norms and regulations or apply other d/acc strategies to them, could also be valuable.

 

  1. ^

    One could say that nonexistent people would like even a short existence, and this might balance out the preference of existing people for longer lifespans, but this view strikes me as a somewhat absurd perspective that few hold; most humans value future/potential generations but also recognize that actually-existing people have more of a sort of property-rights claim on existence than merely potential people.


SummaryBot @ 2025-10-21T21:12 (+2)

Executive summary: An exploratory, steelmanning critique argues that contemporary longtermism risks amplifying a broader cultural drift toward safetyism and centralized control, is skewed by a streetlight effect toward extinction-risk work, and—when paired with hedonic utilitarian framings—can devalue individual human agency; the author proposes a more empowerment-focused, experimentation-friendly, pluralistic longtermism that also treats stable totalitarianism and “flourishing futures” as first-class priorities.

Key points:

  1. Historical context & “cultural longtermism”: Longtermism is situated within a centuries-long rise in societal risk-aversion (post-WW2 liberalism, 1970s environmentalism/anti-nuclear). This tide brings real benefits but also stagnation risks that critics plausibly attribute to over-regulation and homogenizing global governance.
  2. Reconciling perceptions of power: Even if explicit longtermist budgets are small, the indirect, often unseen costs of safetyist policy—slower medical progress, blocked nuclear power, NIMBY housing constraints, tabooed research—create “invisible graveyards,” making a de facto “strict culturally-longtermist state” more feasible than analysts assume.
  3. Streetlight effect inside longtermism: Because extinction risks are unusually amenable to analysis and messaging, they crowd out harder-to-measure priorities—s-risks (e.g., stable totalitarianism), institutional quality, social technology, and positive-vision “flourishing futures”—potentially causing large path-dependent misallocations.
  4. Utilitarian framings and the individual: Widespread (often implicit) reliance on total hedonic utilitarianism dissolves the moral salience of unique persons into interchangeable “qualia-moments” while elevating the survival of civilization as a whole—fueling totalitarian vibes and explaining why deaths of individuals (e.g., aging) receive less emphasis than civilization-level x-risk.
  5. Risk of over-centralization: If longtermist x-risk agendas unintentionally bolster global regulation and control, they may increase the probability of totalitarian lock-in—the very kind of non-extinction catastrophe that longtermism underweights because it runs through messy socio-political channels.
  6. Toward a more humanistic longtermism: Prioritize empowerment, experimentation, and credible-neutral social technologies (e.g., prediction markets, algorithmic policy rules, liability schemes); invest in governance concepts that reduce politicization, expand policy VOI via pluralism (charter-city-like diversity), and explicitly target anti-totalitarian interventions (propaganda/censorship-resistance, offense-defense mapping for control-enabling tech).

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Jobst Heitzig (vodle.it) @ 2025-10-27T11:57 (+1)

I wonder how to correctly conceptualize the idea of "a net-negative influence on civilization" in view of the fact that the future is highly uncertain and that that uncertainty is a major motivating factor.

E.g., assume at some time point t1, a longtermist's proposed plan has higher expected longterm value than an alternative plan because the alternative plan takes a major risk.  The longtermist's plan is realized, and at some later time point t2 someone points out that the alternative plan would have produced more value between t1 and t2 (tacitly assuming the risk would not have materialized between t1 and t2 -- when the realized longtermist plan may be precisely what avoided it).

Would that constitute an example of what these critics would call a "net-negative influence on civilization"? If so, it's just a fallacy. If not, then what comparison exactly is meant?

More generally: how does one plausibly construct a "counterfactual" world in view of large uncertainties?  It seems the only valid comparison would be not between the one realization that actually emerged from a certain behavior and one (potentially overly optimistic) realization that might have emerged from an alternative behavior, but between whole ensembles of realizations.  The same goes for the effects of drug regulation, workplace laws, historic technology bans, etc.
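(To make the ensemble comparison concrete, here is a toy Monte Carlo sketch; every probability and payoff in it is an arbitrary illustrative assumption.  A safe plan can have the higher expected value across the whole ensemble of possible worlds, even though in most individual realized worlds the risky alternative looks better ex post -- which is exactly the t2-critic's fallacy described above.)

```python
import numpy as np

# Toy Monte Carlo of the ensemble comparison described above.
# All probabilities and payoffs are arbitrary illustrative assumptions.
rng = np.random.default_rng(0)
n_worlds = 1_000_000

plan_a = np.full(n_worlds, 100.0)               # safe longtermist plan
catastrophe = rng.random(n_worlds) < 0.05       # 5% chance of disaster under plan B
plan_b = np.where(catastrophe, -1000.0, 120.0)  # risky alternative plan

print(f"ensemble E[value | plan A]: {plan_a.mean():.1f}")   # 100.0
print(f"ensemble E[value | plan B]: {plan_b.mean():.1f}")   # ~64

# Yet in ~95% of individual worlds, plan B looks better ex post --
# so a critic at time t2, comparing single realized trajectories,
# will usually (and wrongly) conclude plan A was the worse choice.
print(f"worlds where B beats A ex post: {(plan_b > plan_a).mean():.0%}")
```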