Epistemics (Part 10: The nontrivial probability gambit) | Reflective Altruism

By Unofficial Reflective Altruism Cross-Poster @ 2025-12-26T18:29 (+34)

This is a linkpost to https://reflectivealtruism.com/2025/12/26/epistemics-part-10-the-nontrivial-probability-gambit/


Epistemics (Part 10: The nontrivial probability gambit)

Even just a 1% chance of extremely high stakes is sufficient to establish high stakes in expectation. So we should not feel assured of low stakes even if a highly credible model—warranting 99% credence—entails low stakes. It hardly matters at all how many credible models entail low stakes. What matters is whether any credible model entails extremely high stakes. If one does—while warranting just 1% credence—then we have established high stakes in expectation, no matter what the remaining 99% of credibility-weighted models imply (unless one inverts the high stakes in a way that cancels out the other high-stakes possibility).

Richard Yetter-Chappell, “Rule high stakes in, not out”


1. Introduction

This is Part 10 in my series on epistemics: practices that shape knowledge, belief and opinion within a community. In this series, I focus on areas where community epistemics could be productively improved.

Part 1 introduced the series and briefly discussed the role of funding, publication practices, expertise and deference within the effective altruist ecosystem.

Part 2 discussed the role of examples within discourse by effective altruists, focusing on the cases of Aum Shinrikyo and the Biological Weapons Convention.

Part 3 looked at the role of peer review within the effective altruism movement.

Part 4 looked at the declining role of cost-effectiveness analysis within the effective altruism movement. Part 5 continued that discussion by explaining the value of cost-effectiveness analysis.

Part 6 looked at instances of extraordinary claims being made on the basis of less than extraordinary evidence.

Part 7 looked at the role of legitimate authority within the effective altruism movement.

Part 8 looked at two types of decoupling.

Part 9 looked at ironically authentic speech.

Today’s post looks at the nontrivial probability gambit, a strategy for responding to criticism of strong views about the shape of the future.

2. The nontrivial probability gambit

One of the themes of my work has been that the case for longtermism rests on a number of highly nontrivial claims about the long-term future. These include the time of perils hypothesis and claims that threats in areas such as artificial intelligence and biosecurity pose a significant near-term existential risk that can be tractably reduced.

In each case, I have argued that:

  1. (Antecedent Implausibility) The claim in question is not very antecedently plausible.
  2. (Insufficient Evidence) Insufficient evidence has been offered to support the claim in question.

The upshot of Antecedent Implausibility is that we should assign low prior credence to the questioned claims. The upshot of Insufficient Evidence is that we should not be significantly moved from this prior by existing arguments.

What I would like to see is extended and rigorous argument for the questioned claims. Those arguments, if successful, would target Insufficient Evidence, and related arguments could perhaps be made against Antecedent Implausibility.

Sometimes this is done, but often longtermists try another tack. The nontrivial probability gambit does not (directly) contest Antecedent Implausibility or Insufficient Evidence. Rather, it holds that the questioned claims should be assigned nontrivial probability, and that assigning them nontrivial probability is sufficient to vindicate the case for longtermism.

For a few recent examples, here is Richard Yetter-Chappell:

[Thorstad] calls the arguments for the time of perils hypothesis “inconclusive”. But either way, the time of perils hypothesis can (and should) rationally shape our expected value judgments without needing to be conclusively established or even probable. Warranting some non-negligible credence would suffice. Because, again, even just a 1% chance of extremely high stakes establishes high stakes in expectation … To rule out high stakes, you need to establish that the most longtermist-friendly scenario or model is not just unlikely, but vanishingly so.

And here is the blogger Bentham’s Bulldog:

The expected value of existential risk reduction is—if not infinite, which I think it clearly is in expectation—extremely massive. If you think the Bostrom number of 10^52 happy people has a .01% chance of being right, then you’ll get 10^48 expected future people if we don’t go extinct, meaning reducing odds of existential risks by 1/10^20 creates 10^28 extra lives. So even if we think the Thorstad math means that getting the odds of going extinct this century down 1% matters 100 times less, it still easily swamps short-term interventions in expectation.
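The arithmetic in this passage can be checked mechanically. Here is a minimal sketch in Python using only the figures quoted above (10^52 future people, a 0.01% chance of that figure being right, and a 1/10^20 risk reduction); the variable names are introduced purely for illustration.

```python
# Check of the arithmetic quoted above; every figure comes from the passage itself.
bostrom_number = 10**52    # "the Bostrom number of 10^52 happy people"
p_right = 10**-4           # "a .01% chance of being right"
risk_reduction = 10**-20   # "reducing odds of existential risks by 1/10^20"

expected_future_people = bostrom_number * p_right        # 10^48, as quoted
lives_created = expected_future_people * risk_reduction  # 10^28, as quoted

print(f"{expected_future_people:.0e}")  # 1e+48
print(f"{lives_created:.0e}")           # 1e+28
```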

In each case, nontrivial probability is assigned to very high values for existential risk mitigation. Importantly, this is not done on the basis of substantial new argument for the challenged claims. For example, here is Bentham’s Bulldog on the case for assigning nontrivial probability to very large future populations:

There is some chance that the far future could contain stupidly large numbers of people. For instance, maybe we come up with some system that produces exponential growth with respect to happy minds relative to resources input. So, as you increase the amount of energy by some constant amount, you double the number of minds. I wouldn’t bet on such a scenario, but it’s not impossible. And if the odds are 1 in a trillion of such a scenario, then this clearly gets expected value much higher than the 10^52 number. Such a scenario potentially opens up numbers of happy minds like 2^1 quadrillion. There’s also some chance we’ll discover ways to bring about infinite happy minds—if the odds of this are non-zero, the expected number of future happy minds is infinity.
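Because 2^(1 quadrillion) is far too large for ordinary floating point, the quoted comparison is easiest to check in logarithms. Here is a minimal sketch using only the figures in the passage above (2^(10^15) minds at odds of 1 in a trillion); it confirms that, on those figures, the expected number dwarfs 10^52.

```python
import math

# Work in base-10 logarithms, since 2**(10**15) cannot be stored as a float.
log10_minds = (10**15) * math.log10(2)   # log10 of 2^(1 quadrillion), roughly 3.01e14
log10_odds = -12                         # "1 in a trillion"
log10_expected_minds = log10_minds + log10_odds

# The exponent itself is about 3.01e14, vastly larger than the 52 in 10^52.
print(f"expected minds ~ 10^({log10_expected_minds:.3g})")
```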

And here is Yetter-Chappell on the time of perils:

We only need one credible model entailing extremely high stakes in order to establish high stakes in expectation. And “credible” here does not even require high credence … The time of perils hypothesis can (and should) rationally shape our expected value judgments without needing to be conclusively established or even probable. Warranting some non-negligible credence would suffice. Because, again, even just a 1% chance of extremely high stakes establishes high stakes in expectation.

What makes both arguments instances of the nontrivial probability gambit is that they do not provide significant new evidence for the challenged claims. Their primary argumentative move is to assign nontrivial probabilities without substantial new evidence.

I don’t think this is a good way to argue. I think that nontrivial probability assignments to strong and antecedently implausible claims should be supported by extensive argument rather than manufactured probabilities. Let me say a bit about why this is so.

3. The naming game

One of the best-known facts about high-stakes, low-probability claims is that we can almost always name more of them. Call this the naming game.

Perhaps the most familiar example of the naming game is the many gods objection to Pascal’s Wager. Pascal’s Wager says that you should assign nonzero probability to the existence of a God who will infinitely reward you for your faith. On this basis, it is argued, you should believe (or get yourself to believe) that God exists.

The many gods objection notes that we might equally well name hypotheses on which you will be infinitely punished for your faith. Perhaps there are two possible gods, but only one exists. Each will damn you for eternity if you believe in the other. Or perhaps there is only one god, but they find it amusing to send believers to hell and sinners to heaven. Or perhaps God punishes believers who aren’t named Carol (or maybe it was Darryl?). And, the objection goes, you should assign some nontrivial probability to each of these claims.

What is the right way to respond to the many gods objection? Not, I take it, by seeing who can name more or stronger low-probability claims to support their favored conclusion and then tallying up the claims named by each party. That is a never-ending game of objection-naming. The right way to respond to the many gods objection, if such a response exists, will probably have something to do with the relative likelihoods of each claim. (Matters are more complicated if the claimed values are genuinely infinite, but let us leave those complications aside for now).

The point is that we can play the naming game for any number of hypotheses, such as the time of perils hypothesis. Consider, for example, the time of carols hypothesis on which everyone in the future will be tied up and forced to listen to endless Christmas carols. Or consider the time of Carol hypothesis on which a dictator named Carol will torture all living beings for a very long time.

The right response to the time of carols hypothesis, or the time of Carol hypothesis, would not be to name competing hypotheses about benevolent Darryls or barrels of Christmas cheer. The right response would be to argue that both claims are implausible (and indeed they are).

The point raised by the naming game is that there is no way to escape substantive argument about the comparative plausibility of competing claims about how the future might go. Just because a claim would, if true, make the future very good or very bad is not yet a reason to think in expectation that the future will be very good or very bad.

Once we move beyond the nontrivial probability gambit to engage in substantive argument, it is not obvious that claims such as the time of perils hypothesis will carry the day.

4. Very low probabilities are ubiquitous

Longtermists correctly note that the value of future scenarios can be very high. While there are on the order of 10^10 humans alive today, there could be 10^30, 10^40 or 10^50 future people. These are very large numbers, and their size matters.

What longtermists do not always note is that the probabilities of future scenarios can be very low. Often the nontrivial probability gambit invites us to assign quite substantial probabilities to very strong claims. For example, Chappell writes that:

Even just a 1% chance of extremely high stakes is sufficient to establish high stakes in expectation.

But just as the value of future scenarios can be extremely high, their probabilities can be very low. I wouldn’t assign a 1% chance to the time of carols hypothesis. I probably wouldn’t bat an eye at assigning a probability beneath 10^(-100) to it. This is because the time of carols hypothesis is antecedently implausible, and nobody has ever offered enough evidence to substantially raise its probability.

Consider, now, a claim like the time of perils hypothesis. Some versions of this claim may be relatively more plausible. But the versions of the time of perils hypothesis underlying very high value estimates often make claims like the following.

First, levels of existential risk are right now startlingly high, for example 10-20% in this century.

Second, in a few short centuries, levels of existential risk will drop quickly and dramatically.

Third, this drop will be perhaps 4-5 orders of magnitude in levels of per-century risk.

Fourth, levels of existential risk will remain low (with no exceptions) for a very long time, such as a million or a billion years.

This, in turn, is coupled with ambitious hypotheses about the level of population and welfare growth possible within a time of perils scenario, and with comparatively low probability assignments to bad outcomes.

It is not at all obvious that we should assign a probability in the neighborhood of 1% to the conjunction of these claims. Nor is it obvious that this probability should be in the neighborhood of 10^-5 or 10^-15.

The reason for this is that low probabilities are ubiquitous. If we look at the conceptual space of strong claims about the long-term future, there are countless competing claims that could be made. Most, like the time of carols hypothesis, must be assigned very low probabilities as a matter of mathematical necessity, since competing hypotheses cannot be true together.

If we are going to assign nontrivial probabilities to strong claims, or especially to the conjunction of many strong claims, we need to make an argument for this probability assignment. The default probability assignment to such claims is not nontrivial. It is trivial.
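As a purely illustrative sketch of how conjunction drives probabilities down, consider placeholder credences for each component of a strong time of perils package. None of the numbers below are estimates defended in this post; they only show how quickly multiplication erodes a probability, and they assume independence between the conjuncts, which is itself a simplification.

```python
# Placeholder credences for the conjuncts of a strong time of perils package.
# These are not estimates defended anywhere in this post; they only illustrate
# how conjunction erodes probability (independence is assumed for simplicity).
placeholder_credences = {
    "existential risk this century is 10-20%":     0.3,
    "risk then drops quickly and dramatically":    0.2,
    "the drop is 4-5 orders of magnitude":         0.1,
    "risk stays low for a million or more years":  0.05,
    "ambitious population and welfare growth":     0.1,
}

conjunction = 1.0
for claim, credence in placeholder_credences.items():
    conjunction *= credence

print(f"conjunction probability ~ {conjunction:.0e}")  # ~3e-05 with these placeholders
```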

5. Shedding zeroes

My book manuscript Beyond longtermism pursues what I call the shedding zeroes strategy. This strategy begins with the longtermist claim that the best longtermist interventions are many orders of magnitude better than competing interventions. It then develops an overlapping series of challenges, each of which aims to shed orders of magnitude from the value of longtermist options.

The point of the shedding zeroes strategy is this. Very few positions admit of one-shot refutations. Generally, there are many strengths and weaknesses of a view. But even if one individual challenge to longtermism is not enough to scuttle the view, many such challenges strung together might well do so.

Longtermists do not just use the nontrivial probability gambit to defend a single claim. They use it many times, for example in response to decision-theoretic uncertainty (over fanaticism or risk-aversion) and in response to moral uncertainty (over competing nonconsequentialist duties).

For example, just three days after invoking the nontrivial probability gambit in response to my work on the time of perils hypothesis, high population estimates, and other quantities (which already involves several invocations of the gambit), the blogger Bentham’s Bulldog considers the case for fanaticism. In a post entitled “Fanaticism dominates given moral uncertainty,” he again pulls the nontrivial probability gambit.

Under uncertainty, fanatical considerations dominate. If you’re not sure if fanaticism is right, you should mostly behave as a fanatic.

The nontrivial probability gambit is not an infinitely repeatable get-out-of-jail-free card. It is a very expensive card to play. Playing it repeatedly rapidly drives down the value of longtermist interventions. Orders of magnitude are precious things, and even the most optimistic longtermist value estimates have only so many orders of magnitude to shed.

Can longtermists play the nontrivial probability gambit once? Perhaps. It depends on the numbers. Can they play it a dozen times? That is unlikely. Twice in a week? That’s how you go bankrupt.
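Here, as a rough illustration only, is what repeated plays do to a headline estimate. Both numbers below (the starting value and the probability conceded per play) are placeholders, not figures drawn from any longtermist model.

```python
import math

# Placeholder illustration of shedding zeroes: each play of the gambit multiplies
# a headline value estimate by a nontrivial but sub-unity probability, shedding
# orders of magnitude as it goes. Both starting numbers are invented.
headline_value = 1e30        # hypothetical headline value estimate
probability_per_play = 0.01  # hypothetical probability conceded per play of the gambit

value = headline_value
for plays in range(1, 13):
    value *= probability_per_play
    print(f"after {plays:2d} plays: about 10^{math.log10(value):.0f}")
# A dozen plays at 1% each shed 24 orders of magnitude from the headline estimate.
```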

6. Beyond expected value: Fanaticism and stakes-sensitivity

The costs of the nontrivial probability gambit can be heightened if we move beyond expected value theory.

For example, one thing that effective altruists often note is that even if fanaticism is false, so that sufficiently small probabilities should be discounted, the discounting threshold might be very low. While the Marquis de Condorcet recommended discounting probabilities around 10^-5 and Borel recommended discounting at 10^-6, a recent defense of probability discounting by Bradley Monton adopts a threshold of 5 * 10^-16.

As longtermists rightly note, longtermist interventions may well have a probability of success substantially above 5 * 10^-16. That is particularly true if the interventions are assessed collectively, rather than each individual donation being assessed for its chance of preventing existential catastrophe.

As a result, the bare invocation of anti-fanaticism may not be enough to scuttle longtermism. Here, for example, are Hilary Greaves and Will MacAskill:

The probabilities involved in the argument for longtermism might not be sufficiently extreme for any plausible degree of resistance to ‘fanaticism’ to overturn the verdicts of an expected-value approach, at least at the societal level. For example, it would not seem ‘fanatical’ to take action to reduce a 1 in 1 million risk of dying, as one incurs from cycling 35 miles or driving 500 miles (respectively, by wearing a helmet or wearing a seat belt (Department of Transport 2020)). But it seems that society can positively affect the very long-term future with probabilities well above this threshold. For instance … we suggested a lower bound of 1 in 100,000 on a plausible credence that $1 billion of carefully targeted spending would avert an existential catastrophe from artificial intelligence.

This reply may be more plausible when the only source of uncertainty is ordinary empirical uncertainty. But when ordinary empirical uncertainty is coupled with many quite radical empirical claims (such as the time of perils hypothesis, and high levels of near-term existential risk) and also with uncertain philosophical claims (such as the correct decision theory or deontic theory), the probabilities of many of the longtermist’s best-case scenarios can easily dip below even a threshold as permissive as 5 * 10^-16. As such, pulling the nontrivial probability gambit many times makes it harder to square anti-fanaticism with longtermism.
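To see how the compounding works, here is a minimal sketch. The 1 in 100,000 figure is the lower bound quoted above; every other credence is a placeholder invented for illustration rather than an estimate defended here. The point is only that a handful of contested empirical and philosophical claims, multiplied together, can push the relevant probability below a threshold like 5 * 10^-16.

```python
# Placeholder illustration of stacked uncertainty crossing a discounting threshold.
# Only the 1e-5 figure comes from the quoted passage; every other credence is
# invented for illustration (independence is assumed for simplicity).
discounting_threshold = 5e-16        # Monton-style threshold discussed above

p_success_given_background = 1e-5    # quoted lower bound, conditional on the background picture
extra_credences = {
    "strong time of perils package holds":   1e-6,
    "near-term risk really is that high":    1e-2,
    "fanaticism-friendly decision theory":   1e-2,
    "beneficence-first deontic theory":      1e-1,
}

p = p_success_given_background
for claim, credence in extra_credences.items():
    p *= credence

print(f"compound probability ~ {p:.0e}")                  # ~1e-16 with these placeholders
print("below the threshold?", p < discounting_threshold)  # True with these placeholders
```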

A similar point occurs in response to deontic objections, which cite competing duties beyond duties of beneficence towards future people. Again, following Greaves and MacAskill, a standard strategy is to make the stakes-sensitivity argument:

(P1) When the axiological stakes are very high, there are no serious side-constraints, and the personal prerogatives are comparatively minor, one ought to choose a near-best option.

(P2) In the most important decision situations facing agents today, the axiological stakes are very high, there are no serious side-constraints, and the personal prerogatives are comparatively minor.

(C) So, in the most important decision situations facing agents today, one ought to choose a near-best option.

On the standard ex ante reading of the stakes-sensitivity argument, (P1) relies on the claim that the expected value of longtermist interventions is not merely better, but much, much better than that of competing interventions.

This way of arguing for longtermism allows fewer uses of the nontrivial probability gambit, because we need to show not just that longtermist interventions continue to be better than competitors given uncertainty, but that they continue to be much better. Again, danger lurks.

The lesson of both examples is that the nontrivial probability gambit admits of even fewer uses when it is deployed to address competing normative views.

7. Evidence

This post explored the nontrivial probability gambit. I have argued that many claims, such as the time of perils hypothesis, satisfy:

  1. (Antecedent Implausibility) The claim in question is not very antecedently plausible.
  2. (Insufficient Evidence) Insufficient evidence has been offered to support the claim in question.

In defense, some longtermists pull the nontrivial probability gambit. They do not question Antecedent Implausibility or Insufficient Evidence, but rather argue that any nontrivial probability assignment to the hypotheses in question is enough to vindicate longtermism.

We saw that the nontrivial probability gambit faces challenges.

We saw in Section 3 that some degree of evidence is necessary, or else we are merely playing the naming game: naming scenarios in which a proposed action would be very good, or very bad.

We saw in Section 4 that low probabilities are ubiquitous. It is not at all surprising to assign very low probabilities to strong, implausible and insufficiently evidenced claims about the long-term future. Most claims of this form must, as a matter of mathematical necessity, be given very low probabilities.

We saw in Section 5 that even if the nontrivial probability gambit works once, it cannot be repeated many times without great cost. And we saw in Section 6 that the number of permissible repetitions drops further once competing normative views, such as anti-fanaticism and deontic constraints, are taken into account.

What, then, would I have longtermists do in place of the nontrivial probability gambit? The answer is simple. I would like to see more and better direct arguments for the challenged claims, arguments on the basis of which it would be appropriate to assign them nontrivial probabilities.


Note: This essay is cross-posted from the blog Reflective Altruism written by David Thorstad. It was originally published there on December 26, 2025. The account making this post has no affiliation with Reflective Altruism or David Thorstad. You can leave a comment on this post on Reflective Altruism here. You can read the rest of the post series here.


Richard Y Chappell🔸 @ 2025-12-29T17:25 (+11)

On (what I take to be) the key substantive claim of the post:

I think that nontrivial probability assignments to strong and antecedently implausible claims should be supported by extensive argument rather than manufactured probabilities.

There seems room for people to disagree on priors about which claims are "strong and antecedently implausible". For example, I think Carl Shulman offers a reasonably plausible case for existential stability if we survive the next few centuries. By contrast, I find a lot of David's apparent assumptions about which propositions warrant negligible credence to be extremely strong and antecedently implausible. As I wrote in x-risk agnosticism:

David Thorstad seems to assume that interstellar colonization could not possibly happen within the next two millennia. This strikes me as a massive failure to properly account for model uncertainty. I can’t imagine being so confident about our technological limitations even a few centuries from now, let alone millennia. He also holds the suggestion that superintelligent AI might radically improve safety to be “gag-inducingly counterintuitive”, which again just seems a failure of imagination. You don’t have to find it the most likely possibility in order to appreciate the possibility as worth including in your range of models.

I think it's important to recognize that reasonable people can disagree about what they find antecedently plausible or implausible, and to what extent. (Also: some events—like your home burning down in a fire—may be "implausible" in the sense that you don't regard them as outright likely to happen, while still regarding them as sufficiently probable as to be worth insuring against.)

Such disagreements may be hard to resolve. One can't simply assume that one's own priors are objectively justified by default whereas one's interlocutor is necessarily unjustified by default until "supported by extensive argument". That's just stacking the deck.

I think a healthier dialectical approach involves stepping back to more neutral ground, and recognizing that if you want to persuade someone who disagrees with you, you will need to offer them some argument to change their mind. Of course, it's fine to just report one's difference in view. But insisting, "You must agree with my priors unless you can provide extensive argument to support a different view, otherwise I'll accuse you of bad epistemics!" is not really a reasonable dialectical stance.

If the suggestion is instead that one shouldn't attempt to assign probabilities at all then I think this gets into the problems I explore in Good Judgment with Numbers and (especially) Refusing to Quantify is Refusing to Think, that it effectively implies giving zero weight. But we can often be in a position to know that a non-zero (and indeed non-trivially positive) estimate is better than zero, even if we can't be highly confident of precisely what the ideal estimate would be.

Richard Y Chappell🔸 @ 2025-12-26T19:43 (+9)

This sort of "many gods"-style response is precisely what I was referring to with my parenthetical: "unless one inverts the high stakes in a way that cancels out the other high-stakes possibility."

I don't think that dystopian "time of carols" scenarios are remotely as credible as the time of perils hypothesis. If someone disagrees, then certainly resolving that substantive disagreement would be important for making dialectical progress on the question of whether x-risk mitigation is worthwhile or not.

What makes both arguments instances of the nontrivial probability gambit is that they do not provide significant new evidence for the challenged claims. Their primary argumentative move is to assign nontrivial probabilities without substantial new evidence.

I don’t think this is a good way to argue. I think that nontrivial probability assignments to strong and antecedently implausible claims should be supported by extensive argument rather than manufactured probabilities.

I'd encourage Thorstad to read my post more carefully and pay attention to what I am arguing there. I was making an in principle point about how expected value works, highlighting a logical fallacy in Thorstad's published work on this topic. (Nothing in the paper I responded to seemed to acknowledge that a 1% chance of the time of perils would suffice to support longtermism. He wrote about the hypothesis being "inconclusive" as if that sufficed to rule it out, and I think it's important to recognize that this is bad reasoning on his part.)

Saying that my "primary argumentative move is to assign nontrivial probabilities without substantial new evidence" is poor reading comprehension on Thorstad's part. Actually, my primary argumentative move was explaining how expected value works. The numbers are illustrative, and suffice for anyone who happens to share my priors (or something close enough). Obviously, I'm not in that post trying to persuade someone who instead thinks the correct probability to assign is negligible. Thorstad is just radically misreading what my post is arguing.

(What makes this especially strange is that, iirc, the published paper of Thorstad's to which I was replying did not itself argue that the correct probability to assign to the ToP hypothesis is negligible, but just that the case for the hypothesis is "inconclusive". So it sounds like he's now accusing me of poor epistemics because I failed to respond to a different paper than the one he actually wrote? Geez.)

David T @ 2025-12-28T13:14 (+25)

Seems like you and the other David T are talking past each other tbh.

Above you reasonably argue the [facetious] "time of carols" hypothesis is not remotely as credible as the time of perils hypothesis. But you also don't assign a specific credence to it, or provide an argument that the "time of carols" is impossible or even <1%[1]

I don't think it would be fair to conclude from this that you don't understand how probability works, and I also don't think that it is reasonable to assume that the probability of the 'time of carols' is sufficiently nontrivial to warrant action in the absence of any specific credence attached to it. Indeed, if someone responded to you indirectly with an example which assigned a prior of "just 1%" to the "time of carols", you might feel justified in assuming it was them misunderstanding probability...

The rest of Thorstad's post, which doesn't seem to be specifically targeted at you, explicitly argues that in practice, specific claims involving navigating a 'time of perils' also fall into the "trivial" category,[2] in the absence of robust argument as to why of all the possible futures this one is less trivial than others. He's not arguing for "many gods" which invert the stakes so much as "many gods/pantheons means the possibility of any specific god is trivial, in the absence of compelling evidence of said god's relative likelihood". He also doesn't bring any evidence to the table (other than arguing that the time of perils hypothesis involves claims about x-risk in different centuries which might be best understood as independent claims [3]) but his position is that this shouldn't be the sceptic's job...

(Personally I'm not sure what said evidence would even look like, but for related reasons I'm not writing papers on longtermism and am happy applying a very high discount rate to the far future)    

  1. ^

    I think everyone would agree that it is absurd (that's a problem with facetious examples)[4] but if the default is that logical possibilities are considered nontrivial until proven otherwise...

  2. ^

    he doesn't state a personal threshold, but does imply many longtermist propositions dip below Monton's 5 * 10^-16 once you start adding up the claims....

  3. ^

    a more significant claim he fails to emphasize is that the relevant criterion for longtermist interventions isn't so much that the baseline hypothesis about peril distribution is [incidentally] true but that the impact of a specific intervention at the margin has a sustained positive influence on it.

  4. ^

    I tend to dislike facetious examples, but hey, this is a literature in which people talk about paperclip maximisers and try to understand AI moral reasoning capacity by asking LLMs variations on trolley problems...

David Mathers🔸 @ 2025-12-28T19:03 (+10)

I am far from sure that Thorstad is wrong that time of perils should be assigned ultra-low probability. (I do suspect he is wrong, but this stuff is extremely hard to assess.) But in my view there are multiple pretty obvious reasons why "time of Carols" is a poor analogy to "time of perils":

  1. "Time of carols" is just way more specific, in a bad way than time of perils. I know that there are indefinitely many ways time of carols could happen if you get really fine-grained, but it nonetheless, intuitively, there is in some sense way more significantly different paths "X-risk could briefly be high then very low" than "everyone is physically tied up and made to listen to carols". To me it's like comparing "there will be cars on Mars in 2120" to "there will be a humanoid crate-stacking robot on Mars in 2120 that  is nicknamed Carol".
  2. Actually, longtermists argue for the "current X-risk is high" claim, making Thorstad's point that lots of things should get ultra-low prior probability is not particularly relevant to that half of the time of perils hypothesis. In comparison, no one argues for time of carols. 
  3. (Most important disanalogy in my view.) The second half of time of perils, that x-risk will go very low for a long-time, is plausibly something that many people will consider desirable, and might therefore aim for. People are even more likely to aim for related goals like "not have massive disasters while I am alive." This is plausibly a pretty stable feature of human motivation that has a fair chance of lasting millions of years; humans generally don't want humans to die. In comparison there's little reason to think decent numbers of people will always desire time of carols.

  4. Maybe this isn't an independent point from 1., but I actually do think it is relevant that "time of carols" just seems very silly to everyone as soon as they hear it, and time of perils does not. I think we should give some weight to people's gut reactions here.

Richard Y Chappell🔸 @ 2025-12-28T17:44 (+4)

The meta-dispute here isn't the most important thing in the world, but for clarity's sake, I think it's worth distinguishing the following questions:

  1. Does a specific text—Thorstad (2022)—either actually or apparently commit a kind of "best model fallacy", arguing as though establishing Time of Perils hypothesis as unlikely to be true thereby suffices to undermine longtermism?
  2. Does another specific text—my 'Rule High Stakes In, Not Out'—either actually or apparently have as its "primary argumentative move... to assign nontrivial probabilities without substantial new evidence"?

My linked post suggests that the answer to Q1 is "Yes". I find it weird that others in the comments here are taking stands on this textual dispute a priori, rather than by engaging with the specifics of the text in question, the quotes I respond to, etc.

My primary complaint in this comment thread has simply been that the answer to Q2 is "No" (if you read my post, you'll see that it's instead warning against what I'm now calling the "best model fallacy", and explaining how I think various other writings—including Thorstad's—seem to go awry as a result of not attending to this subtle point about model uncertainty). The point of my post is not to try to assert or argue for any particular probability assignment. Hence Thorstad's current blog post misrepresents mine.

***

There's a more substantial issue in the background:

Q3. What is the most reasonable prior probability estimate to assign to the time of perils hypothesis? In case of disagreement, does one party bear a special "burden of proof" to convince the other, who should otherwise be regarded as better justified by default?

I have some general opinions about the probability being non-negligible—I think Carl Shulman makes a good case here—but it's not something I'm trying to argue about with those who regard it as negligible. I don't feel like I have anything distinctive to contribute on that question at this time, and prefer to focus my arguments on more tractable points (like the point I was making about the best model fallacy). I independently think Thorstad is wrong about how the burden of proof applies, but that's an argument for another day.

So I agree that there is some "talking past" happening here. Specifically, Thorstad seems to have read my post as addressing a different question (and advancing a different argument) than what it actually does, and made unwarranted epistemic charges on that basis. If anyone thinks my 'Rule High Stakes In' post similarly misrepresents Thorstad (2022), they're welcome to make the case in the comments to that post.

David Thorstad @ 2025-12-29T15:23 (+8)

Thanks Richard!


Writing is done for an audience. Effective altruists have a very particular practice of stating their personal credences in the hypotheses that they discuss. While this is not my practice, in writing for effective altruists I try to be as precise as I can about the relative plausibility that I assign to various hypotheses and the effect that this might have on their expected value.

When writing for academic audiences, I do not discuss uncertainty unless I have something to add which my audience will find to be novel and adequately supported.  

I don’t remind academic readers that uncertainty matters, because all of them know that on many moral theories uncertainty matters and many (but not all) accept such theories. I don’t remind academic readers of how uncertainty matters on some popular approaches, such as expected value theory, because all of my readers know this and many (but fewer) accept such theories. The most likely result of invoking expected value theory would be to provoke protests that I am situating my argument within a framework which some of my readers do not accept, and that would be a distraction. 

I don’t state my personal probability assignments to claims such as the time of perils hypothesis because I don’t take myself to have given adequate grounds for a probability assignment. Readers would rightly object that my subjective probability assignments had not been adequately supported by the arguments in the paper, and I would be forced to remove them by referees, if the paper were not rejected out of hand.

For the same reason, I don’t use language forcing my personal probability assignments on readers. There are always more arguments to consider, and readers differ quite dramatically in their priors. For that reason, concluding a paper with the claim that something like the time of perils hypothesis has a probability on the order of 10^(-100) or 10^(-200) would, again, rightly provoke the objection that this claim has not been adequately supported.

When I write, for example, that arguments for the time of perils hypothesis are inconclusive, my intention is to allow readers to make up their own minds as to precisely how poorly those arguments fare and what the resulting probability assignments should be. Academic readers very much dislike being told what to think, and they don’t care a whit for what I think.

As a data point, almost all of my readers are substantially less confident in many of the claims that I criticize than I am. The most common reason why my papers criticizing effective altruism are rejected from leading journals is that referees or editors take the positions criticized to be so poor that they do not warrant comment. (For example, my paper on power-seeking theorems was rejected from BJPS by an editor who wrote, “The arguments critically evaluated in the paper are just all over the place, verging from silly napkin-math, to speculative metaphysics, to formal explorations of reinforcement learning agents. A small minority of that could be considered philosophy of computer science, but the rest of it, in my view, is computer scientists verging into bad philosophy of mind and futurism … The targets of this criticism definitely want to pretend they're doing science; I worry that publishing a critical takedown of these arguments could lend legitimacy to that appearance.”)

Against this background, there is not much pressure to remind readers that the positions in question could be highly improbable. Most think this already, and the only thing I am likely to do is to provoke quick rejections like the above, or to annoy the inevitable referee (an outlier among my readers) selected for their sympathies with the position being criticized.

To tell the truth, I often try to be even more noncommittal in the language of my papers than the published version would suggest. For example, the submitted draft of “Mistakes in the moral mathematics of existential risk” said in the introduction that “under many assumptions, once these mistakes are corrected, the value of existential risk mitigation will be far from astronomical.” A referee complained that this was not strong enough, because (on their view) the only assumptions worth considering were those on which the value of existential risk mitigation is rendered extremely minimal. So I changed the wording to “Under many assumptions, once these mistakes are corrected, short-termist interventions will be more valuable than long-termist interventions, even within models proposed by leading effective altruists.” Why did I discuss these assumptions, instead of a broader class of assumptions under which the value of existential risk mitigation is merely non-astronomical? Because that’s what my audience wanted to talk about. 

In general, I would encourage you to focus in your writing on the substantive descriptive and normative issues that divide you from your opponents. Anyone worth talking to understands how uncertainty works. The most interesting divisions are not elementary mistakes about multiplication, but substantive questions about probabilities, utilities, decision theories, and the like. You will make more significant contributions to the state of the discussion if you focus on identifying the most important claims that in fact divide you from your opponents and on giving extended arguments for those claims.   

To invent and claim to resolve disagreements based on elementary fallacies is likely to have the effect of pushing away the few philosophers still genuinely willing to have substantive normative and descriptive conversations with effective altruists. We are not enthusiastic about trivialities. 

David Mathers🔸 @ 2025-12-29T16:15 (+6)

To be fair to Richard, there is a difference between a) stating your own personal probability in time of perils and b) making clear that for long-termist arguments to fail solely because they rely on time of perils, you need it to have  extremely low probability, not just low, at least if you accept the expected value theory and subjective probability estimates can legitimately be applied at all here, as you seemed to be doing for the sake of making an internal critique. I took it to be the latter that Richard was complaining your paper doesn't do. 

How strong do you think your evidence is that most readers of philosophy papers think the claim that "X-risk is currently high, but will go permanently very low" is extremely implausible? If you asked me to guess I'd say most people's reaction would be more like "I've no idea how plausible this is, other than definitely quite unlikely", which is very different, but I have no experience with reviewers here.

I am a bit (though not necessarily entirely) skeptical of the "everyone really knows EA work outside development and animal welfare is trash" vibe of your post. I don't doubt a lot of people do think that in professional philosophy. But at the same time, Nick Bostrom is more highly cited than virtually any reviewer you will have encountered. Long-termist moral philosophy turns up in leading journals constantly. One of the people you critiqued in your very good paper attacking arguments for the singularity is Dave Chalmers, and you literally don't get more professionally distinguished in analytic philosophy than Dave. Your stuff criticizing long-termism seems to have made it into top journals too when I checked, which indicates there certainly are people who think it is not too silly to be worth refuting: https://www.dthorstad.com/papers

Richard Y Chappell🔸 @ 2025-12-29T16:08 (+4)

Hi David, I'm afraid you might have gotten caught up in a tangent here! The main point of my comment was that your post criticizes me on the basis of a misrepresentation. You claim that my "primary argumentative move is to assign nontrivial probabilities without substantial new evidence," but actually that's false. That's just not what my blog post was about.

In retrospect, I think my attempt to briefly summarize what my post was about was too breezy, and misled many into thinking that its point was trivial. But it really isn't. (In fact, I'd say that my core point there about taking higher-order uncertainty into account is far more substantial and widely neglected than the "naming game" fallacy that you discuss in the present post!) I mention in another comment how it applied to Schwitzgebel's "negligibility argument" against longtermism, for example, where he very explicitly relies on a single constant probability model in order to make his case. Failing to adequately take model uncertainty into account is a subtle and easily-overlooked mistake!

A lot of your comment here seems to misunderstand my criticism of your earlier paper. I'm not objecting that you failed to share your personal probabilities. I'm objecting that your paper gives the impression that longtermism is undermined so long as the time of perils hypothesis is judged to be likely false. But actually the key question is whether its probability is negligible. Your paper fails to make clear what the key question to assess is, and the point of my 'Rule High Stakes In' post is to explain why it's really the question of negligibility that matters.

To keep discussions clean and clear, I'd prefer to continue discussion of my other post over on that post rather than here. Again, my objection to this post is simply that it misrepresented me.

David Mathers🔸 @ 2025-12-26T20:33 (+3)

Obviously David, as a highly trained moral philosopher with years of engagement with EA, understands how expected value works though. I think the dispute must really be about whether to assign time of perils very low credence. (A dispute where I would probably side with you if "very low" is below, say, 1 in 10,000.)

Richard Y Chappell🔸 @ 2025-12-26T21:25 (+9)

There's "understanding" in the weak sense of having the info tokened in a belief-box somewhere, and then there's understanding in the sense of never falling for tempting-but-fallacious inferences like those I discuss in my post.

Have you read the paper I was responding to? I really don't think it's at all "obvious" that all "highly trained moral philosophers" have internalized the point I make in my blog post (that was the whole point of my writing it!), and I offered textual support. For example, Thorstad wrote: "the time of perils hypothesis is probably false. I conclude that existential risk pessimism may tell against the overwhelming importance of existential risk mitigation." This is a strange thing to write if he recognized that merely being "probably false" doesn't suffice to threaten the longtermist argument! 

(Edited to add: the obvious reading is that he's making precisely the sort of "best model fallacy" that I critique in my post: assessing which empirical model we should regard as true, and then determining expected value on the basis of that one model. Even very senior philosophers, like Eric Schwitzgebel, have made the same mistake.)

Going back to the OP's claims about what is or isn't "a good way to argue," I think it's important to pay attention to the actual text of what someone wrote. That's what my blog post did, and it's annoying to be subject to criticism (and now downvoting) from people who aren't willing to extend the same basic courtesy to me.

David Mathers🔸 @ 2025-12-29T10:15 (+4)

Fair point: when I re-checked the paper, it doesn't clearly and explicitly display knowledge of the point you are making. I still highly doubt that Thorstad really misunderstands it, though. I think he was probably just not being super-careful.

TFD @ 2025-12-30T13:44 (+1)

This sort of "many gods"-style response is precisely what I was referring to with my parenthetical: "unless one inverts the high stakes in a way that cancels out the other high-stakes possibility."

I think you are making some unstated assumptions that it would be helpful to make explicit. You say your argument is basically just explaining how expected values work, but it doesn't seem like that is true to me, I think you need to make some assumptions unrelated to how expected values work for your argument to go through.

If I were to cast your argument in the language of "how expected values work" it would go like this:

An expected value is the sum of a bunch of terms that involve multiplying an outcome by its probability, so of the form x * p, where x is the outcome (usually represented by some number) and p is the probability associated with that outcome. To get the EV we take terms like that representing every possible outcome and add them up.

Because these terms have two parts, the term as a whole can be large even if the probability is small. So, the overall EV can be driven primarily by a small probability of a large positive outcome because it is dominated by this one large term, which is large even when the probability is small. We rule high stakes in, not out.

The problem is that this argument doesn't work without further assumptions. In my version I said "can be driven". I think your conclusion requires "is driven", which doesn't follow. Because there are other terms in the EV calculation, their sum could be negative and of sufficient magnitude that the overall EV is low or negative even if one term is large and positive. This doesn't require that any particular term in the sum has any particular relationship to the large positive term such that it is "inverting" that term; while that would be sufficient, it isn't the only way for the overall EV to be small/negative. There could be a mix of moderate negative terms that adds up to enough to reduce the overall EV. Nothing about this seems weird or controversial to me. For example, a standard normal distribution has large positive values with small probabilities but has an expectation of zero.
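To make this concrete, here is a toy discrete distribution (all numbers invented for illustration): one term is huge and positive, no single negative term "inverts" it, and yet the overall expected value is negative.

```python
# Toy distribution: the EV is a sum of outcome * probability terms. One term is
# enormous and positive, no single negative term offsets it on its own, and yet
# the total is still negative. All numbers are invented for illustration.
outcomes_and_probabilities = [
    (1e12, 1e-9),            # huge positive outcome, tiny probability: contributes +1000
    (-2000.0, 0.4),          # contributes -800
    (-1500.0, 0.35),         # contributes -525
    (-1000.0, 0.249999999),  # contributes about -250
]

expected_value = sum(x * p for x, p in outcomes_and_probabilities)
print(expected_value)  # about -575
```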

I think you need to be more explicit about the assumptions you are making that result in your desired conclusion. In my view, part of the point of Thorstad's "many gods" response is that it demonstrates that once we start picking apart these assumptions we essentially collapse back to having to model the entire space of possibilities. That is suggested by what you say here:

I don't think that dystopian "time of carols" scenarios are remotely as credible as the time of perils hypothesis.

The issue isn't that the "time of carols" is super plausible, it's that if your response is to include it as a term in the EV and argue the sum is still positive, then it seems like your original argument kind of collapses. We are no longer "ruling stakes in". We now also have to actually add in all those other terms as well before we can know the final result.

I could imagine there are assumptions that might make your argument go through, but I think you need to make them explicit and argue for them, rather than claiming your conclusion follows from "how expected value works".

Richard Y Chappell🔸 @ 2025-12-30T15:28 (+2)

The responses to my comment have provided a real object lesson to me about how a rough throwaway remark (in this case: my attempt to very briefly indicate what my other post was about) can badly distract readers from one's actual point! Perhaps I would have done better to entirely leave out any positive attempt to here describe the content of my other post, and merely offer the negative claim that it wasn't about asserting specific probabilities.

My brief characterization was not especially well optimized for conveying the complex dialectic in the other post. Nor was it asserting that my conclusion was logically unassailable. I keep saying that if anyone wants to engage with my old post, I'd prefer that they did so in the comments to that post—ensuring that they engage with the real post rather than the inadequate summary I gave here. My ultra-brief summary is not an adequate substitute, and was never intended to be engaged with as such.

On the substantive point: Of course, ideally one would like to be able to "model the entire space of possibilities". But as finite creatures, we need heuristics. If you think my other post was offering a bad heuristic for approximating EV, I'm happy to discuss that more over there.

TFD @ 2025-12-30T16:06 (+1)

I think you may be underestimating to what extent the responses you are getting do speak to the core content of your post, but I will leave a comment there to go into it more.

Vasco Grilo🔸 @ 2025-12-27T22:07 (+8)

Thanks for sharing! I recently left some related comments on a post from Bentham’s Bulldog, and discussed it in a podcast with him.