[Linkpost] Bracketing Cluelessness

By JesseClifton @ 2025-09-24T20:44 (+38)

This is a linkpost to https://longtermrisk.org/files/Bracketing_Cluelessness.pdf

“Bracketing Cluelessness” is a philosophy paper by Sylvester Kollin, Anthony DiGiovanni, Nicolas Macé, and myself, which presents a new approach to decision-making in the face of consequentialist cluelessness.

Abstract:

Consequentialists must take into account all possible consequences of their actions, including those in the far future. But due to the difficulty of getting a grasp on these consequences and producing non-arbitrary probabilities for them, it seems that consequentialists should often consider themselves clueless about which option is best. Contrary to orthodox consequentialism, however, there is a common-sense intuition that one should bracket those consequences which one is clueless about. Building on a model involving imprecise probability, we develop two novel alternatives to orthodoxy which capture this intuition. On bottom-up bracketing, we set aside those beneficiaries for whom we are clueless what would be best, and then base the overall verdict on the remainder. On top-down bracketing, we instead base the overall verdict on what would be best for the largest subsets of beneficiaries relative to which we are not clueless. The two are not equivalent: the former violates statewise dominance, whereas the latter does not. The main objection which applies to both kinds of bracketing is that they do not rank prospects acyclically. Our response includes showing how a natural way of generalising bracketing to the dynamic setting avoids value-pumps. Finally, we argue that bracketing has important implications for real-world altruistic decision-makers, favouring neartermism over longtermism.
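For readers who want something concrete before opening the PDF, here is a minimal toy sketch of the bottom-up idea: bracket any beneficiary for whom the sign of the expected difference between the options varies across the credal set, and base the verdict on the rest. This is only an illustration in the spirit of the abstract, not the paper's formal model; the states, welfare numbers, and credal set below are made up.

```python
# Toy sketch of bottom-up bracketing (an illustration of the idea in the
# abstract, not the paper's formal definitions; all numbers are made up).

# A credal set: several probability measures over two states of the world.
credal_set = [
    {"s1": 0.2, "s2": 0.8},
    {"s1": 0.5, "s2": 0.5},
    {"s1": 0.8, "s2": 0.2},
]

# welfare[option][beneficiary][state] = that beneficiary's welfare in that state.
welfare = {
    "A": {"near_term": {"s1": 2, "s2": 2}, "far_future": {"s1": 10, "s2": -10}},
    "B": {"near_term": {"s1": 1, "s2": 1}, "far_future": {"s1": -10, "s2": 10}},
}

def expected_diff(beneficiary, p):
    """Expected welfare difference (A minus B) for one beneficiary under measure p."""
    return sum(
        p[s] * (welfare["A"][beneficiary][s] - welfare["B"][beneficiary][s])
        for s in p
    )

def bottom_up_verdict(beneficiaries):
    """Bracket beneficiaries whose expected difference does not have the same
    strict sign under every measure; sum the expected differences of the rest."""
    kept = []
    for b in beneficiaries:
        diffs = [expected_diff(b, p) for p in credal_set]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            kept.append(b)   # determinate comparison: keep
        # otherwise: clueless about b, so bracket (set aside)
    totals = [sum(expected_diff(b, p) for b in kept) for p in credal_set]
    return kept, totals

kept, totals = bottom_up_verdict(["near_term", "far_future"])
print(kept)    # ['near_term']   -- the far-future beneficiary is bracketed
print(totals)  # [1.0, 1.0, 1.0] -- A beats B on the non-bracketed remainder
```

In this toy case the far-future beneficiary is bracketed and the verdict is driven entirely by the near-term one, which is the flavour of the neartermist implication mentioned at the end of the abstract. Top-down bracketing would instead look at the largest subsets of beneficiaries for which the comparison is determinate, and the two can come apart.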


Ben_West🔸 @ 2025-09-25T03:00 (+11)

Thanks for writing this up and sharing. I find myself pretty sympathetic to the idea that people generally do better when they focus on the first-order consequences of their actions, and I appreciate this as a formalization of that intuition.

As with many claims about incomparability, I want to wave my arms wildly here and say "But obviously these things are comparable!" E.g. take two probability measures P1 and P2 from your credal set and some event X such that P1(X) is vanishingly small and P2(X) is not. I offer you the following bet: if X you give me $10^10, if not I give you $1. I understand you to be saying that it's indeterminate whether taking this bet is good, since its expected value is positive under P1 but negative under P2. But surely you wouldn't actually take this bet?
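To make the arithmetic explicit (the two values of P(X) below are made-up endpoints for a very wide credal set):

```python
# Expected value of taking the bet under two measures from a very wide credal set.
# The two values of P(X) are made up purely to illustrate the indeterminacy.
payoff_if_X = -10**10      # if X, you pay me $10^10
payoff_if_not_X = 1        # if not X, I pay you $1

for p_x in (1e-15, 0.5):
    ev = p_x * payoff_if_X + (1 - p_x) * payoff_if_not_X
    print(f"P(X) = {p_x}: EV of taking the bet = {ev:.3g}")
# P(X) = 1e-15 -> EV ≈ +1      (taking the bet looks fine)
# P(X) = 0.5   -> EV ≈ -5e+09  (taking the bet looks disastrous)
```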

Am I misunderstanding something?

JesseClifton @ 2025-09-26T12:29 (+8)

Thanks, Ben!

It depends on what the X is. In most real-world cases I don’t think our imprecision ought to be that extreme. (It will also be vague: not “[0,1]” or “(0.01, 0.99)”, but “eh, seems like lots of different precise beliefs are defensible as long as they’re not super close to 1 or 0”, and in that state it will feel reasonable to say that we should strictly prefer such an extreme bet.)

But FWIW I do think there are hypothetical cases where incomparability looks correct. Suppose a demon appears to me and says “The F of every X is between 0 and 1. What’s the probability that the F of the next X is less than ½?” I have no clue what X and F mean. In particular, I have no idea if F is in “natural” units that would compel me to put a uniform prior over F-values. Why not a uniform prior over F^2 or F^-100? So it does seem sensible to have maximally imprecise beliefs here, and to say it’s indeterminate whether we should take bets like yours.
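To see how much hangs on the choice of units: if the “natural” parameter were F^k for some exponent k and a uniform prior were placed on that, the answer moves around a lot (the exponents below are arbitrary, just to illustrate):

```python
# If F**k is uniform on [0, 1], then P(F < 1/2) = P(F**k < (1/2)**k) = (1/2)**k.
# The exponents are arbitrary; the point is only how fast the answer moves.
for k in (1, 2, 10, 100):
    print(f"uniform prior on F^{k}: P(F < 1/2) = {0.5 ** k:.3g}")
# F^1   -> 0.5
# F^2   -> 0.25
# F^10  -> ~0.001
# F^100 -> ~7.9e-31
```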

Yes, it feels bad not to strictly prefer a bet which pays 10^10 if F < ½. But adopting a precise prior would commit me to turning down other bets that look extremely good on other arbitrarily-chosen priors, which also feels bad.

Michael St Jules 🔸 @ 2025-10-09T12:35 (+2)

FWIW, unless you have reason otherwise (you may very well think some Fs are more likely than others), there's some symmetry here between any function F and the function 1 - F, and I think if you apply it, you could say P(F > 1/2) = P(1 - F < 1/2) = P(F < 1/2). Since P(F < 1/2) + P(F = 1/2) + P(F > 1/2) = 1, this gives P(F < 1/2) ≤ 1/2, and strictly less iff P(F = 1/2) > 0.

If you can rule out P(F = 1/2) > 0 (say by an additional assumption), the probability would just be 1/2; if instead the bet were on F ≤ 1/2 rather than F < 1/2, it would be at least 1/2.
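A quick numerical sanity check of the symmetry claim, with a few arbitrary priors on [0, 1] that are symmetric under F → 1 - F:

```python
# Monte Carlo check: for priors symmetric under F -> 1 - F, P(F < 1/2) <= 1/2,
# with strict inequality exactly when P(F = 1/2) > 0. The priors are arbitrary examples.
import random

def p_below_half(sample, n=200_000):
    """Estimate P(F < 1/2) for a prior given as a sampler."""
    return sum(sample() < 0.5 for _ in range(n)) / n

symmetric_priors = {
    "uniform on [0, 1]": random.random,
    "two-point {0.2, 0.8}": lambda: random.choice([0.2, 0.8]),
    "half an atom at 0.5, half uniform": lambda: 0.5 if random.random() < 0.5 else random.random(),
}

for name, sample in symmetric_priors.items():
    print(name, round(p_below_half(sample), 3))
# uniform on [0, 1]                 -> ~0.5
# two-point {0.2, 0.8}              -> ~0.5
# half an atom at 0.5, half uniform -> ~0.25  (P(F = 1/2) = 1/2 > 0, so P(F < 1/2) < 1/2)
```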

Ben_West🔸 @ 2025-10-03T14:16 (+2)

Thanks, Jesse. Is there a way that we could actually do this? Like choose some F(X) which is unknown to both of us but guaranteed to be between 0 and 1, and if it's less than 1/2 I pay you a dollar and if it's greater than 1/2 you pay me some large amount of money.

I feel pretty confident I would take that bet if the selection of F were not obviously antagonistic towards me, but maybe I'm not understanding the types of scenarios you are imagining.

JesseClifton @ 2025-10-08T17:21 (+1)

Good question! Yeah, I can’t think of a real-world process about which I’d want to have maximally imprecise beliefs. (The point of choosing a “demon” in the example is that we would have good reason to worry the process is adversarial if we’re talking about a demon…)

(Is this supposed to be part of an argument against imprecision in general / sufficient imprecision to imply consequentialist cluelessness? Because I don’t think you need anywhere near maximally imprecise beliefs for that. The examples in the paper just use the range [0,1] for simplicity.)