Terminate deliberation based on resilience, not certainty

By Gregory Lewis🔸 @ 2022-06-05T20:08 (+152)

BLUF:

“We should ponder until it no longer feels right to ponder, and then to choose one of the acts it feels most right to choose… If pondering comes at a cost, we should ponder only if it seems we will be able to separate better options from worse options quickly enough to warrant the pondering”  - Trammell  

Introduction

Many choices are highly uncertain and highly consequential, and it is difficult to develop a highly confident impression of which option is best. In EA-land, the archetypal example is career choice, but similar dilemmas are common in corporate decision-making (e.g. grant-making, organisational strategy) and life generally (e.g. “Should I marry Alice?” “Should I have children with Bob?”). 

Thinking carefully about these choices is wise, and my impression is that people tend to err in the direction of too little contemplation - stumbling over important thresholds that set the stage for the rest of their lives. One key message of effective altruism (perhaps the main one) is that our altruistic efforts err in a similar direction. Pace more cynical explanations, I think (e.g.) the typical charitable donor (or typical person) is “using reason to try and do good”. Yet they use reason too little and decide too soon whilst the ‘returns to (further) reason’ remain high, and so typically fall far short of what they could have accomplished.  

One can still have too much of a good thing: ‘reasonableness’, ‘prudence’, or even ‘caution’ can be wise, but ‘indecisiveness’ less so. I suspect many can recall occasions of “analysis paralysis”, or even occasions when they suspect prolonged fretting worsened the quality of the decision they finally made. I think folks in EA-land tend to err in this opposite direction, and I find myself giving similar sorts of counsel on these themes to those who (perhaps unwisely) seek my advice. So I write. 

Certainty, resilience, and value of (accessible) information 

It is widely appreciated that we can be more certain (or confident) of some things than others: I can be essentially certain I have two eyes, whilst guessing with little confidence whether it will rain tomorrow. We can often (and should even oftener) use numbers and probability to express different levels of certainty - e.g. P(I have two eyes) ~ 1 (- 10^-(lots)); P(rain tomorrow) ~ 0.5.

Less widely appreciated is the idea that our beliefs can vary not only in their certainty but how much we expect this certainty to change. This ‘second-order certainty’ sometimes goes under the heading ‘credal (¬)fragility’ or credal resilience. These can come apart, especially in cases of uncertainty. [1] I might think the chances of ‘The next coin I flip will land heads’ and ‘Rain tomorrow’ are 50/50, but I’d be much more surprised if my confidence in the former changed to 90% than the latter: coins tend fair, whilst real weather forecasts I consult often surprise my own guesswork, and prove much more reliable. For coins, I have resilient uncertainty; for the weather, non-resilient uncertainty.

Credal resilience is - roughly - a forecast of the range or spread of our future credences. So when applied to ourselves, it is partly a measure of what resources we will be able to access to improve our guesswork. My great-great-great-grandfather, sans access to good meteorology, probably had much more resilient uncertainty about tomorrow’s weather. However, the ‘resource’ need not be more information; sometimes it is simply more ‘thinking time’: a ‘snap judgement’ I make on a complex matter may also be one I expect to shift markedly if I take time to think it through more carefully, even if I only spend that time trying to better weigh information I already possess.  

Thus credal resilience is one part of value of information (or value of contemplation): all else equal, information (or contemplation) applied to a less resilient belief is more valuable than a more resilient one, as there’s a higher expected ‘yield’ in terms of credences changing (hopefully/expectedly for the better). Of course, all else is seldom equal; accuracy on some beliefs can be much more important than others: if I was about to ‘call the toss’ for a high-profile sporting event, eking out even a minuscule ‘edge’ might be worth much more than the much easier to obtain, and much more informative, weather forecast for tomorrow.

Decision making under (resilient?) uncertainty

Most ‘real life’ decision-making under uncertainty (so most real life decision-making in general) has the hidden bonus option of ‘postpone the decision to think about your options more’. When is taking the bonus option actually a bonus, rather than a penalty?

Simply put: “Go with your best guess when you’re confident enough” is the wrong answer, and “Go with your best guess when you’re resilient enough” is the right one. 

The first approach often works, and its motivation is understandable. For low-stakes decisions (“should I get this or that for dinner?”), I might be happy only being 60% confident I am taking the better option. For many high-stakes decisions (e.g. “what direction should I take my career for the next several years?”) I’d want to be at least 90% sure I’m making the right call. 

But you can’t always get what you want, and the world offers no warranty you can make all (or any) of your important decisions with satisfying confidence. Thus this approach runs aground in cases of resilient uncertainty: you want to be 90% sure, but you are 60% sure, and try as you might you stay 60% sure. Yet you keep trying in the increasingly forlorn hope the next attempt will excavate the crucial nugget of information which makes the decision clear.

Deciding once you are resilient enough avoids getting stuck in this way. In this approach, the ‘decision to decide’ is not based on achieving some aspirational level of certainty, but comparing the marginal benefits of better decision accuracy versus earlier decision execution. If my credence for “I should - all things considered [2] - go to medical school” is 0.6, but I thought it would change by 0.2 on average if I spent a gap year to think it over (e.g. a distribution of Beta(3,2)), maybe waiting a year makes sense: there’s maybe a 30% chance I would think medical school wasn’t the right choice after all, and averting that risk may be worth more than the 70% chance further reflection gets the same result and I delay the decision for a year.   

Suppose I take that year, and now my credence is 0.58, but I think this would only change by +/- 0.05 if I took another year out (e.g. Beta(55, 40)): my deliberation made me more uncertain, but this uncertainty is much more resilient. Taking another gap year to deliberate looks less reasonable: although I think there’s a 42% chance this decision is the wrong call, there’s only a 6% chance I would change my mind with another year of agonising. My best guess is resilient enough; I should take the plunge.[3]
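For the quantitatively minded, these toy numbers can be checked in a few lines of Python - a minimal sketch using scipy, where the 0.5 threshold stands in for “I would no longer think medical school the right choice”, and the standard deviation proxies the “average swing” in credence:

```python
# Sketch: checking the toy Beta numbers from the text with scipy.
from scipy.stats import beta

for label, a, b in [("before gap year", 3, 2), ("after gap year", 55, 40)]:
    dist = beta(a, b)
    print(f"{label}: Beta({a},{b})")
    print(f"  mean credence       = {dist.mean():.2f}")   # ~0.60 and ~0.58
    print(f"  expected swing (sd) = {dist.std():.2f}")    # ~0.20 and ~0.05
    print(f"  P(credence < 0.5)   = {dist.cdf(0.5):.2f}") # ~0.31 and ~0.06
```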

Track records are handy in practically assessing credal resilience

This is all easier said than done. One challenge is that applying hard numbers to a felt sense of uncertainty is hard. Going further and (as in the toy example) applying hard numbers to forecast the distribution of this felt sense of uncertainty after some further hypothesised information/contemplation - i.e. assessing credal resilience - looks nigh-impossible. “I’m not sure, then I thought about it for a bit, now I’m still not sure. What now?”

However, this sort of ‘auto-epistemic-forecasting’, like geopolitical forecasting, is not as hard as it first seems. In the latter, base rates, reference classes, and trend extrapolation are often enough to at least get one in the right ballpark. In a similar way, tracking your credal volatility over previous deliberation can give you a good idea of the resilience of your uncertainty, and of the value of further deliberation.

Imagine a graph which has ‘Confidence I should do X’ on the Y-axis (sorry), and some metric of amount of deliberation (e.g. hours, number of people you’ve called for advice, pages in your option-assessment google doc) on the X-axis. What might this graph look like?

The easy case is where one can see confidence (the red line) trending in a direction with volatility (the blue dashed envelope) steadily decreasing. This suggests one's belief is already high-confidence and high-resilience, and - extrapolating forward - spending a lot more time deliberating is unlikely to change this picture much.


Graph 1

Per previous remarks, the same applies even if deliberation resolves to a lesser degree of confidence. This suggests resilience, and that it is maybe time to decide - as yet further deliberation can be expected to result in yet more minor perturbations around your (uncertain) best guess.

Graph 2

Alternatively, if the blue lines haven’t converged much (but are still converging), this suggests non-resilient uncertainty: further deliberation can reasonably be expected to give further resolution, and this might be worth the further cognitive investment.

Graph 3

The most difficult cases are ‘stable (or worsening) vacillation’. After perhaps some initial progress, you find your confidence keeps hopping up and down in a highly volatile way, and this volatility does not appear to be settling despite more and more time deliberating: one morning you’re 70% confident, that evening 40%, next week 55%, and so on. 

Given the post’s recommendation (in graphical terms) is ‘look at how quickly the blue lines come together’, probably the best option here is to decide when it is clear they are no longer converging. The returns to deliberation seem to have capped out, and although your best guess now is unlikely to be the same as your best guess at time t+n, it is unlikely to be worse in expectation either.

Graph 4
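In code, a caricature of these four pictures - a minimal sketch with arbitrary parameters, treating credence as a bounded random walk whose step size (the ‘blue envelope’) either shrinks with deliberation or does not:

```python
# Sketch: the four trajectories as caricature random walks.
# Illustrative parameters only -- not a calibrated model of rational belief.
import numpy as np

rng = np.random.default_rng(0)

def trace(start, drift, decay, steps=100, base_vol=0.08):
    c, out = start, [start]
    for t in range(steps):
        vol = base_vol * decay ** t  # shrinking (decay < 1) or constant envelope
        c = float(np.clip(c + drift * decay ** t + rng.normal(0, vol), 0.01, 0.99))
        out.append(c)
    return out

graph1 = trace(0.55, drift=0.004, decay=0.97)   # trends up, envelope narrows
graph2 = trace(0.55, drift=0.0,   decay=0.97)   # settles near an uncertain guess
graph3 = trace(0.50, drift=0.0,   decay=0.995)  # still (slowly) converging
graph4 = trace(0.50, drift=0.0,   decay=1.0)    # stable vacillation
```

Plotting each trace against its index reproduces the qualitative shapes described above.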


Naturally, few of us steadily track our credences re. some proposition over time as we deliberate further upon it (perhaps a practice worth attempting). Yet memory might be good enough for us to see what kind of picture applies. If I recall “I thought I should go to medical school, and after each ‘thing’ I did to inform this decision (e.g. do some research, shadow a doctor, work experience in a hospital) this impression steadily resolved”, perhaps it is the first or second graph. If I recall instead “I was certain I should become a doctor, but then I thought I should not because of something I read on the internet, but then I talked to my friend who still thought I should, and now I don’t know”, perhaps it is more the third graph.

This approach has other benefits. It can act as a check on the amounts of deliberation we are amassing, both in general and with respect to particular deliberative activities. In a general sense, we can benchmark how long quasi-idealized versions of ourselves would mull over a decision (at least bounded to some loose order of magnitude - e.g. “more than an hour, but less than a month”). Exceeding these bounds in either direction can be a tripwire to consider whether we are being too reckless or too diffident. It can also be applied to help gauge the value of particular types of information or deliberation: if my view on taking a job changed markedly on talking to an employee, maybe it would be worth me talking to a couple more; if my confidence has stayed at some middling percentage as my deliberation google doc increased from 10 pages to 30, maybe not much is likely to change if I add another 20 pages.
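For those who do keep such a log, here is a minimal sketch of what the ‘are the blue lines still converging?’ check might look like in Python - the function name, window, and threshold are arbitrary placeholders, not recommendations:

```python
# Sketch: a crude stopping heuristic over a hand-kept credence log.
import statistics

def keep_deliberating(credence_log, window=5, threshold=0.05):
    """Continue only while recent credence swings still exceed `threshold`.

    `credence_log` holds the credence recorded after each 'thing' done to
    inform the decision (a call, some shadowing, a doc revision...).
    """
    if len(credence_log) < window + 1:
        return True  # too little history to judge resilience
    recent = credence_log[-(window + 1):]
    swings = [abs(b - a) for a, b in zip(recent, recent[1:])]
    return statistics.mean(swings) > threshold

log = [0.40, 0.65, 0.55, 0.60, 0.58, 0.57, 0.59, 0.58]
print(keep_deliberating(log))  # False: the swings have settled; decide.
```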

Irresilient consolation

Although the practical challenges of deploying resilience to decision-making are not easy, I don’t think they are the main problem. In my experience, folks spending too long deliberating on a decision with resilient uncertainty are not happily investigating their dilemma, looking forward to figuring it out, yet blissfully unaware that their efforts are unlikely to pay off and that they would be better off just going with their best guess. Rather, their dilemma causes them unhappiness and anxiety, they find further deliberation aversive and stressful, and they often have some insight that this agonised deliberation is at least sub-optimal, if not counter-productive, if not futile. You at-least-kind-of know you should just decide, but resort to festinating (and festering) rumination to avoid confronting the uncertain dangers of decision. 

One of the commoner tragedies of the human condition is that insight is not sufficient for cure. I may know that airliners are extremely safe means of travel, that I plausibly took greater risks driving to the airport than on the flight itself, yet nonetheless check the wings haven’t fallen off after every grumble of turbulence. Yet although ‘knowing is less than half the battle’, it remains a useful ally - in the same way cognitive-behavioural therapy is more than, but also partly, ‘getting people to argue themselves out of their depression and anxiety’. 

The stuff above is my attempt to provide one such argument. There are others I’d often be minded to offer too, in ascending order of generality and defensibility:

“It’s not that big a deal”: Folks often overestimate the importance and irreversibility of their decisions: in the same way the anticipated impact of an adverse life event on wellbeing tends to be greater than its observed impact, the costs of making the wrong decision may be inflated. Of my medical school cohort, several became disenchanted with medicine at various points, often after committing multiple years to the vocation; others simply failed the course and had to leave. None of this is ideal, but it was seldom (~never) a disaster that soured their life forevermore.

Similarly, big decisions can often be (largely) unwound. Typical ‘exits’ from medical school are (if very early) leaving to re-apply to university; (if after a couple of years) converting their study into a quasi-improvised bachelor’s in biological science; (if later) graduating as ‘doctor’ but working in another career afterwards. This is again not ideal: facially, spending multiple years preparing for a career one will not go on to practice is unlikely to be optimal - but the expected costliness is not astronomical. 

Although I often say this, I don’t always. I think typical ‘EA dilemmas’ like “Should I take this job or that one?” “What should I major in?” are lower-stakes than the medical school examples above,[4] but that doesn’t mean all are. The world also offers no warranty that you’ll never face a decision where picking the wrong option is indeed hugely costly and very difficult to reverse. Doctrinal assertions otherwise are untrustworthy ‘therapy’ and unwise advice.

“Suck it and see”: trying an option can have much higher value of information than further deliberation. Experience is not only the teacher of fools: sometimes one has essentially exhausted all means to inform a decision in advance, leaving committing to one option and seeing how it goes as the only means left. Another way of looking at it: it might be more time-efficient to trial doing an option for X months rather than deliberating on whether to take it for X + n months. [5]

Obviously this consideration doesn’t apply for highly irreversible choices. But for choices across most of the irreversibility continuum, it can weigh in favour (and my impression is the “You also get information (and maybe the best available remaining information) from trying something directly, not just further deliberation” consideration is often neglected). 

Another caveat is when the option being trialled is some sort of co-operative endeavour: others might hope to rely on your commitment, so may not be thrilled at (e.g.) a 40% risk you’ll back out after a couple of months. However, if you’re on the same team, this seems apt for transparent negotiation. Maybe they’re happy to accept this risk, or maybe there’s a mutually beneficial deal between you which can be struck: e.g. you commit to this option for at least Y months, so even if you realise earlier that it is not for you, you will ‘stick it out’ a while to give them time to plan and mitigate for you backing out. [6]

“Inadequate amounts ventured, suboptimal amounts gained (in expectation)”: optimal strategy is reconciled to some exposure to the risks of mistake. As life is a risky prospect, you can seldom guarantee for yourself the best outcome: morons can get lucky, and few plans, no matter how wise, are proof against getting wrecked by sufficiently capricious fortune. You can’t promise yourself to always adopt the optimal ex ante strategy either, but this is a much more realistic and prudentially worthwhile ambition.

One element of an ‘ideal strategy’ is an ideal level of risk tolerance. Typically, some available risk reductions are costly, so there comes a point where accepting the remaining risk can be better than further attempts to eliminate it - cf. ‘Umeshisms’ like “If you never miss a flight, you are spending too much time at airports”.

Although the ideal level of risk tolerance varies (cf. “If you’re never in a road traffic accident, you’re driving too cautiously”) it is ~always some, and for altruistic benefit the optimal level of (pure) risk aversion is typically ~zero. Thus optimal policy will calculate many substantial risks as nonetheless worth taking. 

This can suck (worse, can be expected to suck in advance), especially in longtermist-land where the impact usually has some conjuncts which are “very uncertain, but plausibly very low probability”: maybe the best thing for you to do is devote yourself to some low probability hedge or insurance, where the likeliest ex post outcome was “this didn’t work” or (perhaps worse from the point of view of your own reflection) “this all was just tilting at windmills”. 

Perhaps one can reconcile oneself to the point of view of the universe: it matters what rather than who, so best complementing the pre-existing portfolio is the better metric to ‘keep score’; or perhaps one could remember that one really should hope the risks one devotes oneself to are illusory (and ideally, all of them are); or perhaps one should get over it: “angst over inadequate self-actualization” is neither an important cause area nor an important decision criterion. But ultimately, you should do the right thing, even if it sucks.

  1. ^

    High levels of certainty imply relatively high levels of credal resilience: the former places lower bounds on the latter. To motivate: suppose I was 99% sure of rain on Friday. Standard doctrine is that my current confidence should also be the expected value of my future confidence. So in the same way 99% confidence is inconsistent with ‘I think there’s a 10% chance that on Saturday I will believe P(rain Friday) = 0%’ (i.e. actually a 10% chance it doesn’t rain after all), it is also inconsistent with ‘I think there’s a 20% chance that in an hour I will be 60% confident it will rain on Friday’: even if the rest of my probability mass was at 100%, this nets out to 92% overall (0.8 × 1 + 0.2 × 0.6).
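    In symbols, this is conservation of expected evidence: \(P_{\text{now}} = \mathbb{E}[P_{\text{later}}]\), and here \(\mathbb{E}[P_{\text{later}}] = 0.2 \times 0.6 + 0.8 \times 1.0 = 0.92 \neq 0.99\).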

  2. ^

    Including, for example, all the (many) other options I have besides going to medical school.

  3. ^

    One family of caveats (mentioned here for want of a better location) is around externalities. Maybe my own resilient confidence is not always sufficient to justify choices I make that could adversely affect others. Likewise, a policy of action in these cases may go wrong when decision-makers are poorly calibrated and do not consult with one another enough (cf. the unilateralist’s curse). 

    I do not think this changes the main points, which apply to something like the resilience of one’s ‘reasonable, all things considered’ credence (including peer disagreement, option value, and sundry other considerations). Insofar as folks tend to err in the direction of being headstrong because they neglect their propensity to be overconfident, or fail to attend to their impact on others, it is worth stressing these as general reminders.

  4. ^

    In terms of multiplier stacking (q.v.), if not absolute face value (but the former matters more here).

  5. ^

    A related piece of advice I often give is to run ‘trial’ and ‘evaluation’ in series rather than in parallel. So, rather than commit to try X, but spend your time during this trial simultaneously agonising (with the ‘benefit’ of up-to-the-minute signals from how things are going so far) about whether you made the right choice, commit instead to try X for n months, and postpone evaluation (and the decision to keep going, stop, or whatever else) to time set aside afterwards.

  6. ^

    I guess the optimal policy of ‘flakiness’ versus ‘stickiness’ is very context-dependent. I can think of some people who kept ‘job hopping’ rapidly who I suspect would have been better served if they stuck with each option a while longer (leaving aside the likely ‘transaction costs’ on their colleagues). However, surveying my anecdata, I think I more often see folks sticking around too long in suboptimal (or just bad) activity. 

    One indirect indicator is I hear too frequently misguided reasons for sticking around like “I don’t want to make my boss sad by leaving”, or “I don’t want to make life difficult for my colleagues filling in for me and finding a replacement”: employment contracts are not marriage vows, and your obligations here are (rightly) fully discharged by giving fair notice and helping transition/handover.  


Lizka @ 2022-06-06T10:30 (+14)

Thanks for this post! As someone who's agonized over some career (and other) decisions, I really appreciate it. It also seems to apply to e.g. shallow investigations into potential problems/causes (e.g., topic). Also, I love the graphs. 

A few relevant posts and thoughts: 

Mauricio @ 2022-06-05T23:07 (+14)

Thanks for this! I wonder how common or rare the third [edit: oops, meant "fourth"] type of graph is. I have an intuition that there's something weird or off about having beliefs that act that way (or thinking you do), but I'm having trouble formalizing why. Some attempts:

Thomas Kwa @ 2022-06-06T18:01 (+12)

Succinctly, beliefs should behave like a martingale, and the third and fourth graphs are probably not a martingale. It's possible to update based on your expected evidence and still get graphs like in 3 or 4, but this means you're in an actually unlikely world.
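A minimal sketch of the point (my own illustration, with made-up likelihoods): under honest Bayesian updating, the expected posterior equals the current credence.

```python
# Sketch: Bayesian credences form a martingale (made-up likelihoods).
p = 0.5                        # current P(hypothesis H)
like_H, like_notH = 0.7, 0.4   # P(see evidence | H), P(see evidence | not H)

p_e = p * like_H + (1 - p) * like_notH      # P(see the evidence)
post_yes = p * like_H / p_e                 # posterior if evidence seen
post_no = p * (1 - like_H) / (1 - p_e)      # posterior if not seen
print(p_e * post_yes + (1 - p_e) * post_no) # 0.5 -- equals the prior
```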

That said, I think it's good to keep track of emotional updates as well as logical Bayesian ones, and those can behave however.

Gregory_Lewis @ 2022-06-08T10:57 (+7)

Thanks. With the benefit of hindsight, the blue envelopes probably should have been dropped from the graphs, leaving the trace alone:

  • As you and Kwa note, having a  'static' envelope you are bumbling between looks like a violation of the martingale property - the envelope should be tracking the current value more (but I was too lazy to draw that).
  • I agree all else equal you should expect resilience to increase with more deliberation - as you say, you are moving towards the limit of perfect knowledge with more work. Perhaps graphs 3 and 4 [I've added numbers to make referring easier] could signal that you're moving from 10.1% to 10.2% in this hypothetical range from ignorance to omniscience.
  • Related to Kwa's point, another benefit of tracking one's beliefs is not only figuring out when to terminate deliberation, but also to 'keep score' of how rational one's beliefs appear to be. Continued volatility (in G3, but also G4) could mean you are (rationally) in a situation where your weak prior is getting buffeted by a lot of strong evidence; but it could also mean you are under-damped and over-updating. 
Charles He @ 2022-06-06T03:28 (+2)

This seems sort of obvious so maybe I’m missing something?

Imagine there are two types of bins. One bin only has red balls. The other bin has both red and yellow balls in equal proportion.

You have one bin and you don’t know which one. You pick up balls successively from the bin, and you are estimating the color of the next ball you will pick up.

Imagine picking up 5 balls in a row that are red. You logically believe that the next ball will be red with more than 50% probability.

Then, for the 6th ball, it’s yellow and you’re back to 50%.
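In code (a minimal sketch - the 50/50 prior over the two bins isn't stated above, so that's my assumption):

```python
# Sketch of the bin example, with an assumed 50/50 prior over the bins.
def p_next_red(reds_seen, saw_yellow):
    if saw_yellow:
        return 0.5                        # must be the mixed bin
    # P(all-red bin | n reds in a row): likelihoods 1 vs (1/2)^n.
    p_allred = 1 / (1 + 0.5 ** reds_seen)
    return p_allred * 1.0 + (1 - p_allred) * 0.5

print(p_next_red(5, False))  # ~0.985 after five reds in a row
print(p_next_red(5, True))   # 0.5 once a yellow appears
```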

I think the bin analogy seems dynamic, but it applies to assessing reports, when you're trying to figure out a latent state and there isn't a time element.

There are many situations in the world that seem to be like this, but it feels ideological or sophomoric to say it?

  • Fukuyama's end of history, where confidence seems misplaced on the dominance and stability of democracy
  • Qing dynasty belief in its material superiority over outsiders
  • Reading Charles He’s forum comment history and deciding if he’s reasonable
Mauricio @ 2022-06-06T04:43 (+4)

(I'm understanding your comment as providing an example of a situation where volatility goes to 0 with additional evidence.)

I agree it's clear that this happens in some situations -- it's less immediately obvious to me whether this happens in every possible situation.

(Feel free to let me know if I misread. I'm also not sure what you mean by "like this.")

Charles He @ 2022-06-06T05:02 (+2)

(I'm understanding your comment as providing an example of a situation where volatility goes to 0 with additional evidence.)

I think "volatility" (being able to predict yellow or red ball) is going higher?

But I feel like there is a real chance I'm talking past you and maybe wrong?

For example, you might be talking about forming beliefs about volatility. In my example, beliefs about volatility upon seeing the yellow ball are now more stable over time (even if volatility rises) as you know which bin you’re drawing from. 
 

(Feel free to let me know if I misread. I'm also not sure what you mean by "like this.")

I guess I'm just repeating my example, where searches or explorations are revealing something like "a new latent state", so that previous information that was being used to form beliefs is no longer relevant.

It's true this statement doesn't have much evidence behind it (but partially because I'm sort of confused now what exactly the example is talking about).

Charles He @ 2022-06-06T05:33 (+5)

Ok, I didn’t understand the OP’s examples or what he was saying (so I sort of missed the point of his post). So I think he's saying that in the fourth example the range of reasonable beliefs could increase over time by collecting more information.

This seems unlikely and unnatural so I think you’re right. I retract my comment.

Mauricio @ 2022-06-06T06:36 (+4)

Ah sorry, I meant to use "volatility" to refer to something like "expected variance in one's estimate of their future beliefs," which is maybe what you refer to as "beliefs about volatility."

michel @ 2024-07-03T17:11 (+9)

FYI, I was making a difficult career decision a few months ago and found this post helpful. Thanks for writing it!

machinaut @ 2022-06-19T20:16 (+9)

I really like the idea here and think it's presented well.  (A+ use of illustrative graphs.)  The tradeoff of "invest more in pondering vs invest in exploring object-level options" is very common.

Two thoughts I'd like to add to this post:

re-initiating deliberation & non-monotonic credence

I think that the credal ranges are not monotonically narrowing, mostly because we're imperfect/bounded reasoners.

There are events in people's lives / observations / etc. that cause us to realize that we've incorrectly narrowed credence in the past and must now re-expand our uncertainty.

This theory still makes a lot of sense in that world -- where termination might be followed up by re-initiation in the future, and uncertainty-expanding events would constitute a clear trigger for that re-initiation.

Value of information for updating our meta-uncertainty

Given the above point about our judgement on when/how to narrow credal ranges being flawed, I think we should care about improving at that meta-judgement.

This adds an additional value of information to pondering more -- that we improve our judgement for when to stop pondering.

I think this is important to call out because this update is highly asymmetric -- it's much easier to get feedback that you pondered too long (by doing extra pondering for very little update) than to get feedback that you pondered too short (because you don't know what you'd think if you pondered longer).

In cases where there is this very asymmetric value of information, I think a useful heuristic is "if in doubt, ponder too long, rather than too short" (this doesn't really account for the fact that it's not Yes/No as much as it is opportunity cost of other actions, but hopefully the heuristic can be adapted to be useful)

(Coda: this seems more like rationality than the modal EA forum post -- maybe would get additional useful/insightful comments on LW)

Sophia @ 2022-06-06T10:42 (+9)

This post was great. 

I feel like my thinking around my daily diet is a bit like the third graph (should I be vegan? Should I not care because my daily meal choices are small compared to what I do with my career/how productive I am, if I have a high enough probability of getting myself on a high-impact career pathway? I find considerations just tend to bounce me around rather than me settling on a confident view, despite having thought about this on and off for many years)

Benjamin Stewart @ 2022-06-05T23:52 (+9)

This was great. This question may be too meta for its own good:

Are there plausible situations where the trend of volatility isn't stable over time? I.e. if the blue-lined envelope appears to be narrowing over deliberative effort, but then again expands wildly, or vice-versa. Call it 'chaotic volatility' for reference. 

 This might be just an extreme version of the fourth graph, but it actually seems even worse. At least in the fourth graph you might be able to recognise you're in stable or worsening volatility - in chaotic volatility you could be mistaken about what kind of epistemic situation you're in. You could think there's little further to be gained, but you're actually just before a period of narrowing volatility. Or think you're settled and confident, but with a little more deliberation a new planet swims into your ken.

One example I could think of is if someone in the general public is doing some standard career-choice agonising, doing trials, talking to people, etc. and is getting greater resilience for an option. And then on a little further reading they find 80,000 Hours and EA, and all of a sudden there's a ton more to think about and their previous resilience breaks. 

I don't know if anything action-relevant comes from considering this situation, beyond what the post already laid out. Maybe it's just trying to keep an eye out for possible  quasi-'crucial considerations' for their own choices or something. 

Ramiro @ 2022-06-09T13:21 (+6)

I want to print a poster with your last paragraph

Dimitri Molerov @ 2022-06-26T21:28 (+3)

What a wonderful post!

I wonder if credal resilience (from the reasoner side) is the same as belief stability (from the belief side).

This could be turned into an easy online decision-support tool: input your goal, input your success metrics, guess your range of low-to-high impact from pursuing option X vs. the opportunity cost of option Z, and rate how certain you feel about your decision. Would one of the following increase your confidence: [set of options for decision support]. If you are building something, let me know.

I second machinaut and Benjamin Stewart's comments. 

My current work is in the area of rationality, (mis)information and information search, where new info gained could help uncover own biases or add weight to the alternative decision (while arguments continue to aggregate). In addition to narrowing down the corridor for a change of mind over time, there is a chance that a qualitative epistemic leap may occur (e.g., when your horizon of available options expands through new 'unknown unknowns' info, or when a new larger framework is uncovered that requires reappraisal). Here the range of options expands, before narrowing down again - subjective uncertainty in the shape of a fir tree pointed to the right. Including these considerations in decisions might not be too hard with a bit of training.

Moreover, a decision could be transformed by analyzing features of the options and choosing a third ‘best of both’ option, or no decision at all. Not sure how to represent these.

While the volatility from unknown unknowns might seem to support epistemic relativism at first, any new information warranting an expansion would seem to also imply a broader or more complex view. Over time, it becomes increasingly unlikely to find such new information that supports a 1Upped worldview. So after initial known types of resources are exhausted, and credal resilience increases, one can reasonably settle for a decision - while remaining open to 'game-changing information'. But if game-changing information is obtained, one could also be excused to reappraise and reverse the earlier decision; in this case reversibility/transition paths should be considered more prominently to minimize sunk cost.

MattBall @ 2022-06-08T21:23 (+1)

Nah. Never terminate deliberation.

;-)