AGB's Quick takes

By AGB 🔸 @ 2020-12-28T16:13 (+6)

AGB @ 2020-12-28T16:13 (+39)

TL;DR: I'm curious what the most detailed or strongly-evidenced arguments are in favour of extinction risk eventually falling to extremely low levels. 

An argument I often see goes something to the effect of 'we have a lot of uncertainty about the future, and given that it seems hard to be >99% confident that humanity will last <1 billion years'. As written, this seems like a case of getting anchored by percentages and failing to process just how long one billion years really is (weak supporting evidence for the latter is that I sometimes see eerily similar estimates for one million years...). Perhaps this is my finance background talking, but I can easily imagine a world where the dominant way to express probability is basis points and our go-to probability for a 'very unlikely thing' is 1bp rather than 1%, which is 100x smaller. Or we could have a generic probability analogue of micromorts, which are 100x smaller still, etc. Yet such choices of language shouldn't affect our decisions or beliefs about the best thing to do. 

On the object level, one type of event I'm allowed to be extremely confident about is a large conjunction of events; if I flip a fair coin 30 times, the chance of getting 30 heads is approximately one in a billion.

Humanity surviving for a long time has a similar property: if you think that civilisation has a 50% chance of making it through the next 10,000 years, then conditional on that a 50% chance of making it through the next 20,000 years, then 50% for the next 40,000 years, etc. (applying a common rule of thumb for estimating uncertain lifetimes, starting from the observation that civilisation has been around for ~10,000 years so far), the odds of surviving a billion years come out somewhere between 1 in 2^16 and 1 in 2^17, AKA roughly 0.001%.
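
A minimal sketch of that arithmetic in Python (the 10,000-year starting age and the 50%-per-doubling rule are just the assumptions stated above):

```python
# Lindy-style estimate: a 50% chance of surviving each successive doubling
# of civilisation's age, starting from the ~10,000 years survived so far.
start_age = 10_000            # years civilisation has existed so far
target = 1_000_000_000        # one billion years

doublings, age = 0, start_age
while age < target:
    age *= 2                  # each doubling is survived with probability 0.5
    doublings += 1

print(doublings)                  # 17 doublings needed (2^16 < 100,000 < 2^17)
print(f"{0.5 ** doublings:.4%}")  # ~0.0008%, i.e. roughly the 0.001% quoted above
```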

We could also try to estimate current extinction risk directly based on known risks. Most attempts I've seen at this suggest that 50% to make it through the next 10,000 years, AKA roughly 0.007% per year, is very generous. As I see it, this is because an object-level analysis of the risks suggests they are rising, not falling as the Lindy rule would imply. 
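
The annualisation above is just the following conversion (a sketch assuming a constant per-year risk):

```python
# If the chance of surviving the next 10,000 years is 50%, the implied
# constant annual extinction risk p satisfies (1 - p)^10_000 = 0.5.
p = 1 - 0.5 ** (1 / 10_000)
print(f"{p:.4%}")   # ~0.0069% per year, i.e. roughly 0.007%
```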

When I've expressed this point to people in the past, I've tended to get very handwavy (non-numeric) arguments about how a super-aligned AI could dramatically cut existential risk to the levels required; another way of framing the above is that, to envisage a plausible future with >1 billion years in expectation, annualised risk needs to fall to <0.0000001% in that future. Another thought is that space colonization could make humanity virtually invincible. So partly I'm wondering if there's a better-developed version of these arguments that accounts for the risks that would remain, or other routes to the same conclusion, since this assumption of a large future in expectation seems critical to a lot of longtermist thought. 
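
For the threshold quoted above, a sketch of the arithmetic under the simplifying assumption of a constant annual risk:

```python
# With a constant annual extinction risk p, the expected remaining lifetime is
# 1/p years (geometric distribution), so an expectation of a billion years
# requires p to fall to roughly one-in-a-billion per year.
target_expectation = 1_000_000_000          # years
max_annual_risk = 1 / target_expectation
print(f"{max_annual_risk:.7%}")             # 0.0000001% per year
```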

Paul_Christiano @ 2021-01-06T03:21 (+33)

Some scattered thoughts (sorry for such a long comment!). Organized in order rather than by importance---I think the most important argument for me is the analogy to computers.

  • It's possible to write "Humanity survives the next billion years" as a conjunction of a billion events (humanity survives year 1, and year 2, and...). It's also possible to write "humanity goes extinct next year" as a conjunction of a billion events (Alice dies, and Bob dies, and...). Both of those are quite weak prima facie justifications for assigning high confidence. You could say that the second conjunction is different, because the billionth person is very likely to die once the others have died (since there has apparently been some kind of catastrophe), but the same is true for survival. In both cases there are conceivable events that would cause every term of the conjunction to be true,  and we need to address the probability of those common causes directly. Being able to write the claim as a conjunction doesn't seem to help you get to extreme probabilities without an argument about independence.
  • I feel you should be very hesitant to assign 99%+ probabilities without a good argument, and I don't think this is about anchoring to percent. The burden of proof gets stronger and stronger as you move closer to 1, and 100 is getting to be a big number. I think this is less likely to be a tractable disagreement than the other bullets but it seems worth mentioning for completeness. I'm curious if  you think there are other natural statements where the kind of heuristic you are describing (or any other similarly abstract heuristic) would justifiably get you to such high confidences. I agree with Max Daniel's point that it doesn't work for realistic versions of claims like "This coin will come up heads 30 times in a row." You say that it's not exclusive to simplified models but I think I'd be similarly skeptical of any application of this principle. (More generally, I think it's not surprising to assign very small probabilities to complex statements based on weak evidence, but that it will happen much more rarely for simple statements. It doesn't seem promising to get into that though.)
  • I think space colonization is probably possible, though getting up to probabilities like 50% for space colonization feasibility would be a much longer discussion. (I personally think >50% probability is much more reasonable than <10%.) If there is a significant probability that we colonize space, and that spreading out makes the survival of different colonists independent (as it appears it would), then it seems like we end up with some significant probability of survival. That said, I would also assign ~1/2 probability to surviving a billion years even if we were confined to Earth. I could imagine being argued down to 1/4 or even 1/8 but each successive factor of 2 seems much harder. So in some sense the disagreement isn't really about colonization.
  • Stepping back, I think the key object-level questions are something like "Is there any way to build a civilization that is very stable?" and "Will people try?" It seems to me you should have a fairly high probability on "yes" to both questions. I don't think you have to invoke super-aligned AI to justify that conclusion---it's easy to imagine organizing society in a way which drives existing extinction risks to negligible levels, and once that's done it's not clear where you'd get to 90%+ probabilities for new risks emerging that are much harder to reduce. (I'm not sure which step of this you get off the boat for---is it that you can't imagine a world that say reduced the risk of an engineered pandemic killing everyone to < 1/billion per year? Or that you think it's very likely other much harder-to-reduce risks would emerge?)
  • A lot of this is about burden of proof arguments. Is the burden of proof on someone to exhibit a risk that's very hard to reduce, or someone to argue that there exists no risk that is hard to reduce? Once we're talking about 10% or 1% probabilities it seems clear to me that the burden of proof is on the confident person. You could try to say "The claim of 'no bad risks' is a conjunction over all possible risks, so it's pretty unlikely" but I could just as well say "The claim about 'the risk is irreducible' is a conjunction over all possible reduction strategies, so it's pretty unlikely" so I don't think this gets us out of the stalemate (and the stalemate is plenty to justify uncertainty).
  • I do furthermore think that we can discuss concrete (kind of crazy) civilizations that are likely to have negligible levels of risk, given that e.g. (i) we have existence proofs for highly reliable machines over billion-year timescales, namely life, (ii) we have existence proofs for computers if you can build reliable machinery of any kind, (iii) it's easy to construct programs that appear to be morally relevant but which would manifestly keep running indefinitely.  We can't get too far with this kind of concrete argument, since any particular future we can imagine is bound to be pretty unlikely. But it's relevant to me that e.g. stable-civilization scenarios seem about as gut-level plausible to me as non-AI extinction scenarios do in the 21st century.
  • Consider the analogous question "Is it possible to build computers that successfully carry out trillions of operations without errors that corrupt the final result?" My understanding is that in the early 20th century this question was seriously debated (though that's not important to my point), and it feels very similar to your question. It's very easy for a computational error to cascade and change the final result of a computation. It's possible to take various precautions to reduce the probability of an uncorrected error, but why think that it's possible to reduce that risk to levels lower than 1 in a trillion, given that all observed computers have had fairly high error rates? Moreover, it seems that error rates are growing  as we build bigger and bigger computers, since each element has an independent failure rate, including the machinery designed to correct errors. To really settle this we need to get into engineering details, but until you've gotten into those details I think it's clearly unwise to assign very low probability to building a computer that carries out trillions of steps successfully---the space of possible designs is large and people are going to try to find one that works, so you'd need to have some good argument about why to be confident that they are going to fail.
  • You could say that computers are an exceptional example I've chosen with hindsight. But I'm left wondering if there are any valid applications of this kind of heuristic--what's the reference class of which "highly reliable computers" are exceptional rather than typical?
  • If someone said: "A billion years is a long time. Any given thing that can plausibly happen should probably be expected to happen over that time period" then I'd ask about why life survived the last billion years.
  • You could say that "a billion years" is a really long time for human civilization (given that important changes tend to happen within decades or centuries) but not a long time for intelligent life (given that important changes take millions of years). This is similar to what happens if you appeal to current levels of extinction risk being really high. I don't buy this because life on earth is currently in a period of unprecedentedly rapid change. You should have some reasonable probability of returning to more historically typical timescales of hundreds of millions of years, which in turn gives you a reasonable overall probability of surviving for hundreds of millions of years. (Actually I think we should have >50% probabilities for reversion to slower timescales of change, since we can tell that the current period of rapid growth will soon be over. Over our history rapid change and rapid growth have basically coincided, so it's particularly plausible that a return to slow growth will also mean a return to slow change.)
  • Applying the rule of thumb for estimating lifetimes to "the human species" rather than "intelligent life" seems like it's doing a huge amount of work. It might be reasonable to do the extrapolation using some mixture between these reference classes (and others), but in order to get extreme probabilities for extinction you'd need to have an extreme mixture. This is part of the general pattern for why you don't usually end up with 99% probabilities for interesting questions without real arguments---you not only need a way of estimating that yields very high confidence, you also need to be very confident in that way of estimating.
  • You could appeal to some similar outside view to say "humanity will undergo changes similar in magnitude to those that have occurred over the last billion years;" I think that's way more plausible (though I still wouldn't believe 99%) but I don't think that it matters for claims about the expected moral value of the future.
  • The doomsday argument can plausibly arrive at very high confidences based on anthropic considerations (if you accept those anthropic principles with very high confidence). I think many long-termists would endorse the conclusion that the vast majority of observers like us do not actually live in a large and colonizable universe---not at 99.999999% but at least at 99%. Personally I would reject the inference that we probably don't live in a large universe because I reject the implicit symmetry principle. At any rate, these lines of argument go in a rather different direction than the rest of your post and I don't feel like it's what you are getting at.
AGB @ 2021-01-06T09:10 (+6)

Thanks for the long comment, this gives me a much richer picture of how people might be thinking about this. On the first two bullets:

You say you aren't anchoring, but in a world where we defaulted to expressing probability in 1/10^6 units called Ms, I'm just left feeling like you would write "you should be hesitant to assign 999,999M+ probabilities without a good argument. The burden of proof gets stronger and stronger as you move closer to 1, and 1,000,000 is getting to be a big number." So if it's not anchoring, what calculation or intuition is leading you to specifically 99% (or at least something in that ballpark), and would similarly lead you to roughly 990,000M with the alternate language?

My reply to Max and your first bullet both give examples of cases in the natural world where probabilities of real future events would go way outside the 0.01% - 99.99% range. Conjunctions force you to have extreme confidence somewhere; the only question is where. If I try to steelman your claim, I think I end up with the idea that we should apply our extreme confidence to the thing inside the product, due to correlated causes, rather than the thing outside; does that sound fair? 

The rest I see as an attempt to justify the extreme confidences inside the product, and I'll have to think about more. The following are gut responses:

I'm not sure which step of this you get off the boat for

I'm much more baseline cynical than you seem to be about people's willingness and ability to actually try, and try consistently, over a huge time period. To give some idea, I'd probably have assigned <50% probability to humanity surviving to the year 2150, and <10% for the year 3000, before I came across EA. Whether that's correct or not, I don't think it's wildly unusual among people who take climate change seriously*, and yet we almost certainly aren't doing enough to combat that as a society. This gives me little hope for dealing with the <10% threats that will surely appear over the centuries, and as a result I found and continue to find the seemingly-baseline optimism of longtermist EA very jarring.

(Again, the above is a gut response as opposed to a reasoned claim.)

Applying the rule of thumb for estimating lifetimes to "the human species" rather than "intelligent life" seems like it's doing a huge amount of work.

Yeah, Owen made a similar point, and actually I was using civilisation rather than 'the human species', which is 20x shorter still. I honestly hadn't thought about intelligent life as a possible class before, and that probably is the thing from this conversation that has the most chance of changing how I think about this.

*"The survey from the Yale Program on Climate Change Communication found that 39 percent think the odds of global warming ending the human race are at least 50 percent. "

Max_Daniel @ 2021-01-05T11:07 (+15)

I roughly think that there simply isn't very strong evidence for this. I.e. I think it would be mistaken to have a highly resilient large credence in extinction risk eventually falling to ~0.0000001%, humanity or its descendants surviving for a billion years, or anything like that.

[ETA: Upon rereading, I realized the above is ambiguous. With "large" I was here referring to something stronger than "non-extreme". E.g. I do think it's defensible to believe that, e.g. "I'm like 90% confident that over the next 10 years my credence in information-based civilization surviving for 1 billion years won't fall below 0.1%", and indeed that's a statement I would endorse. I think I'd start feeling skeptical if someone claimed there is no way they'd update to a credence below 40% or something like that.]

I think this is one of several reasons for why the "naive case" for focusing on extinction risk reduction fails. (Another example of such a reason is the fact that, for most known hazards, collapse short of extinction seems way more likely than immediate extinction, that as a consequence most interventions affect both the probability of extinction and the probability and trajectory of various collapse scenarios, and that the latter effect might dominate but has unclear sign.)

I think the most convincing response is a combination of the following. Note, however, that the last two points mostly argue that we should be longtermists despite the case for billion-year futures being shaky, rather than defending that case itself.

  • You are correct that within fixed models we can justifiably have extreme credences, e.g. for the probability of a specific result of 30 coin flips. However, I think the case for "modesty" - i.e. not ruling out very long futures - rests largely on model uncertainty, i.e. our inability to confidently identify the 'correct' model for reasoning about the length of the future.
    • For example, suppose I produce a coin from my pocket and ask you to estimate how likely it is that in my first 30 flips I get only heads. Your all-things-considered credence will be dominated by your uncertainty over whether my coin is strongly biased toward heads. Since 30 heads are vanishingly unlikely if the coin is fair, this is the case even if your prior says that most coins someone produces from their pocket are fair: "vanishingly unlikely" here is much stronger (in this case around 1 in 2^30, i.e. roughly one in a billion) than your prior can justifiably be, i.e. "most coins" might defensibly refer to 90% or 99% or 99.99% but not 99.9999999%. (A numerical sketch of this coin example appears just after this list.)
    • This insight that extremely low credences all-things-considered are often "forbidden" by model uncertainty is basically the point from Ord, Hillerbrand, & Sandberg (2008).
    • Note that I think it's still true that there is a possible epistemic state (and probably even a model we could write down now) that rules out very long futures with extreme confidence. The point just is that we won't be able to get to that epistemic state in practice.
    • Overall, I think the lower bound on the all-things-considered credence we should have in some speculative scenario often comes down to understanding how "fundamental" our model uncertainty is. I.e. roughly: to get to models that have practically significant credence in the scenario in question, how fundamentally would I need to revise my best-guess model of the world?
      • E.g. if I'm asking whether the LHC will blow up the world, or whether it's worth looking for the philosopher's stone, then I would need to revise extremely fundamental aspects of my world model such as fundamental physics - we are justified in having pretty high credences in those.
      • By contrast, very long futures seem at least plausibly consistent with fundamental physics as well as plausible theories for how cultural evolution, technological progress, economics, etc. work.
        • It is here, and for this reason, that points like "but it's conceivable that superintelligent AI will reduce extinction risk to near-zero" are significant.
      • Therefore, model  uncertainty will push me toward a higher credence in a very long future than in the LHC blowing up the world (but even for the latter my credence is plausibly dominated by model uncertainty rather than my credence in this happening conditional on my model of physics being correct).
  • Longtermism goes through (i.e. it looks like we can have most impact by focusing on the long-term) on much less extreme time scales than 1 billion.
    • Some such less extreme time scales have "more defensible" reasons behind them, e.g. outside view considerations based on the survival of other species or the amount of time humanity or civilization have survived so far. The Lindy rule prior you describe is one example.
  • There is a wager for long futures: we can have much more impact if the future is long, so these scenarios might dominate our decision-making even if they are unlikely.
    • (NB I think this is a wager that is unproblematic only if we have independently established that the probability of the relevant futures isn't vanishingly small. This is because of the standard problems around Pascal's wager.)
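
A numerical sketch of the coin example from the first bullet (the 1% prior on a rigged coin, and the assumption that a rigged coin essentially always lands heads, are purely illustrative):

```python
# All-things-considered probability of 30 heads in 30 flips, when you are
# unsure whether the coin is fair or strongly biased toward heads.
p_biased = 0.01                   # illustrative prior that the coin is rigged
p_fair = 1 - p_biased

p_30_heads_if_fair = 0.5 ** 30    # ~9.3e-10
p_30_heads_if_biased = 1.0        # assume a rigged coin ~always lands heads

total = p_fair * p_30_heads_if_fair + p_biased * p_30_heads_if_biased
print(total)   # ~0.01 -- dominated entirely by the model-uncertainty term
```

Even pushing the prior on a rigged coin down to 0.01% only moves the answer to ~10^-4 rather than ~10^-9, which is the sense in which the all-things-considered credence is bounded by model uncertainty rather than by the within-model probability.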

That all being said, my views on this feel reasonably but not super resilient - like it's "only" 10% I'll have changed my mind about this in major ways in 2 years. I also think there is room for more work on how to best think about such questions (the Ord et al. paper is a great example), e.g. checking that this kind of reasoning doesn't "prove too much" or leads to absurd conclusions when applied to other cases.

AGB @ 2021-01-05T20:21 (+4)

Thanks for this. I won't respond to your second/third bullets; as you say it's not a defense of the claim itself, and while it's plausible to me that many conclusions go through on much shorter timelines, I still want to understand the basis for the actual arguments made as best I can. Not least because if I can't defend such arguments, then my personal pitches for longtermism (both to myself and to others) will not include them; they and I will focus on the next e.g. 10,000 years instead. 

On your first bullet:

You are correct that within fixed models we can justifiably have extreme credences, e.g. for the probability of a specific result of 30 coin flips. However, I think the case for "modesty" - i.e. not ruling out very long futures - rests largely on model uncertainty...

...This insight that extremely low credences all-things-considered are often "forbidden" by model uncertainty is basically the point from Ord, Hillerbrand, & Sandberg (2008).

I'll go and read the paper you mention, but flagging that my coinflip example is more general than you seem to think. Probability theory has conjunctions even outside of simple fixed models, and it's the conjunction, not the fixed model, which is forcing you to have extreme credences. At best, we may be able to define a certain class of events where such credences are 'forbidden' (this could well be what the paper tries to do). We would then need to make sure that no such event can be expressed as a conjunction of a very large number of other such events. 

Concretely, P(Humanity survives one billion years) is the product of one million probabilities of surviving each millennium, conditional on having survived up to that point. As a result, we either need to set some of the intervening probabilities, like P(Humanity survives the next millennium | Humanity has survived to the year 500,000,000 AD), extremely high, or we need to set the overall product extremely low. Setting everything to the range 0.01% - 99.99% is not an option, without giving up on arithmetic or probability theory. And of course, I could break the product into a billion-fold conjunction where each component was 'survive the next year' if I wanted to make the requirements even more extreme. 
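
A one-line check of why 'everything in the 0.01% - 99.99% range' can't work here:

```python
# If each of the one million per-millennium survival probabilities were
# capped at 99.99%, the overall survival probability could be at most:
print(0.9999 ** 1_000_000)   # ~3.7e-44, i.e. vastly more extreme than 0.01%
```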

Note I think it is plausible such extremes can be justified, since it seems like a version of humanity that has survived 500,000 millennia really should have excellent odds of surviving the next millennium. Indeed, I think that if you actually write out the model uncertainty argument mathematically, what ends up happening here is that the fact that humanity has survived 500,000 millennia is massive, overwhelming Bayesian evidence that the 'correct' model is one of the ones that makes such a long life possible, allowing you to reach very extreme credences about the then-future. This is somewhat analogous to the intuitive extreme credence most people have that they won't die in the next second. 
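
A toy version of that Bayesian argument, with entirely made-up numbers for the two candidate models and the prior over them:

```python
import math

# Two toy models of per-millennium survival, with a prior that heavily
# favours the "fragile" one.
models = {
    "fragile": {"prior": 0.99, "p_survive": 0.5},
    "robust":  {"prior": 0.01, "p_survive": 0.999999},
}
millennia_survived = 500_000

# Work in log space: the likelihood of surviving that long under the fragile
# model is around 2^-500,000, which would underflow if computed directly.
log_post = {
    name: math.log(m["prior"]) + millennia_survived * math.log(m["p_survive"])
    for name, m in models.items()
}
best = max(log_post.values())
weights = {name: math.exp(lp - best) for name, lp in log_post.items()}
total = sum(weights.values())
print({name: w / total for name, w in weights.items()})
# -> essentially all posterior mass on "robust": having survived that long is
#    overwhelming evidence for whichever models make long survival possible.
```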

Max_Daniel @ 2021-01-06T17:01 (+4)

my coinflip example is more general than you seem to think. Probability theory has conjunctions even outside of simple fixed models, and it's the conjunction, not the fixed model, which is forcing you to have extreme credences. At best, we may be able to define a certain class of events where such credences are 'forbidden' (this could well be what the paper tries to do).

I agree with everything you say in your reply. I think I simply partly misunderstood the point you were trying to make and phrased part of my response poorly. In particular, I agree that extreme credences aren't 'forbidden' in general.

(Sorry, I think it would have been better if I had flagged that I had read your comment and written mine very quickly.)

I still think that the distinction between credences/probabilities within a model and credence that a model is correct is relevant here, for reasons such as:

  • I think it's often harder to justify an extreme credence that a particular model is right than it is to justify an extreme probability within a model.
    • Often when it seems we have extreme credence in a model this just holds "at a certain level of detail", and if we looked at a richer space of models that makes more fine-grained distinctions we'd say that our credence is distributed over a (potentially very large) family of models.
  • There is a difference between an extreme all-things-considered credence (i.e. in this simplified way of thinking about epistemics the 'expected credence' across models) and being highly confident in an extreme credence;
    • I think the latter is less often justified than the former. And again, if it seems that the latter is justified, I think it'll often be because an extreme amount of credence is distributed among different models, but all of these models agree about some event we're considering. (E.g. ~all models agree that I won't spontaneously die in the next second, or that Santa Claus isn't going to appear in my bedroom.)
  • When different models agree that some event is the conjunction of many others, then each model will have an extreme credence for some event, but the models might disagree about which events the credence is extreme for.
    • Taken together (i.e. across events/decisions), your all-things-considered credences might therefore look "funny" or "inconsistent" (by the lights of any single model). E.g. you might have non-extreme all-things-considered credence in two events based on two different models that are inconsistent with each other, and each of which rules out one of the events with extreme probability but not the other.

I acknowledge that I'm making somewhat vague claims here, and that in order to have anything close to a satisfying philosophical account of what's going on I would need to spell out what exactly I mean by "often" etc. (Because as I said I do agree that these claims don't always hold!)

Owen_Cotton-Barratt @ 2021-01-05T20:55 (+4)

Some fixed models also support macroscopic probabilities of indefinite survival: e.g. if in each generation each individual has a number of descendants drawn from a Poisson distribution with parameter 1.1, then there's a finite chance of extinction in each generation but these diminish fast enough (as the population gets enormous) that if you make it through an initial rocky period you're pretty much safe.
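
A quick numerical check of that claim (the Poisson offspring distribution with mean 1.1 is the parameter named above; the fixed-point iteration is a standard way of computing the extinction probability of such a branching process):

```python
import math

# Galton-Watson branching process with Poisson(1.1) offspring per individual.
# The extinction probability q of a single lineage is the smallest fixed point
# of the probability generating function G(s) = exp(lam * (s - 1)).
lam = 1.1
q = 0.0
for _ in range(5_000):        # iterating q <- G(q) from 0 converges to q
    q = math.exp(lam * (q - 1))

print(round(q, 3))            # ~0.82 chance that a single founding lineage dies out
# A population of N individuals only goes extinct if every lineage does, so
# its survival probability is ~ 1 - q**N, which rapidly approaches 1:
print(1 - q ** 100)           # ~0.999999996 for just 100 individuals
```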

That model is clearly too optimistic because it doesn't admit crises with correlated problems across all the individuals in a generation. But then there's a question about how high is the unavoidable background rate of such crises (i.e. ones that remain even if you have a very sophisticated and well-resourced attempt to prevent them).

On current understanding, I think the lower bounds for the rate of such exogenous events rely on things like false vacuum decay (and maybe gamma-ray bursts while we're local enough), and those lower bounds are really quite low, so it's fairly plausible that the true rate is really low (though it's also plausible it's higher, because there are risks that aren't observed/understood).

Bounding endogenous risk seems a bit harder to reason about. I think that you can give kind of fairytale/handwaving existence proofs of stable political systems (which might however be utterly horrific to us). Then it's at least sort of plausible that there would be systems which are simultaneously extremely stable and also desirable.

Max_Daniel @ 2021-01-06T09:18 (+2)

I won't respond to your second/third bullets; as you say it's not a defense of the claim itself, and while it's plausible to me that many conclusions go through on much shorter timelines, I still want to understand the basis for the actual arguments made as best I can. Not least because if I can't defend such arguments, then my personal pitches for longtermism (both to myself and to others) will not include them; they and I will focus on the next e.g. 10,000 years instead. 

To be clear, this makes a lot of sense to me, and I emphatically agree that understanding the arguments is valuable independently from whether this immediately changes a practical conclusion.

Owen_Cotton-Barratt @ 2021-01-05T21:11 (+10)

One argument goes via something like the reference class of global autopoietic information-processing systems: life has persisted since it started several billion years ago; multicellular life similarly; sexual selection similarly. Sure, species go extinct when they're outcompeted, but the larger systems they're part of have only continued to thrive.

The right reference class (on this story) is not "humanity as a mammalian species" but "information-based civilization as the next step in faster evolution". Then we might be quite optimistic about civilization in some meaningful sense continuing indefinitely (though perhaps not about particular institutions or things that are recognisably human doing so).

Max_Daniel @ 2021-01-06T18:22 (+4)

If I understand you correctly, the argument is not "autopoietic systems have persisted for billions of years" but more specifically "so far each new 'type' of such systems has persisted, so we should expect the most recent new type of 'information-based civilization' to persist as well".

This is an interesting argument I hadn't considered in this form.

(I think it's interesting because the case that it talks about a morally relevant long future is stronger than for the simple appeal to all autopoietic systems as a reference class. The latter includes many things that are so weird - like eusocial insects, asexually reproducing organisms, and potentially even non-living systems like autocatalytic chemical reactions - that the argument seems quite vulnerable to the objection that knowing that "some kind of autopoietic system will be around for billions of years" isn't that relevant. We arguably care about something that, while more general than current values or humans as a biological species, is narrower than that. 

[Tbc, I think there are non-crazy views that care at least somewhat about basically all autopoietic systems, but my impression is that the standard justification for longtermism doesn't want to commit itself to such views.])

However, I have some worries about survivorship bias: If there was a "failed major transition in evolution", would we know about it? Like, could it be that 2 billion years ago organisms started doing sphexual selection (a hypothetical form of reproduction that's as different from previous asexual reproduction as sexual reproduction but also different from the latter) but that this type of reproduction died out after 1,000 years - and similarly for sphexxual selection, sphexxxual selection, ... ? Such that with full knowledge we'd conclude the reverse from your conclusion above, i.e. "almost all new types of autopoietic systems died out soon, so we should expect information-based civilization to die out soon as well"?

(FWIW my guess is that the answer actually is "our understanding of the history of evolution is sufficiently good that together with broad priors we can rule out at least an extremely high number of such 'failed transitions'", but I'm not sure and so I wanted to mention the possible problem.)

Jonas Vollmer @ 2021-05-17T09:03 (+4)

If there were lots of failed major transitions in evolution, that would also update us towards there being a greater number of attempted transitions than we previously thought, which would in turn update us positively on information-based civilization emerging eventually, no? Or are you assuming that these would be too weird/different from homo sapiens such that we wouldn't share values enough?

Furthermore, sexual selection looks like a fairly simple and straightforward solution to the problem 'organisms with higher life expectancy don't evolve quickly enough', so it doesn't look like there's a lot of space left for any alternatives.

kierangreig @ 2021-01-04T20:19 (+10)

Here’s a relevant thread from ~5 years ago(!) where some people were briefly discussing points along these lines. I think it both illustrates some similar points and offers some quick responses to them. 

Please do hit 'see in context' to view some further responses there!

And agree, I would also like to further understand the arguments here :)

AGB @ 2021-01-05T10:52 (+4)

Thanks for the link. I did actually comment on that thread, and while I didn't have it specifically in mind it was probably close to the start of me asking questions along these lines.

Linch @ 2021-08-02T01:58 (+2)

To answer your linguistic objection directly: I think one reason/intuition I have for not trusting probabilities much above 99% or much below 1% is that the reference class of "fairly decent forecaster considers a novel, well-defined question for some time, and then becomes inside-view utterly confident in the result" empirically has a failure rate of somewhere between 0.1% and 5%.

For me personally, I think the rate is slightly under 1%, including failures from misreading a question (e.g. forgetting the "not") or not understanding the data source. 

This isn't decisive (I do indeed say things like giving <0.1% to direct human extinction from nuclear war or climate change this century), but it's a weak outside-view argument for why anchoring on 1%-99% is not entirely absurd, even if we lived in an epistemic environment where basis points or one-in-a-million probabilities were the default expressions of uncertainty. 

Put another way: if the best research to date on how humans assign probabilities to novel, well-defined problems is Expert Political Judgement, where political experts' "utter confidence" translates to a ~15% failure rate (and my personal anecdotal evidence lines up with the empirical results), then I'd say something similar about 10-90% being the range of "reasonable" probabilities even if we used percentage-point-based language.