What is existential security?
By MichaelA @ 2020-09-01T09:40 (+34)
Summary
In The Precipice, Toby Ord defines an existential risk as "a risk that threatens the destruction of humanity's longterm potential". This could involve extinction, an unrecoverable collapse, or an unrecoverable dystopia. (See also.)
Ord uses the term existential security to refer to "a place of safety - a place where existential risk is low and stays low". This doesn't require reaching a state with zero existential risk per year. But it requires that existential risk per year either (a) indefinitely trends downwards (on average), or (b) is extremely low and roughly stable. This is because even a very low but stable risk per year can practically guarantee existential catastrophe happens at some point, given a long enough time.[1]
My own one-sentence description of existential security would therefore be: A state where the total existential risk across all time is low, such that humanity's long-term potential is preserved and protected.[2]
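To make that point concrete, here is a minimal sketch of my own (not from The Precipice; the function name and numbers are purely illustrative): under a constant, independent risk per century, the chance of avoiding catastrophe shrinks exponentially with time.

```python
# Illustrative only: survival under a constant, independent per-century risk.
def survival_probability(risk_per_century: float, centuries: int) -> float:
    """Chance of never suffering an existential catastrophe over `centuries`
    periods, assuming the same independent risk in every period."""
    return (1 - risk_per_century) ** centuries

print(survival_probability(0.01, 100))     # ~0.37 after 100 centuries
print(survival_probability(0.01, 10_000))  # ~2e-44 after 10,000 centuries
```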
Purpose and epistemic status of this post
This post consists primarily of quotes from The Precipice, along with some commentary. I hope this post can:
- Serve as a summary of the concept of existential security for people who haven't read The Precipice.
- Serve as an online summary that can be linked to.
- Set the stage for my next few planned posts, which are related to the "grand strategy for humanity" that Ord presents in The Precipice. Ord summarises his "grand strategy" as follows:
I think that at the highest level we should adopt a strategy proceeding in three phases:
- Reaching Existential Security
- The Long Reflection
- Achieving Our Potential
Preserving and protecting our long-term potential
Ord writes that reaching existential security means reaching:
a place of safety - a place where existential risk is low and stays low.
[... This] has two strands. Most obviously, we need to preserve humanity's potential, extracting ourselves from immediate danger so we don't fail before we've got our house in order. This includes direct work on the most pressing existential risks and risk factors, as well as near-term changes to our norms and institutions.
But we also need to protect humanity's potential - to establish lasting safeguards that will defend humanity from dangers over the longterm future, so that it becomes almost impossible to fail. Where preserving our potential is akin to fighting the latest fire, protecting our potential is making changes to ensure that fire will never again pose a serious threat. This will involve major changes to our norms and institutions (giving humanity the prudence and patience we need), as well as ways of increasing our general resilience to catastrophe. This needn't require foreseeing all future risks right now. It is enough if we can set humanity firmly on a course where we will be taking the new risks seriously: managing them successfully right from their onset or sidestepping them entirely.
[...]
Ultimately, existential security is about reducing total existential risk by as many percentage points as possible. Preserving our potential is helping lower the portion of the total risk that we face in the next few decades, while protecting our potential is helping lower the portion that comes over the longer run. We can work on these strands in parallel, devoting some of our efforts to reducing imminent risks and some to building the capabilities, institutions, wisdom and will to ensure that future risks are minimal.
Ord's distinction between "lower[ing] the portion of the total risk that we face in the next few decades" and "lower[ing] the portion that comes over the longer run" seems useful to me.[3]
Elsewhere, Ord alludes to the same distinction when he writes that we could characterise reaching existential security as requiring that we "Avoid failing immediately & make it impossible to fail".
Continually declining levels of risk
However, "make it impossible to fail" seems to be overstating things (presumably in an understandable effort to summarise the essence of the idea). As Ord himself writes:
Note that existential security doesn't require the risk to be brought down to zero. That would be an impossible target, and attempts to achieve it may well be counter-productive. What humanity needs to do is bring this century's risk down to a very low level, then keep gradually reducing it from there as the centuries go on. In this way, even though there may always remain some risk in each century, the total risk over our entire future can be kept small. We could view this as a form of existential sustainability. Futures in which accumulated existential risk is allowed to climb towards 100 percent are unsustainable. So we need to set a strict risk budget over our entire future, parcelling out this non-renewable resource with great care over the generations to come.
He further writes:
A numerical example may help explain this. First, suppose we succeeded in reducing existential risk down to 1% per century and kept it there. This would be an excellent start, but it would have to be supplemented by a commitment to further reduce the risk. Because at 1% per century, we would only have another 100 centuries on average before succumbing to existential catastrophe. This may sound like a long time, but it is just 5% of what we've survived so far and a tiny fraction of what we should be able to achieve.
In contrast, if we could continually reduce the risk in each century, we needn't inevitably face existential catastrophe. For example, if we were to reduce the chance of extinction by a tenth each successive century (1%, 0.9%, 0.81% . . .), there would be a better than 90% chance that we would never suffer an existential catastrophe, no matter how many centuries passed. For the chance we survive all periods is:
(100% - 1%) × (100% - 0.9%) × (100% - 0.81%) × . . .
≈ 90.4598%
This means there would be a better than 90% chance we survive until we reach some external insurmountable limit - perhaps the death of the last stars, the decay of all matter into energy, or having achieved everything possible with the resources available to us.
Such a continued reduction in risk may be easier than one would think. If the risks of each century were completely separate from those of the next, this would seem to require striving harder and harder to reduce them as time goes on. But there are actions we can take now that reduce risks across many time periods. For example, building understanding of existential risk and the best strategies for dealing with it; or fostering civilisational prudence and patience; or building institutions to investigate and manage existential risk. Because these actions address risks in subsequent time periods as well, they could lead to a diminishing risk per century, even with a constant amount of effort over time. In addition, there may just be a limited stock of novel anthropogenic risks, such that successive centuries don't keep bringing in new risks to manage. For example, we may reach a technological ceiling, such that we are no longer introducing novel technological risks.
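Ord's figure of roughly 90.46% can be checked directly. Here is a minimal sketch of my own (not from the book) that multiplies out the declining per-century survival probabilities:

```python
# Rough check of Ord's example: risk starts at 1% per century and falls by a
# tenth each century (1%, 0.9%, 0.81%, ...). Truncating the infinite product
# after many terms is fine, since the remaining risk becomes negligible.
survival = 1.0
risk = 0.01
for _ in range(10_000):
    survival *= 1 - risk
    risk *= 0.9  # the risk falls by a tenth each successive century

print(f"{survival:.4%}")  # ~90.4598%, matching the figure Ord gives
```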
So when Ord writes that existential security is "a place where existential risk is low and stays low", it seems that he means a place where the total existential risk across all time "stays low". He implies that this requires the risk per unit of time to indefinitely trend downwards, rather than merely being brought to a low and stable level.[4]
What about non-declining but extremely low risk levels?
That said, it seems possible that we could achieve low total risk across all time even if risk levels do not indefinitely trend downwards, as long as we reach extremely low risk levels. For example, suppose that, in a trillion years, we'd reach "some external insurmountable limit [such as] the death of the last stars". And suppose we reduce existential risk to 1 in 100 trillion per year, and then don't reduce existential risk any further. In that case, I believe the total risk, across all time, would be around 1%.
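As a rough check of that figure (my own back-of-the-envelope calculation, using the hypothetical numbers from the paragraph above):

```python
import math

# Hypothetical numbers from the paragraph above: a constant risk of
# 1 in 100 trillion per year, sustained for a trillion years.
risk_per_year = 1e-14
years = 1e12

# Work in log space to keep precision with such a tiny per-year risk.
total_risk = 1 - math.exp(years * math.log1p(-risk_per_year))
print(f"{total_risk:.3%}")  # ~0.995%, i.e. around 1%
```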
I'm guessing that Ord omitted mention of such possibilities due to:
- A desire for brevity and simplicity;
- Uncertainty about whether or when we'd reach an "external insurmountable limit"; and/or
- The difficulty of imagining bringing existential risk per year down to such extremely low levels.
(Alternatively, my reasoning may be flawed.)
Closing remarks
Existential security can be summarised as referring to a state where the total existential risk across all time is low, such that humanity's long-term potential is preserved and protected. Moving towards such a state seems highly valuable and urgent. Indeed, the first phase of Ord's grand strategy for humanity consists of reaching existential security.
I plan to soon publish a series of posts which build on this one by discussing:
- What types of futures remain possible if we reach existential security, including futures in which humanity does not "[fulfil] its potential: achieving something close to the best future open to us"
- How likely it is that humanity will achieve its potential, as long as existential security is reached
- Arguments for and against Ord's grand strategy
- A typology of strategies for influencing the future
This post is related to my work with Convergence Analysis. My thanks to David Kristoffersson for useful comments on an earlier draft.
This is one of a series of posts I've written or plan to write that summarise, comment on, or take inspiration from parts of The Precipice. You can find a list of all such posts here.
[1] We could also choose to focus on the risk per any other unit of time (e.g., century).
[2] Shortly after the release of The Precipice, an alternative (though related) meaning of the term "existential security" was introduced in a paper by Nathan Alexander Sears. Sears uses the term to refer to "a new framework for security policy [...] that puts the survival of humanity at its core". That meaning of "existential security" is not the focus of this post.
[3] That said, using the terms "preserving" and "protecting" to refer to those two concepts, in that order, doesn't seem intuitive to me.
[4] But note that it is only necessary for the risk per unit of time to tend to decline over time, not for the risk per unit of time to decline at every single time step. That is, it could be possible to have reached existential security even if existential risk sometimes ticks upwards slightly, as long as the overall trend is downward.
MichaelA @ 2020-09-01T09:44 (+4)
Unimportant bonus info about the history of the term/concept "existential security", and of this post:
It seems that concepts corresponding to what Ord calls "existential security", or something similar, had been discussed under various names by various authors for several years prior to the release of The Precipice. But there didn't seem to be any really detailed discussion of the concept until The Precipice.
And the term "existential security" had almost never been used for this concept, based on the first two pages of results when I googled '"existential security" "existential risk"' in February 2020 (before The Precipice was released). The only really relevant result was Will MacAskill, in a 2018 podcast interview, saying "The first [stage] is to reduce extinction risks down basically to zero, put us a position of kind of existential security". Most results were just things calling climate change an "existential security risk".
I was doing that googling in February because I was pretty sure I'd heard of this concept, but I couldn't find any proper write-up on it, and thus decided I might write a post about it. I was intending to use the term "existential security", but to also suggest the terms "existential safety" and "existential stability" as options. But then I decided that, as The Precipice would be released a month later, I'd hold off till I read that, in case Ord discussed this idea.
And indeed, it turned out Ord discussed this concept thoroughly and well, and using the term I'd been leaning towards.[1]
But there was no summary of Ord's conceptualisation of existential security on the EA Forum or LessWrong. So I decided to adapt my draft into such a summary, as well as a discussion of how this concept relates to other terms and concepts. And then I later abandoned the idea of comparing the concept to other terms and concepts, though you can find my unpolished notes on that here.
[1] I'm unsure whether this is a result of:
- me independently converging on the same idea (perhaps primed by MacAskill's one mention of the term), or
- the idea having been occasionally discussed verbally in ways that reached me - but that I've since forgotten - despite the idea having not been on the internet yet.