Interstellar travel will probably doom the long-term future
By JordanStone @ 2025-06-18T11:34 (+125)
Once we expand to other star systems, we may begin a self-propagating expansion of human civilisation throughout the galaxy. However, there are existential risks potentially capable of destroying a galactic civilisation, like self-replicating machines, strange matter, and vacuum decay. Without an extremely widespread and effective governance system, the eventual creation of a galaxy-ending x-risk seems almost inevitable due to cumulative chances of initiation over time across numerous independent actors. So galactic x-risks may severely limit the total potential value that human civilisation can attain in the long-term future. The requirements for a governance system to prevent galactic x-risks are extremely demanding, and it needs to be in place before interstellar colonisation is initiated.
Introduction
I recently came across a series of posts from nearly a decade ago, starting with a post by George Dvorsky in io9 called “12 Ways Humanity Could Destroy the Entire Solar System”. It’s a fun post discussing stellar engineering disasters, the potential dangers of warp drives and wormholes, and the delicacy of orbital dynamics.
Anders Sandberg responded to the post on his blog and assessed whether these solar system disasters represented a potential Great Filter to explain the Fermi Paradox, concluding that they did not[1]. However, x-risks to solar system-wide civilisations were certainly possible.
Charlie Stross then made a post where he suggested that some of these x-risks could destroy a galactic civilisation too, most notably griefers (von Neumann probes). The fact that it only takes one colony among many to create griefers means that the dispersion and huge population of galactic civilisations[2] may actually be a disadvantage in x-risk mitigation.
In addition to getting through this current period of high x-risk, we should aim to create a civilisation that is able to withstand x-risks for as long as possible so that as much of the value[3] of the universe can be attained as possible. X-risks that would destroy a spacefaring civilisation are important considerations for forward-thinking planning related to long-term resilience. Our current activities, like developing AGI and expanding into space, may have a large foundational impact on the long-term trajectory of human civilisation.
So I've investigated x-risks with the capacity to destroy a galactic civilisation[4] ("galactic x-risks"), defined here as an event capable of destroying the long-term potential of an arbitrarily large spacefaring civilisation[5]. A galactic x-risk would be a huge moral catastrophe. First, that's a lot of death. Second, once a civilisation has overcome the barriers required to become a galactic civilisation, their long-term potential in expectation is probably much higher than a civilisation on one planet (i.e., a pre-Precipice or pre-Great Filter civilisation). So, the loss of future value may be much greater[6].
Existential risks to a Galactic Civilisation
I'll start by ruling out the risks that I believe are limited in scope to a one planet civilisation, and then to a small spacefaring civilisation. Don't worry too much about the categories/subheadings here. These threats could be in multiple categories depending on their severity, and, if artificial, their design and deployment strategy[7]. The threats in the "Galactic Existential Risks" section will be the most relevant to the long-term resilience of a galactic civilisation.
I have not classified existential risks into artificial and natural because we’re dealing with arbitrarily advanced technology. So any natural event that we know can occur, could conceivably be induced by arbitrarily advanced technology, or at least the probability of its occurrence could be increased artificially.
Threats Limited to a One Planet Civilisation
The threats we will escape by becoming spacefaring are those that drastically change a planet’s atmospheric conditions without spreading to space. Included in this group are volcanic eruptions[8], geoengineering disasters[9], atmospheric positive feedback loops[10], the release of highly toxic molecules/chemical warfare[11], nuclear war[12], and an asteroid or comet impact[13]. If we spread outside of our own atmosphere, other settlements can survive and hopefully safeguard the long-term potential of human civilisation[14].
Threats to a small Spacefaring Civilisation
Some existential risks can propagate through space and potentially destroy civilisations across multiple star systems. In contrast, threats that are not self-propagating or are limited in range can eventually be overcome by spreading throughout the galaxy. These threats, therefore, only pose existential risks during a relatively brief phase in the expansion of human civilisation in space.
The longest list of threats in this transitional phase comes from stars. Firstly, a civilisation's host star may be the source of x-risks like superflares[15], stellar mass loss events[16], increasing stellar luminosity or volume, or stellar engineering disasters[17]. Alternatively, localised existential risks could come from natural energetic events of other stars[18], like supernova explosions, magnetar flares, pulsar beams, kilonova winds[19], and M-dwarf megaflares. These energetic stellar events would affect multiple star systems, with a range of impacts depending on the type of event and its distance.
There are also all sorts of cosmic phenomena that could be encountered by a civilisation occupying one or multiple solar systems (threat-dependent) that would likely destroy them. These include wormholes[20], primordial black holes[21], cosmic strings[22], domain walls[23], interstellar clouds[24], and supernovae remnants[25].
Alternatively, any artificial or natural[26] alterations to the orbits of planets could conceivably destroy a spacefaring civilisation around one star by altering UV radiation influx or inadvertently causing planet collisions[27].
Giant artificial structures in space like Dyson spheres, orbital rings, sunshades, Shkadov thrusters, or space particle accelerators could conceivably end a spacefaring civilisation too. Their immense energy demands, gravitational influence, or stored kinetic potential make them capable of causing catastrophic failure modes, like stellar destabilisation, orbital cascades, or directed energy misfires[28]. Additionally, interstellar weapons like high-powered lasers and particle beams may be used to annihilate a civilisation occupying a solar system from very far away[29].
The most devastating localised existential risks are galactic core explosions and quasars[30]. A quasar could generate a galactic superwind from the center of a galaxy, stripping away atmospheres as it expands, sterilising large regions of the inner galaxy.
Galactic Existential Risks
This is a list of threats that could conceivably destroy the long-term potential of a whole galactic civilisation, or plausibly even a civilisation occupying multiple galaxies. Preventing any of these from occurring[31] is essential to the long-term resilience of a galactic civilisation.
Self-replicating machines
If you’re worried about alien invasions, then von Neumann probes are a great bet. If they're advanced enough, they can self-replicate indefinitely, travel at near light speed, detect life throughout the galaxy and systematically eliminate it. It has been suggested that von Neumann probes could destroy a galactic civilisation[32]:
all it takes is one civilization of alien ass-hat griefers who send out just one von Neumann Probe programmed to replicate, build N-D lasers, and zap any planet showing signs of technological civilization, and the result is a galaxy sterile of interplanetary civilizations until the end of the stelliferous era (at which point, stars able to power an N-D laser will presumably become rare).
I think the key here is self-replication. If technologically feasible, self-replicating interstellar lasers are much worse than interstellar lasers. Presumably, to annihilate a galactic civilisation, the creator of the von Neumann probe would also be eliminating themself. But I think that releasing von Neumann probes that kill most civilisations and prevent others from emerging is sufficient to destroy most of the long-term potential of a galactic civilisation[33]. In the words of Charlie Stross, “it only has to happen once, and it f***s everybody".
I'm also including nanotechnology in the self-replicating machines section. There are definitely motivations to decrease the size of a von Neumann probe. If the probes are adaptive, then nanotechnology might be an effective strategy at times. So the same overview above applies to nanotechnology.
Biological machines (i.e., engineered pandemics) are also self-replicating machines, but are probably not an effective strategy for deployment in space. One could argue that weaponised bioengineered pandemics could be a threat to a spacefaring civilisation if they were adaptive[34], engineered for maximum spread and lethality, and deployed artificially across the galaxy[35].
Strange matter
Strange matter has been theorised to exist as a more stable form of matter, and it may be able to turn normal matter into strange matter on contact. Strange matter is predicted to form when atoms are compressed beyond a critical density, dissociating the protons and neutrons into quarks, creating quark matter and strange matter. These conditions might be reached during a collision between two neutron stars, and strangelets (small droplets of strange matter) may be released. Alternatively, extremely advanced technology may be used to create strange matter in the long-term future.
If a strangelet hit the Earth, we'd all be converted into strange matter, and strangelets would be created and dispersed. Eventually this process of conversion and spread could destroy a galactic civilisation.
Strange matter is probably a solvable risk for an extremely advanced civilisation. With huge amounts of energy, they may be able to create gravitational waves to act as a repulsor beam. But still, a huge amount of long-term value in the universe may be lost if the strange matter is not contained rapidly.
Vacuum decay
Vacuum decay is a hypothetical scenario in which a more stable vacuum state exists than our current "false vacuum". Calculations indicate that the probability of spontaneous vacuum decay is exceedingly low on cosmological timescales. However, local decay might occur if enough energy is concentrated in a small volume, or the fundamental fields are otherwise manipulated into a configuration that relaxes to the true vacuum rather than to our metastable vacuum. To quote Nick Bostrom[36]:
This would result in an expanding bubble of total destruction that would sweep through the galaxy and beyond at the speed of light, tearing all matter apart as it proceeds.
Developments in the standard model of particle physics (especially up to high energy scales) should eventually tell us whether or not vacuum decay is possible. If it is, then we might not have enough time to create a galactic civilisation anyway.
Subatomic Particle Decay
Very speculative grand unified theories in the 1970s (e.g. SU(5)[37] and SO(10)) implied that subatomic particles can decay[38], meaning all matter is ultimately unstable. For example, the proton may decay via an interaction with a magnetic monopole[39] or via pathways involving virtual black holes and Hawking radiation[40]. Additionally, neutrons may decay[41] via pathways like neutron-antineutron oscillations or by leaking into other dimensions[42]. However, these processes mainly act locally, so they are unlikely to be galaxy-ending scenarios unless they were self-propagating reactions[43] or another fundamental process altered physics (either artificially or naturally over time) to allow the decay to occur across the universe[44].
Time travel
In general relativity, if it's possible to create spacetime-warping structures like cosmic strings and wormholes, then travel back in time is not definitely impossible. There are rebuttals, most notably the chronology protection conjecture, which argues that quantum effects prevent closed timelike curves. But, generally, it seems impossible to know whether time travel is actually possible until a theory of quantum gravity uniting quantum mechanics and general relativity has been developed[45]. If time travel is possible, then (depending on your personal favourite theory of time travel) a galactic civilisation might be destroyed by itself before it is created.
Fundamental Physics Alterations
There are so many different fundamental constants and properties that are highly precise and necessary to the existence of life. With extremely developed physics and arbitrarily advanced technology, if any of them are possible to alter at a large scale, then life in the universe is at risk. Additionally, it has been suggested that fundamental constants may drift over time in accordance with the age of the universe. The fundamental constants include the gravitational constant[46], strong coupling constant[47], Planck’s constant[48], the cosmological constant[49], and the fine structure constant[50]. Other things that would be very bad if they broke down include color confinement[51], quark-lepton unification[52], and the speed of light.
Interactions with Other Universes
If there are other universes or our universe is a sub-universe[53], then everything could end quite abruptly.
According to brane theory, other universes exist in non-visible dimensions, but are able to collide with our universe (brane collision)[54]. One of these collisions might have ignited the Big Bang. If a brane collided with our universe again, it could initiate another Big Bang-like event and reset our universe.
Based on the eternal inflation theory, our universe is one of many bubbles that form through quantum tunnelling within an inflating space. Bubbles expand at near-light speed and may collide with each other, causing observable effects or even catastrophic consequences. A collision with another bubble could even initiate vacuum decay or create domain walls.
Alternatively, our universe could actually exist in the black hole of another universe. Black holes contain spacetime singularities, which break down everything we know about space and time, so space and time may switch roles. From the outside, a black hole is infinitely large in time (i.e., it may continue existing almost indefinitely), but from the inside it is infinitely large in space. That means a black hole could contain a whole universe... our universe. This might not actually change much about our understanding of the universe. However, the collapse of a black hole universe into a singularity would probably end our universe in a similar way to the Big Crunch[55]. Alternatively, our host black hole might just evaporate slowly by releasing Hawking radiation; this would normally take around 10^67 years, far longer than the current age of our universe (1.38 × 10^10 years).
Finally, (I say in a slow rhythmic voice) if the universe is a simulation and the simulation was turned off, a civilisation at any scale in our universe would be destroyed.
Societal Collapse or Loss of Value
Galactic civilisations may fizzle out due to societal collapse or loss of their moral value. Societal collapse may occur via predator prey dynamics[56], galactic tyranny, weakness by homogeneity[57], or an interstellar coordination breakdown. A loss of moral value might come about by an outcompeting of sentient beings by non-sentient posthumans or artificial intelligence, the creation of s-risks, or a gradual drift of values.
Artificial Superintelligence
Artificial superintelligence can use all of the above to kill a galactic civilisation.
I think the creation of superintelligent AI may always be an x-risk, even if alignment has been solved previously somewhere else in the universe.
While war has typically favoured the defender, destructive capabilities have always been more powerful than defensive or constructive capabilities. While countries have been created over thousands of years, any country on Earth could be destroyed within the next hour by nuclear weapons. So I would assume that, at the limits of technology, destructive capabilities will eventually be so great as to defeat any defences[58], vacuum decay being the prime example.
So even if an aligned superintelligence exists in a galactic civilisation, then, assuming new civilisations are able to emerge independently of it, the creation of an evil superintelligence somewhere else is probably still an x-risk, as its destructive nature would make it innately more powerful than the benevolent AI. Especially if the evil AI has no interest in self-preservation (e.g., it was created to eliminate astronomical suffering), it could initiate any number of x-risks that there is no defence against, like vacuum decay, N-D lasers, von Neumann probes equipped with N-D lasers, strange matter, or likely many more things that I couldn't even conceive of.
One potential solution to prevent an evil superintelligence from emerging is to have an aligned superintelligence that is able to observe all activities across a galaxy (likely requiring independently acting but coordinated superintelligences to mitigate communication delays). This way, the emergence of a new superintelligence could be prevented, along with all the other galactic x-risks on the list. More on this in the last section as there are obvious downsides to address.
Conflict with alien intelligence
There are essentially infinite ways an alien civilisation with arbitrarily advanced technology might choose to eliminate a galactic civilisation[59]. Just pick any item on the galactic existential risks list and insert before it "aliens decide to use...", or pick anything in the previous section and insert "aliens decide to use ____ millions of times".
I think aliens are more dangerous than superintelligence, firstly, because aliens could create superintelligence. Secondly, we may have no control over their creation or what they might do. They could just initiate vacuum decay right now and we wouldn't even see it coming as it approaches at light speed. More on aliens in the very uplifting section entitled: "If aliens exist, there is no long-term future".
Unknowns
Assuming that humanity right now is nowhere near the limits of technological and scientific knowledge, a post-ASI civilisation could expand the number of galactic x-risks drastically. If you asked a 21st-century PhD student (me) and a superintelligent AI to list all possible galaxy-ending existential risks, what percentage of the AI's total list would the PhD student be able to name? Asking us to list galaxy-ending x-risks today might be like asking an ancient Roman to predict how artificial intelligence could collapse 21st-century civilisation - the concepts simply don’t exist yet. I wouldn't be so bold as to claim I could name more than 50% of the galactic x-risks.
Figuring out how many unknown unknowns there are is really hard. The sample size of galactic x-risks isn't really big enough to say anything confidently about a rate of discovery, especially since I don't know which are actually real. So I've listed 100 cosmic threats[60] (anything arising from space that could destroy Earth, as a proxy for galactic existential risks), summed up how many were discovered in each decade, and plotted that on a bar graph[61]:
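For transparency about how the bar graph was produced, here is a minimal sketch of the counting step, assuming the linked spreadsheet is exported as a CSV; the file name and the "Decade discovered" column are my own placeholders, not necessarily what's in the sheet:

```python
# Minimal sketch: count cosmic threats by decade of discovery and draw a bar chart.
# "cosmic_threats.csv" and the "Decade discovered" column are placeholder names.
import csv
from collections import Counter

import matplotlib.pyplot as plt

with open("cosmic_threats.csv", newline="") as f:   # CSV export of the linked spreadsheet
    decades = [row["Decade discovered"] for row in csv.DictReader(f)]

counts = sorted(Counter(decades).items())           # (decade, count) pairs in chronological order

plt.bar([decade for decade, _ in counts], [n for _, n in counts])
plt.xlabel("Decade of discovery")
plt.ylabel("Number of cosmic threats identified")
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()
```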
Aside from this just being super interesting, there are signs we may be reaching diminishing returns - it looks like we've passed the peak of a bell curve[62]. However, unexpected discovery classes remain possible, and the discovery rate may go up rapidly if there is an intelligence explosion. How high it could go is pure speculation, I think. There is so much that we know we don't know about the universe, so there must be so much that we don't know we don't know. But playing it safe, I'd assume that I've listed less than 50% of the actual number of galactic x-risks, and we should probably act as if we're certain that there are many galactic x-risks.
What is the probability that galactic x-risks I listed are actually possible?
For an x-risk on the list to actually be a galactic x-risk, it must be real and capable of destroying a galactic civilisation. Here are my best guesses for each of the galactic x-risks[63]. For each, I have given my percentage probability of the threat actually being a thing that could happen in the long-term future (i.e., the laws of physics permit its existence) and of the threat being capable of ending a galactic civilisation if it was initiated. Threats resolving at 100% on either count get a tick.
| Galactic x-risk | Is it possible? | Would it end Galactic civ? |
| --- | --- | --- |
| Self-replicating machines | 100% ✅ | 75% ❌ |
| Strange matter | 20%[64] ❌ | 80% ❌ |
| Vacuum decay | 50%[65] ❌ | 100% ✅ |
| Subatomic particle decay | 10%[64] ❌ | 100% ✅ |
| Time travel | 10%[64] ❌ | 50% ❌ |
| Fundamental Physics Alterations | 10%[64] ❌ | 100% ✅ |
| Interactions with other universes | 10%[64] ❌ | 100% ✅ |
| Societal collapse or loss of value | 10% ❌ | 100% ✅ |
| Artificial superintelligence | 100% ✅ | 80% ❌ |
| Conflict with alien intelligence | 75% ❌ | 90% ❌ |
Reassuringly, none of these have two ticks in my estimation. However, combined, I think this list represents a threat that is extremely likely to be real and capable of ending a galactic civilisation.
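To make "combined" concrete, here is a minimal back-of-envelope sketch in Python using the guesses from the table above, treating the threats as independent (a simplification, since several of them share underlying physics):

```python
# Chance that at least one listed threat is both physically possible and capable of
# ending a galactic civilisation, using the table's guesses and assuming independence.
risks = [
    ("Self-replicating machines",          1.00, 0.75),
    ("Strange matter",                     0.20, 0.80),
    ("Vacuum decay",                       0.50, 1.00),
    ("Subatomic particle decay",           0.10, 1.00),
    ("Time travel",                        0.10, 0.50),
    ("Fundamental physics alterations",    0.10, 1.00),
    ("Interactions with other universes",  0.10, 1.00),
    ("Societal collapse or loss of value", 0.10, 1.00),
    ("Artificial superintelligence",       1.00, 0.80),
    ("Conflict with alien intelligence",   0.75, 0.90),
]

p_none = 1.0
for _, p_possible, p_ends_civ in risks:
    p_none *= 1.0 - p_possible * p_ends_civ   # this threat is NOT a true galactic x-risk

print(f"P(at least one true galactic x-risk) = {1.0 - p_none:.3f}")   # ~0.996 with these numbers
```

With these guesses, the combined probability comes out at around 99.6%, which is what the rest of the argument rests on.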
What is the probability that an x-risk will occur?
What are the factors?
There are multiple factors that simultaneously increase and decrease x-risk as a civilisation expands in space:
- Dispersion: As a colony expands into space, it becomes more dispersed, which reduces the probability that a single x-risk will destroy everyone. However, dispersion across interstellar space makes effective governance increasingly challenging due to the huge communication time lags[66].
- Population size: Similar to dispersion, the more sentient beings that exist, the higher the probability of survival. But it also becomes more likely that one of those individuals or civilisations will cause an x-risk - it only takes one to initiate a galactic x-risk.
- Resource availability: As we expand into space, we are able to take advantage of more and more energy to simultaneously create much greater weapons and defences against x-risks.
Cumulative Chances
The initiation of a galactic x-risk could be motivated by a desire to end ongoing astronomical suffering across a galaxy, which I think is the most likely scenario. But generally, I think it doesn't really matter what motivations there are, because even very small probabilities of an event occurring can become extremely large given enough time. Assuming that galactic x-risks are possible (i.e., they exist and can destroy a galactic civilisation), cumulative chances over time and space make the probability of a galactic x-risk occurring nearly 100%.
If an event has some probability p > 0 of occurring within a given time period, then the probability that it has occurred at least once will eventually approach 100% as ever longer time periods are considered (the green line in the graph). Additionally, the probability of the event having occurred increases with the number of actors who are capable of initiating it (as indicated by the other two lines in the graph). This graph is adapted from a talk that Toby Ord gave at EAG London 2025 on forecasting over long time periods:
So, for a galactic civilisation[67], even if the probability of a star system inducing a galactic x-risk is very low, the probability that a galactic x-risk will eventually be triggered is effectively 100%. This poses a potentially enormous threat to the total value that human civilisation could gain from the universe.
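As a minimal illustration of the cumulative-chances point (the numbers below are arbitrary placeholders, not figures from Toby Ord's talk): with a per-actor, per-period initiation probability p and N independent actors, the probability that at least one initiation has happened after t periods is 1 - (1 - p)^(N·t), which approaches 1 for any p > 0:

```python
# Cumulative probability of at least one galactic x-risk being initiated.
# The values of p, N, and t below are arbitrary placeholders purely for illustration.
def p_initiated(p_per_century: float, n_actors: int, centuries: float) -> float:
    """P(at least one actor has initiated a galactic x-risk within the time window)."""
    return 1.0 - (1.0 - p_per_century) ** (n_actors * centuries)

p = 1e-6                                   # per-actor, per-century initiation probability
for n_actors in (1, 1_000, 1_000_000):     # lone system vs. small and large galactic civilisations
    for centuries in (1e3, 1e6, 1e7):      # 100,000 years up to 1 billion years
        print(f"N={n_actors:>9}, t={centuries:.0e} centuries: "
              f"P = {p_initiated(p, n_actors, centuries):.6f}")
```

Even with a one-in-a-million chance per star system per century, a million-system civilisation over a billion years is essentially guaranteed to see an initiation.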
If aliens exist, there is no long-term future
I hinted at this in the 'conflict with alien intelligence' section. I have suggested that galactic x-risks are real and could destroy a spacefaring civilisation of arbitrarily large size. I then argued that the probability of one of those risks occurring is extremely high. So if aliens exist at the same time as us (especially under a grabby aliens scenario), then the probability that they will initiate an x-risk that would affect us is also very high, whether intentionally or not. There might already be vacuum decay bubbles headed our way, so human civilisation would end even if we did everything right. In addition to concerns I'm raising about the resilience of our future galactic civilisation, I think aliens have big implications for discussions around big picture cause prioritisation[68].
If alien-induced galactic x-risks have a non-zero probability of occurring, it's plausible that interstellar civilisations facing similar existential risks might develop coordination mechanisms or mutual deterrence strategies over time. And we shouldn’t rule out the possibility that sufficiently advanced alien civilisations could be benevolent or even value-aligned in ways that reduce risk rather than exacerbate it. But the risk is very high if they will emerge and spread independently of us - the cumulative probabilities graph above is multiplied and out of our control.
The Way Forward
From here on, I will assume we are the only intelligent life in the universe at the moment. What's our best path?
The governance systems of a galactic civilisation would have to be amazing to prevent some of these galactic x-risks. This problem might not be solved or be given sufficient attention within the next few decades. However, once an interstellar mission is sent (Metaculus predicts 2116 (25%), 2248 (median), >2500 (75%)[69]), a self-perpetuating expansion of humanity throughout the cosmos is plausibly initiated. Taking time for a long reflection before initiating such an expansion seems important for increasing the chances of exporting a viable galactic governance system - one capable of sustaining a flourishing civilisation that can endure and spread across the galaxy for billions of years.
One solution is to abandon galactic colonisation and only expand in the digital space, using the real world exclusively for gathering resources[70]. But if we choose galactic colonisation, then a governance system would need to meet the following requirements in order to prevent galactic x-risks:
- The governance system should be able to spread ahead of (or at least with) human civilisation throughout the affectable universe.
- The governance system should be impossible to overthrow, manipulate, sidestep, or avoid.
- All potential routes to the creation of a galactic x-risk or astronomical suffering should be known by the governance system.
- Any activities pertaining to potential routes to galactic x-risks or astronomical suffering should be observable by the governance system.
- The governance system should hold or have access to the powers to prevent any actor from initiating a galactic x-risk or astronomical suffering.
- The governance system should be able to identify emerging alien civilisations and integrate itself into them or collaborate with them to continue meeting the above requirements.
It's worth noting that conditions 3 to 5 basically describe God - all knowing and all powerful. So it might be necessary to create a God-like being, or at least a superintelligence, prior to interstellar colonisation. The amount of foresight required to meet these conditions is inconceivable without a superintelligent system that is able to adapt to emerging hazards (or fundamentally unknowable future scenarios like other universes interacting with ours). It seems impossibly hard to predict what any one of hundreds of millions of star systems developing across a galaxy for billions of years might do. The good news is that we are in a very powerful position right now, especially if we are alone in the universe. We could prevent any star system in the future from gaining more power than its governance system. This advantage is lost if humanity spreads to other star systems soon and allows them to culturally drift or remove themselves from a centralised governance system.
Another point to mention from the requirements is that I've included the prevention of astronomical suffering. Inadvertently locking human civilisation in a state of astronomical suffering with no way out is literally the worst possible scenario. There are situations where the initiation of a galactic x-risk may be morally preferable. So, to avoid galactic x-risks, astronomical suffering should necessarily be prevented. There may even be an option to include a back door to allow galactic x-risks if the governance system fails to prevent astronomical suffering.
This 'governance system' I'm describing has very authoritarian vibes. Of course, pairing this with a long reflection may allow us to solve issues around freedom and governance associated with an AI overlord, or find alternative solutions. Or maybe we could become so wise that the probability of us or any of our descendants initiating a galactic x-risk is 0%.
One could argue that superintelligent AI will solve these problems, so we don't need to bother now - it would be best to focus on AI alignment. However, interstellar missions may come before aligned superintelligence. In fact, humanity would likely be able to launch an operation to colonise the entire reachable universe quite easily if we wanted to. So an interstellar mission may have to be actively prevented until we can be certain that its accompanying governance structure would prevent either further cosmic propagation or future galactic x-risks.
However, the point of no return might not be "an interstellar mission is launched". If we launch an interstellar mission in the next couple of decades, it would likely not reach anywhere near light speed and may take decades to reach its destination. So future missions with more advanced technology and faster spacecraft could likely catch up to it, even if they leave many years later. Additionally, the first interstellar settlers probably wouldn't be thinking about moving on to the next star system as soon as possible. Or, at least they probably wouldn't spread as quickly as they possibly could. So, a mission to catch up with them post-transformative AI and invite them to participate in our "governance system" would likely be successful[71]. This pathway has bad vibes though, and other star systems could be defence-dominant in the short-term[72]. In any case, I think a lot more work on investigating current plans and long-term scenarios for interstellar travel now is well justified[73].
Some key takeaways and hot takes to disagree with me on
- Existential risks capable of destroying a galactic civilisation (galactic x-risks) are possible.
- A galactic x-risk will inevitably occur even if the probability of a star system initiating one is extremely low.
- In the long-term future, conflicts between star systems will be offence-dominant.
- If aliens exist, there is no long-term future (there is no galactic civilisation lasting billions of years).
- There are 6 requirements for a governance system to prevent galactic x-risks, and they suggest the creation of God.
- Interstellar travel should be banned until galactic x-risks and galactic governance are solved.
- Galactic x-risks are relevant to big picture cause prioritisation.
- At least some of us should care about all of the above points now, rather than in 30 years or after ASI.
Edit: Adding some further reading as a lot of these great works are hidden within all the footnotes (or were missed originally):
- Daniel Deudney. 2020. Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity (space expansion increases x-risk)
- Phil Torres. 2018. Space colonization and suffering risks: Reassessing the “maxipok rule". Futures 100 (2018): 74-85. (galactic colonisation will lower existential security)
- Toby Ord. 2025. Forecasting can get easier over longer timeframes. EAG London 2025. (Toby had clearly thought about my whole post before I wrote it)
- Charlie Stross. 2015. On the Great Filter, existential threats, and griefers. (correlated galactic x-risks: "all it takes is one")
Acknowledgements: Thanks to Jess Riedel and Toby Ord for comments on the draft. I made a lot of changes since their comments and all mistakes remain my own.
- ^
These solar system x-risks were unlikely to be future great filters as they mainly required extremely advanced technology (e.g. stellar engineering), far more advanced than interstellar travel. So, spacefaring civilisations would most likely be interstellar before any of these large-scale sci-fi disasters occurred and so they don’t represent a reliable Great Filter.
- ^
It's a combination of both these factors. A huge population with a lot of different civilisations that will be independent actors because of the huge space between them. It means that they don't act as one civilisation that could do a bad thing, but potentially millions of civilisations that could do a bad thing. In the latter scenario, the bad thing has a much higher probability of occurring... it only takes one.
- ^
Whatever that might mean. I imagine wellbeing of sentient beings + number of sentient beings.
- ^
I am aiming to include very speculative or hypothetical threats that might emerge as real risks in the long-term future.
- ^
I don't think there's a good word for a civilisation spanning a huge amount of space. Galactic x-risk has a good ring to it, but most of the threats I'll discuss don't care about how big your civilisation is. "Universal x-risk" isn't as clear and "cosmic x-risk" sounds too much like cosmic threats.
This definition is, of course, based on the definition of x-risk used by Toby Ord in The Precipice.
- ^
Assuming that there isn't a near limit on the amount of time that the cosmos will be conducive to the existence of life.
- ^
Basically anything that can destroy a civilisation occupying a star system is potentially a galactic x-risk if it's used millions of times.
- ^
Large magnitude eruptions may dramatically cool our climate through the release of ash and sulfur, or warm the planet from significant releases of CO2. But other impacts include:
ozone destruction, reduction in rainfall, further biodiversity collapse/environmental damage (already in decline), widespread blockages of maritime trade, destruction of communication, and technology, financial losses, may lead to mass shortages of food, water and energy resources
All of these impacts combined are certainly a global catastrophic risk, and may be sufficient to cause the collapse of a global civilisation and the loss of its long-term potential.
No volcano in the Solar System is powerful enough to affect the habitability of multiple planets. However, a related but highly speculative scenario might be the detonation of a gas giant planet by the nuclear fusion of deuterium (e.g. of Jupiter). This may be initiated by a black hole impacting the planet or by the deployment of an initiator bomb[74]. The ignition of a gas giant planet may create an explosion so great that a whole solar system would be affected by it. But I'm hard-pressed to see how even this stretch of the imagination could be an existential threat to a whole spacefaring civilisation.
- ^
A geoengineering disaster may be initiated with a desire to cool the atmosphere of a planet, potentially by use of aerosols. It’s clear to see how this could go wrong. The best mitigation for this is to not do it, or make sure that if geoengineering is necessary, it is executed extremely responsibly.
The equivalent of this for a Solar System civilisation and a galactic civilisation are a stellar engineering disaster and a galactic core engineering disaster, respectively. Or maybe a planetary core engineering disaster is the more accurate planetary equivalent. Maybe. Moving on.
- ^
Including the runaway greenhouse effect and runaway refrigerator effect.
An artificially induced runaway greenhouse effect could occur if large-scale activity released vast quantities of greenhouse gases into the atmosphere. As heat becomes trapped more effectively, surface temperatures would escalate, causing oceans to evaporate and the water vapor to amplify the greenhouse effect further. This positive feedback loop could render the planet uninhabitable (Venus’ fate), collapsing ecosystems, infrastructure, and ultimately ending civilization. This is not even remotely a scenario for the present-day climate crisis.
There's also the "runaway refrigerator effect", where a positive feedback loop emerges that drives temperatures down[75]. This effect has been responsible for past ice ages on Earth and Mars.
- ^
The release of large quantities of highly toxic molecules into the atmosphere (e.g., through overenthusiastic chemical warfare) may destroy the long-term potential of human civilisation by eradicating the vast majority of the population. Most space outside of Earth's atmosphere is essentially toxic to humans anyway, so this isn't a big deal for a spacefaring civilisation.
- ^
Apart from the initial destruction, a nuclear war may generate a nuclear winter, reducing the solar influx and generating global sub-freezing temperatures. This is very likely to destroy the long-term potential of a global civilisation.
Nukes do work in space, but a spacefaring civilisation is not as vulnerable to this because firing missiles at every space station, moon, and planet to destroy everyone would be very difficult. A self-sustaining colony capable of preserving the civilisation’s long-term potential is very likely to survive. Other more powerful interstellar weapons are discussed later.
- ^
An asteroid hitting a planet has a proven capability to cause mass extinctions and a large impact could potentially destroy human civilisation.
By expanding to other planets, moons, and space stations, we prevent all of humanity from being wiped out by one impact.
Some extreme scenarios like a strong gravitational wave from a nearby and massive cataclysmic event like a black hole merger (not gonna happen[76]) creating a large scale instability in the asteroid belt and Oort cloud could result in chaos that might destroy a spacefaring civilisation. But in those speculative scenarios, the asteroid impacts are more like an effect than the existential risk itself.
- ^
I don't think this is a strong motivation for rapid space expansion. Almost none of those scenarios make Earth less habitable than Mars. But this will change in the long-term future once space settlements are more established, large, and self-sustaining. To me, patience with space expansion seems like the path that leads to the best existential security in the long-term. We should aim to increase existential security on Earth first before we export our fragile and unsustainable society to other planets where the challenges will be greater.
- ^
Solar flares and coronal mass ejections from stars are fairly common, and they can damage infrastructure like electrical grids and satellites.
Some have suggested that extremely powerful solar flares might also occur, so-called “superflares”. If the superflare were powerful enough, it may be capable of destroying a spacefaring civilisation by altering the surface temperatures of planets or destroying ozone layers. Humans living in space colonies may be more vulnerable to superflares if they exist outside the protection of an atmosphere. An advanced civilisation might easily mitigate the effects of superflares by predicting them in advance and using radiation shielding, or even constructing large protective structures in space.
- ^
A sudden stellar mass loss event would reduce the luminosity of the Sun. These are more common in the later stages of a star's life (and for larger stars). A mass loss event could be very destructive to a spacefaring civilisation that was taken off guard. A sudden stellar mass loss event may also have other destructive consequences from the ejection of plasma and particles. However, in general, this would be predicted quite easily, is probably not a plausible scenario for the Sun, and the catastrophic consequences may be prevented with solar shields in space, stellar engineering, geoengineering, or planetary orbit alteration. A similar argument also applies to the increasing luminosity of the Sun over billions of years.
- ^
The Sun may also be damaged or altered by a stellar engineering project, such as an attempt to extend the lifetime of a star or extract material, with resulting (presumably) unanticipated and destructive consequences. Additionally, an attempt to turn Jupiter into a star (e.g. to terraform the Galilean moons) may destroy Jupiter and wipe out a spacefaring civilisation. The proposed method for this was to seed Jupiter with a primordial black hole. Sounds safe.
- ^
These events have the capability to affect large regions of space. The effectors include microwave radiation, x-rays, gamma-ray bursts, cosmic radiation and particles, and neutrino showers. Space colonies may be more vulnerable to stellar explosions if they exist outside the protection of an atmosphere.
- ^
(neutron star mergers)
- ^
Some quantum gravity models allow for the spontaneous temporary formation of microscopic or macroscopic wormholes, which would quickly collapse, potentially into a black hole. If a wormhole were to spontaneously form near or within our Solar System and then collapse abruptly, it might produce extreme gravitational disturbances or tear spacetime locally.
A wormhole could also be created artificially. It seems that there are multiple potential pathways to creating stable wormholes under various assumptions and theories, and its plausibility may be clearer with a theory of everything. Quoting Anders Sandberg on wormholes:
dump one end in the Sun and another elsewhere (a la Stephen Baxter’s Ring), and you might drain the Sun and/or irradiate the Solar System if it is large enough.
- ^
Primordial black holes are a theoretical type of black hole that may have formed shortly after the Big Bang. The universe was very dense then, so black holes were forming everywhere and at a range of masses. So there could still be football-sized black holes lurking about the universe. If one were to hit us at high speed, it could destroy a planet.
While physicists predict that primordial black holes would move very quickly, it's a very different story if the primordial black hole approaches slowly. In that case it could settle in a planet’s core, which it would devour from the inside. Alternatively, if primordial black holes evaporate nearby, their intense burst of radiation during final stages could cause significant damage locally.
So it seems that these primordial black holes could get up to all sorts of mischief that may destroy a civilisation spread across a solar system. Though, primordial black holes remain highly theoretical and their impacts are entirely speculative.
- ^
Cosmic strings are one-dimensional defects or cracks in spacetime that stretch for potentially millions of lightyears. They are thought to have formed from the trapping of energy into strings during early universe phase transitions. They are hypothetical but are predicted from the Big Bang and may be detectable. Simulations show that a string crossing Earth would induce "global oscillations" (like an earthquake (non-catastrophically) ringing the whole planet). Depending on the density of the strings, this could conceivably shatter Earth or the Sun. There are probably a bunch of other lethal scenarios, including but not limited to bursts of high energy particles, intense gravitational waves, and distortions of spacetime.
- ^
The early universe likely underwent multiple phase transitions, during which the fundamental fields of physics changed. Domain walls could have formed when different regions of the universe settled into different vacuum states, creating two-dimensional topological defects at the boundaries between them. But it seems unlikely that they would have remained stable until the present: they would be so deadly that, had they formed in the early universe, they probably would have destroyed it by now. But if they were stable, and we passed through one, a spacefaring civilisation would be at the mercy of whole new fundamental physics, which isn't good for biochemistry.
- ^
Interstellar clouds dominated by gas may enrich atmospheres in unusual elements, or interstellar dust clouds could generate meteor showers and destroy satellites and space stations.
Interstellar clouds would usually be a mix of gas and dust.
- ^
Supernova remnants are basically a type of interstellar (plasma) cloud.
Supernova explosions are among the most energetic events in our universe, and they leave behind highly energetic remnants. Particularly dangerous remnants are those containing pulsars, as they can host pulsar wind nebulae, which are some of the most energetic objects in our universe (though only dangerous at short distances). If a solar system were to pass through a supernova remnant (not gonna happen to us), then chaos would ensue for the civilisation that inhabited it.
- ^
A rogue planet entering our solar system may naturally produce these catastrophic scenarios, throwing orbits into chaos. NASA estimates that there are far more rogue planets in our galaxy than planets orbiting stars. This risk is also very conducive to extremely unlikely imaginative scenarios, like how does that change if a gas giant instead of a rocky planet enters the Solar System? What about a “rogue” black hole (or maybe even a primordial black hole)?
- ^
For this to happen accidentally, there would have to be some extreme oversight. If you have the power to move planets, you can do the maths. Also, moving planets wouldn't be the most subtle military strategy in an interplanetary conflict. So I think the natural scenario (i.e., a rogue planet) is most likely.
- ^
More info: https://gizmodo.com/12-ways-humanity-could-destroy-the-entire-solar-system-1696825692
Their strategic value would probably make them targets in conflict, or their control could be seized by rogue actors or misaligned AI systems, leading to civilisation-wide collapse. So a spacefaring civilisation would need to incorporate very well-thought out safeguards and protections into the design of a giant structure.
- ^
Dispersion throughout a galaxy likely counters this threat. Kurzgesagt made a great video on interstellar war featuring interstellar weapons.
Though, interstellar weapons are the kings of "use it a bunch of times and it's a galactic x-risk". I'm assuming this comes under "self-replicating machines" though. Don't worry too much about the categories.
- ^
Quasars form around black holes like the one at the centre of our galaxy:
The inflow of gas into the black hole releases a tremendous amount of energy, and a quasar is born. The power output of the quasar dwarfs that of the surrounding galaxy and expels gas from the galaxy in what has been termed a galactic superwind
This "superwind" drives all gas away from the inner galaxy. Most galaxies have already gone through a quasar phase as they were common in the early universe.
Another potential threat from the centres of galaxies are cosmic rays from galactic core explosions. Hundreds of thousands of times more damaging than supernova explosions, some theorise that cosmic rays from other galaxies have already caused mass extinctions on Earth. Explosions from the core of the Milky Way likely make the inner galaxy uninhabitable, so large explosions may have catastrophic impacts for a galactic civilisation.
- ^
Or, at least, all of the ones that turn out to actually be real and capable of destroying a galactic civilisation.
- ^
Say we send out a fleet of exponentially self-replicating von Neumann probes to colonize the Galaxy. Assuming they’re programmed very, very poorly, or somebody deliberately creates an evolvable probe, they could mutate over time and transform into something quite malevolent. Eventually, our clever little space-faring devices could come back to haunt us by ripping our Solar System to shreds, or by sucking up resources and pushing valuable life out of existence.
- ^
There is an important question about offence-defence balance here. If von Neumann probes are already spread throughout a galaxy and an alien civilisation emerges, the emerging civilisations can be systematically destroyed before they become a threat.
However, if star systems are defence dominant, then it may not be a threat if one star system among a galactic civilisation releases a von Neumann probe. I am a minority viewpoint among people I've talked to about this in believing that cosmic warfare would be offence dominant. I cannot conceive of any defence against a self-replicating probe that is able to observe your defences and re-design itself accordingly with arbitrarily advanced weapons like interstellar lasers. I played this game as a kid, there is always a more powerful weapon to overcome your defence. In the plateau of technological innovation and resource availability, offence will always win. Change my mind.
- ^
A galactic civilisation would have a huge biological diversity (or even digitally sentient beings). No single pandemic would be able to infect all of them unless a superintelligent being was repeatedly designing them. In which case, superintelligence is the x-risk, and also it would have more efficient ways to kill everyone than generating plagues.
- ^
I'm definitely leaning into superintelligence for these last two points on nanotech and biological machines. Don't worry too much about the categories - the biggest galactic existential risk is a combination of multiple items on the list.
- ^
Bostrom, Nick. "Existential risks: Analyzing human extinction scenarios and related hazards." Journal of Evolution and technology 9 (2002).
- ^
SU(5) is disproven because observational evidence indicates that the proton's half-life is far longer than the theory predicts. Proton decay was a key prediction of the theory.
- ^
Other particles like photons could also decay into lighter unknown particles in speculative scenarios like Lorentz-violating theories or hidden sectors.
- ^
The existence of proton decay and magnetic monopoles are some of the key predictions of many of the grand unifying theories in the 1970s. The vector boson or the Higgs mesons would have to be extremely massive for the proton to decay, which they aren't, and they can't be changed because they're fundamental parameters. However, there's a theoretical particle called a magnetic monopole that is stable independent of those values. So passing a magnetic monopole through matter allows the extremely massive energy conditions to be sidestepped, and the proton could decay. Recent developments have even argued that this is possible without new physics, just a deeper understanding of existing symmetries. However, as this process requires a monopole to interact with matter directly, it's not going to spread out and destroy all matter in the universe unless the reaction was self-propagating, i.e., the interaction created new monopoles.
- ^
Emerging from a theory of quantum gravity, which reveals new routes to induce proton decay, similarly only acting locally.
- ^
Free neutrons (i.e., neutrons not bound within an atomic nucleus) are already known to decay into protons (and an electron and an antineutrino).
- ^
This emerges from brane theory, where multiple other dimensions exist within our universe and can interact with our own.
- ^
e.g. a magnetic monopole interacting with a proton and causing it to decay produces another magnetic monopole.
- ^
e.g. decay of subatomic particles might follow vacuum decay
- ^
That does not represent a consensus view among physicists.
- ^
The gravitational constant is a fundamental physical constant that quantifies the strength of the gravitational force between objects. It appears in Newton's Law of Universal Gravitation and Einstein's theory of General Relativity. Some theories, such as scalar-tensor theories, string theory, and braneworld models, allow the gravitational constant to change.
- ^
- ^
Planck's constant is a fundamental parameter in quantum mechanics: "a photon's energy is equal to its frequency multiplied by the Planck constant, and the wavelength of a matter wave equals the Planck constant divided by the associated particle momentum". A slight increase could lead to atoms being much larger than they are now, potentially affecting the stability of matter and the size of celestial objects.
- ^
This has huge implications for end-of-universe scenarios as it is closely associated with dark energy. If the total mass-energy density of the universe is high enough (i.e., if dark energy weakens), the expansion could eventually slow down and stop. If gravity overcomes dark energy, the universe would begin to contract. Eventually, the universe would shrink into a singularity - a state of infinite density and temperature, similar to the conditions at the moment of the Big Bang.
- ^
This constant governs the strength of electromagnetic interactions. Life as we know it might not be possible under even modest changes in the fine structure constant, as protein folding, DNA stability, and biochemical reactions depend on electromagnetic interactions.
- ^
Color confinement confines quarks inside protons and neutrons. If this broke down, quarks could roam free or hadrons might disintegrate.
- ^
In some Grand Unifying Theories, quarks and leptons are two sides of the same field. This could allow for quark-to-lepton transitions (e.g. a neutron turning into an antineutrino). Not good.
- ^
Referring to the simulation theory and black hole universe theory (explained below)
- ^
Long & accurate explanation here.
- ^
Kurzgesagt is the elite source on this:
https://sites.google.com/view/sources-black-hole-universe/
https://www.youtube.com/watch?v=71eUes30gwc&t=3s&ab_channel=Kurzgesagt%E2%80%93InaNutshell
- ^
Robin Hanson explained this in his blog: https://www.overcomingbias.com/p/beware-general-visible-near-preyhtml
"So, bottom line, the future great filter scenario that most concerns me is one where our solar-system-bound descendants have killed most of nature, can’t yet colonize other stars, are general predators and prey of each other, and have fallen into a short-term-predatory-focus equilibrium where predators can easily see and travel to most all prey. Yes there are about a hundred billion comets way out there circling the sun, but even that seems a small enough number for predators to careful map and track all of them."
- ^
Maybe some kind of thing has ultimate moral value and is optimised for across the galaxy e.g. a simulation of digital sentience in bliss. This means that the society no longer has heterogeneity that would protect it against different types of threats. So the society may be very weak to one particular thing, like an advanced computer virus-type threat or an evil superintelligence.
- ^
I have a lot of uncertainty about this. I'd love to hear another take... it's stuff like vacuum decay that concerns me. Vacuum decay is utterly impossible to defend against, unless you could separate your star system from the rest of the universe or move into another dimension. Similar thing with strange matter - sure, you might be able to produce gravitational waves to protect against it, but then you might have to deal with strange matter AND incoming gravitational waves. With effectively infinite resources and arbitrarily advanced technology, offence always wins in the end.
I'm particularly unsure about whether I'm right about this statement: Even if a superintelligence exists in your solar system, the creation of a superintelligence optimised for total destruction somewhere else in the galaxy will inevitably lead to your destruction.
- ^
How bad this would be for the long term potential of sentience is very dependent on the aliens that destroy us. It’s possible that the aliens might be intelligent but not sentient, so if they replaced us that would be a disaster. The aliens could also be tyrannous, or have little respect for other sentient beings, and thus their conquering of our galaxy represents an s-risk. Much ambiguity.
- ^
Yeah this took me more than just one afternoon. I really missed the mark on this "exhaustive list of cosmic threats" in 2023, which lists a pathetic 19 threats.
- ^
The data for the bar graph can be accessed in this spreadsheet: https://docs.google.com/spreadsheets/d/1DoSyDlwsuH2GXd_xI4kCxfKlnaNOtk1WvemZdQhVsAA/edit?usp=sharing
Email jordan.stone@spacegeneration.org if you can't access it. FYI, there are like a million caveats and conditions with each thing on the list, so if it's confusing I'm very communicative over email or on the forum, and I'm happy to answer questions if you're confused or interested.
- ^
If someone did this in 1980, things would look a lot more concerning!
- ^
There is so much uncertainty here that these can only really be guesses. I'm happy to update them if other points or new research comes to light. Maybe I'll re-write the post in 2040.
- ^
All speculative physics stuff is set at a 10% probability of being real as a geometric mean to reflect my uncertainty. If I were being conservative, the probability would be more like 1%.
- ^
Potentially avoided by wormholes or quantum coupling
- ^
Again, assuming that a superintelligence has not accompanied the expansion of human civilization to prevent any galactic x-risks from occurring.
- ^
i.e., neartermist vs longtermist.
- ^
Some good arguments for why this might take a really long time. We need to send probes first:
We wouldn't send humans (the question specifies humans):
- ^
Without a strong presence in the real world, this would leave the civilisation vulnerable to the emergence of alien life elsewhere in the universe though, which is probably an inevitability given enough time.
- ^
Once we have this governance system, we probably need to spread throughout the affectable universe as fast as possible to prevent independent alien civilisations from emerging, or gain the ability to collaborate with them on preventing galactic x-risks.
- ^
Up until we become a Type II civilisation, maybe; after that I think things become very offence-dominant.
- ^
The seminal work on the subject is this paper: Armstrong, Stuart, and Anders Sandberg. "Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox." Acta Astronautica 89 (2013): 1-13.
- ^
Thanks to Alexey Turchin for pointing me towards this potential risk.
- ^
Wikipedia: "Mars and Earth during the Cryogenian period may have experienced the opposite of a runaway greenhouse effect: a runaway refrigerator effect. Through this effect, a runaway feedback process may have removed much carbon dioxide and water vapor from the atmosphere and cooled the planet. Water condenses on the surface, leading to carbon dioxide dissolving and chemically binding to minerals. This reduced the greenhouse effect, lowering the temperature and causing more water to condense. The result was lower temperatures, with water being frozen as subsurface permafrost, leaving only a thin atmosphere.[38][39] In addition, ice and snow are far more reflective than open water, with an albedo of 50-70% and 85% respectively. This means that as a planet's temperature decreases and more of its water freezes, its ability to absorb light is reduced, which in turn makes it even colder, creating a positive feedback loop.[40] This effect, combined with the decrease in heat-retaining clouds and vapor, becomes runaway once snow and ice coverage reach a certain threshold (within 30 degrees of the equator), plunging the planet into a stable snowball state.[41][42]"
- ^
1 in 2.8 quadrillion chance of a binary black hole approaching the Solar System in the next 100 years. The internet is a big place: https://physics.stackexchange.com/questions/464372/what-is-the-closest-distance-for-an-undiscovered-black-hole#:~:text=It%20is%20also%20absurdly%20unlikely,parsecs%2C%20or%2016.3%20light%20years.
Toby_Ord @ 2025-06-18T13:44 (+28)
This is a very interesting post. Here's how it fits into my thinking about existential risk and time and space.
We already know about several related risk effects over space and time:
- If different locations in space can serve as backups, such that humanity fails only if all of them fail simultaneously, then the number of these only needs to grow logarithmically over time for there to be a non-zero chance of indefinite survival.
- However, this does not solve existential risk, as it only helps with uncorrelated risks such as asteroid impacts. Some risks are correlated between all locations in a planetary system or all stars in a galaxy (often because an event in one causes the downfall of all others) and having multiple settlements doesn't help with those.
- Also, to reach indefinite survival, we need to reduce per-century existential risk by some constant fraction each century (quite possibly requiring a deliberate and permanent prioritisation of this by humanity)
You pointed out a fourth related issue. When it comes to the correlated risks, settling more and more star systems doesn't just not help with these — it creates more and more opportunities for these to happen. If there were 100 billion settled systems then at least for the risks that can’t be defended against (such as vacuum collapse) a galactic-scale civilisation would undergo 100 billion centuries worth of this risk per century. So as well as an existential-risk-reducing effect for space settlement, there is also a systematic risk-increasing effect. (And this is a more robust and analysable argument than those about space wars.)
I’ve long felt that humanity would want to bind all its settlements to a common constitution, ruling out certain things such as hostile actions towards each other, preparing advanced weaponry that could be used for this purpose, or attempts to seize new territory in inappropriate ways. That might help sufficiently for some of the coordination problems, but I hadn’t noticed that even if each location is coordinated and aligned, if we settle 100 billion worlds, a certain part of the accident risk gets multiplied by more than a billion-fold, and this creates a fundamental tension between the benefits of settling more places and the risks of doing so. I feel like getting per-century risk down to 1% in a few centuries might not be that hard, but if we need to get it down to 0.00000000001%, it is less clear that is possible (though at least it is only the *objective probability* that needs to get so low — it’s OK if your confidence you are right isn’t as strong as 99.999999999%).
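A rough way to formalise the backup effect and the multiplication effect (a sketch only; $p$ is an assumed per-century probability that a single settlement fails locally, $q$ an assumed per-century probability that a single settlement triggers a galaxy-wide event, and $N(t)$ the number of settlements in century $t$):

```latex
% Uncorrelated risk: civilisation fails in century t only if all N(t) backups fail at once.
\[
\Pr[\text{total failure in century } t] = p^{N(t)} .
\]
% If N(t) = c \log t, then p^{N(t)} = t^{-c\log(1/p)}, which is summable when c\log(1/p) > 1,
% so logarithmic growth in the number of backups is enough for
\[
\Pr[\text{indefinite survival}] \;\ge\; \prod_{t=1}^{\infty} \bigl(1 - p^{N(t)}\bigr) \;>\; 0
\qquad \text{whenever} \qquad \sum_{t=1}^{\infty} p^{N(t)} < \infty .
\]
% Correlated (galaxy-wide) risk runs the other way: with N settlements each able to
% trigger it independently, the per-century rate is
\[
1 - (1 - q)^{N} \;\approx\; N q \quad \text{for small } q,
\]
% i.e. it grows roughly linearly with the number of independently governed settlements.
```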
One style of answer is to require that almost no settlements are capable of these galaxy-wide existential risks. e.g.
- they have no people, but contribute to our goals in some other way (such as pure happiness or robotic energy-harvesting for some later project)
- or they have people flourishing in relatively low-tech utopian states
- or they have people flourishing inside virtual worlds maintained by machines, where the people have no way of affecting the outside world
- or we find all the possible correlated risks in advance and build defense-in-depth guardrails around all of them.
Alternatively, if each settlement is capable of imposing such risks, then you could think of each star-system-century as playing the same role as a century in my earlier model. i.e. instead of needing to exponentially decrease per-period risk over time, we need to exponentially decrease it per additional star system as well. But this is a huge challenge if we are thinking of adding billions of places within a small number of centuries. Alternatively, one could think of it as requiring that we divide the acceptable level of per-period risk by the number of settlements. In the worst case, one might not be able to gain any EV by settling other star systems relative to just staying on one, as the risk-downsides outweigh the benefits. (But the kinds of limited settlements listed above should still be possible.)
There is an interesting question about whether raw population has the same effect as extra settled star systems. I’m inclined to think it doesn’t, via a model whereby well-governed star systems aren’t just as weak as their most irresponsible or unlucky citizen, even if the galaxy is as weak as its most irresponsible or unlucky star system. e.g. that doubling the population of a star system doesn’t double the chance it triggers a vacuum collapse, but doubling the number of independently governed star systems might (as a single system might decide not to prioritise risk avoidance).
Toby_Ord @ 2025-06-18T13:48 (+15)
Here is a nice simple model of the trade-off between redundancy and correlated risk. Assume that each time period, each planet has an independent and constant chance of destroying civilisation on its own planet and an independent and constant chance of destroying civilisation on all planets. Furthermore, assume that unless all planets fail in the same time period, they can be restored from those that survive.
e.g. assume the planetary destruction rate is 10% per century and the galaxy destruction rate is 1 in 1 million per century. Then with one planet the existential risk for human civilisation is ~10% per century. With two planets it is about 1% per century, and reaches a minimum at about 6 planets, where there is only a 1 in a million chance you lose all planets simultaneously from planetary risk, but now ~6 in a million chance of one of them destroying everything. In this case, beyond 6 planets, the total risk starts rising as the amount of redundancy they add is smaller than the amount of new risk they create, and by the time you have 1 million planets, the existential risk rate for human civilisation per century is about 63%.
This is an overly simple model and I've used arbitrary parameters, but it shows it is quite easy for risk to first reduce and then increase as more planets are settled, with a risk-optimal level of settlement in between.
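A minimal script reproducing these numbers under the same assumptions (10% per-century planetary risk, one-in-a-million per-century galaxy-wide risk per planet; the function name and structure are just illustrative):

```python
# Redundancy vs. correlated risk, using the arbitrary parameters from the comment above.
p_local = 0.10    # per-century chance a planet destroys civilisation on itself
p_global = 1e-6   # per-century chance a planet destroys civilisation on all planets

def risk_per_century(n_planets: int) -> float:
    """Chance that civilisation as a whole is lost in a given century.

    It is lost if every planet suffers a local collapse in the same century,
    or if at least one planet triggers a galaxy-wide event.
    """
    all_local_fail = p_local ** n_planets
    any_global_event = 1 - (1 - p_global) ** n_planets
    return 1 - (1 - all_local_fail) * (1 - any_global_event)

for n in (1, 2, 6, 10, 1_000_000):
    print(f"{n:>9} planets: {risk_per_century(n):.2e} per century")

# Roughly: 1.0e-01 (one planet), 1.0e-02 (two), a minimum of ~7e-06 around six planets,
# then rising again to ~6.3e-01 (63%) with a million planets.
```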
Toby Tremlett🔹 @ 2025-06-26T11:25 (+19)
I think this post is underrated (karma-wise) and I'm curating it. I love how thorough this is, and the focus on under-theorised problems. I don't think there are many other places like this where we could have a serious conversation about these risks.
I'd like to see more critical engagement on the key takeaways (helpfully listed at the end). As a start, here's a poll for the key claim Jordan identifies in the title:
Lukas Finnveden @ 2025-06-29T04:56 (+7)
I disagree that it will probably doom the long-term future.
This is partly because I'm pretty optimistic that, if interstellar colonization would predictably doom the long-term future, then people would figure out solutions to that. (E.g. having AI monitors travel with people and force them not to do stuff, as Buck mentions in the comments.) Importantly, I think interstellar colonization is difficult/slow enough that we'll probably first get very smart AIs with plenty of time to figure out good solutions. (If we solve alignment.)
But I also think it's less likely that things would go badly even without coordination. Going through the items in the list:
| Galactic x-risk | Is it possible? | Would it end Galactic civ? | Lukas' take |
|---|---|---|---|
| Self-replicating machines | 100% ✅ | 75% ❌ | I doubt this would end galactic civ. The quote in that section is about killing low-tech civs before they've gotten high-tech. A high-tech civ could probably monitor for and destroy offensive tech built by self-replicators before it got bad enough that it could destroy the civ. |
| Strange matter | 20%[64] ❌ | 80% ❌ | I don't know much about this. |
| Vacuum decay | 50%[65] ❌ | 100% ✅ | "50%" in the survey was about vacuum decay being possible in principle, not about it being possible to technologically induce (at the limit of technology). The survey reported significantly lower probability that it's possible to induce. This might still be a big deal though! |
| Subatomic particle decay | 10%[64] ❌ | 100% ✅ | I don't know much about this. |
| Time travel | 10%[64] ❌ | 50% ❌ | I don't know much about this, but intuitively 50% seems high. |
| Fundamental Physics Alterations | 10%[64] ❌ | 100% ✅ | I don't know much about this. |
| Interactions with other universes | 10%[64] ❌ | 100% ✅ | I don't know much about this. |
| Societal collapse or loss of value | 10% ❌ | 100% ✅ | This seems like an incredibly broad category. I'm quite concerned about something in this general vicinity, but it doesn't seem to share the property of the other things in the list where "if it's started anywhere, then it spreads and destroys everything everywhere". Or at least you'd have to narrow the category a lot before you got there. |
| Artificial superintelligence | 100% ✅ | 80% ❌ | The argument given in this subsection is that technology might be offense-dominant. But my best guess is that it's defense-dominant. |
| Conflict with alien intelligence | 75% ❌ | 90% ❌ | The argument given in this subsection is that technology might be offense-dominant. But my best guess is that it's defense-dominant. |
Expanding on the question about whether space warfare is offense-dominant or defense-dominant: One argument I've heard for defense-dominance is that, in order to destroy very distant stuff, you need to concentrate a lot of energy into a very tiny amount of space. (E.g. very narrowly focused lasers, or fast-moving rocks flung precisely.) But then you can defeat that by jiggling around the stuff that you want to protect in unpredictable ways, so that people can't aim their highly-concentrated energy from far away and have it hit correctly.
Now that's just one argument, so I'm not very confident. But I'm at <50% on offense-dominance.
(A lot of the other items on the list could also be stories for how you get offense-dominance, where I'm especially concerned about vacuum decay. But it would be double-counting to put those both in their own categories and to count them as valid attacks from superintelligence/aliens.)
Buck @ 2025-06-27T15:13 (+6)
Interstellar travel will probably doom the long-term future
Seems false, probably people will just sort out some strategy for enforcing laws (e.g. having AI monitors travel with people and force them not to do stuff).
Jelle Donders @ 2025-06-28T15:41 (+5)
Interstellar travel will probably doom the long-term future
Some quick thoughts: By the time we've colonized numerous planets and cumulative galactic x-risks are starting to seriously add up, I expect there to be von Neumann probes traveling at a significant fraction of the speed of light (c) in many directions. Causality moves at c, so if we have probes moving away from each other at nearly 2c, that suggests extinction risk could be permanently reduced to zero. In such a scenario most value of our future lightcone could still be extinguished, but not all.
A very long-term consideration is that as the expansion of the universe accelerates so does the number of causally isolated islands. For example, in 100-150 billion years the Local Group will be causally isolated from the rest of the universe, protecting it from galactic x-risks happening elsewhere.
I guess this trades off with your 6th conclusion (Interstellar travel should be banned until galactic x-risks and galactic governance are solved). Getting governance right before we can build von Neumann probes at >0.5c is obviously great, but once we can build them it's a lot less clear if waiting is good or bad.
Thinking out loud, if any of this seems off lmk!
Dan_Keys @ 2025-06-29T01:18 (+3)
Causality moves at c, so if we have probes moving away from each other at nearly 2c, that suggests extinction risk could be permanently reduced to zero.
This isn't right. Near-speed-of-light movement in opposite directions doesn't add up to faster-than-light relative movement. E.g., two probes each moving away from a common starting point at 0.7c have a speed relative to each other of about 0.94c, not 1.4c, so they stay in each other's lightcone.
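For reference, the special-relativistic velocity-addition formula behind that 0.94c figure:

```latex
% Relativistic addition of two collinear velocities u and v:
\[
w \;=\; \frac{u + v}{1 + \dfrac{uv}{c^{2}}},
\qquad
u = v = 0.7c
\;\Rightarrow\;
w = \frac{1.4c}{1 + 0.49} \approx 0.94c .
\]
```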
(That's standard special relativity. I asked o3 how that changes with cosmic expansion and it claims that, given our current understanding of cosmic expansion, they will leave each other's lightcone after about 20 billion years.)
JordanStone @ 2025-06-28T22:03 (+2)
Awesome speculations. We're faced with such huge uncertainty and huge stakes. I can try and make a conclusion based on scenarios and probabilities, but I think the simplest argument for not spreading throughout the universe is that we have no idea what we're doing.
This might even apply to spreading throughout the Solar System too. If I'm recalling correctly, Daniel Deudney argued that a self-sustaining colony on Mars is the point of no return for space expansion as it would culturally diverge from Earth and their actions would be out of our control.
JordanStone @ 2025-06-26T20:50 (+3)
Interstellar travel will probably doom the long-term future
By "probably" in the title, apparently I mean just over 50% chance ;)
Sharmake @ 2025-06-28T16:19 (+2)
Interstellar travel will probably doom the long-term future
A lot of the reason for my disagreement stems from thinking that most galactic-scale disasters either don't actually serve as x-risks (like the von Neumann probe scenario), because they are defendable, or they require some shaky premises about physics to come true.
Changing the universe's constants is an example.
Also, in most modern theories of time travel, you only get self-consistent outcomes, so the classic portrayals of using time travel to destroy the universe through paradoxical inputs wouldn't work: only self-consistent outcomes are allowed, and the paradox would almost certainly be prevented beforehand.
The biggest uncertainty here is how much acausal trade lets us substitute for the vast distances that make traditional causal governance impossible.
For those unaware of acausal trade: it basically replaces direct communication with predicting what the other party wants. If you can run vast numbers of simulations, you can build very, very good predictive models of what the other wants, so that both of you can trade without any communication, which is necessary for realistic galactic empires/singletons to exist:
https://www.lesswrong.com/w/acausal-trade
I don't have much of an opinion on the question, but if it's true that acausal trade can wholly substitute for the communication that is traditionally necessary to suppress rebellions in empires, then most galactic/universe-scale risks are pretty easily avoidable, because we don't have to roll the dice on every civilization doing its own research that may lead to x-risk.
JordanStone @ 2025-06-28T22:34 (+1)
A lot of the reason for my disagreement stems from thinking that most galactic-scale disasters either don't actually serve as x-risks (like the von Neumann probe scenario), because they are defendable, or they require some shaky premises about physics to come true.
I think each galactic x-risk on the list can probably be disregarded, but combined, and with the knowledge that we are extremely early in thinking about this, they present a very convincing case to me that at least 1 or 2 galactic x-risks are possible.
The biggest uncertainty here is how much acausal trade lets us substitute for the vast distances that make traditional causal governance impossible.
Really interesting point, and probably a key consideration on existential security for a spacefaring civilisation. I'm not sure if we can be confident enough in acausal trade to rely on it for our long-term existential security though. I can't imagine human civilisation engaging in acausal trade if we expanded before the development of superintelligence. There are definitely some tricky questions to answer about what we should expect other spacefaring civilisations to do. I think there's also a good argument for expecting them to systematically eliminate other spacefaring civilisations rather than engage in acausal trade.
Sharmake @ 2025-06-29T03:11 (+2)
I think each galactic x-risk on the list can probably be disregarded, but combined, and with the knowledge that we are extremely early in thinking about this, they present a very convincing case to me that at least 1 or 2 galactic x-risks are possible.
I think this is kind of a crux, in that I currently think the only possible galactic-scale risks are ones where our standard model of physics breaks down in a deep way. Once you can get at least one Dyson swarm up, you are virtually invulnerable to extinction methods that don't involve us being very wrong about physics.
This is always a tail risk of interstellar travel, but I would not say that interstellar travel will probably doom the long-term future as stated in the title.
A better title would be "interstellar travel poses unacknowledged tail risks".
Really interesting point, and probably a key consideration on existential security for a spacefaring civilisation. I'm not sure if we can be confident enough in acausal trade to rely on it for our long-term existential security though. I can't imagine human civilisation engaging in acausal trade if we expanded before the development of superintelligence. There are definitely some tricky questions to answer about what we should expect other spacefaring civilisations to do. I think there's also a good argument for expecting them to systematically eliminate other spacefaring civilisations rather than engage in acausal trade.
I agree that if there's an x-risk that isn't defendable (for the sake of argument), then acausal trade relies on every other civilization choosing to acausally trade in a manner where the parent civilization can prevent x-risk. But the good news is that a lot of the more plausible (in a relative sense) x-risks have a light-speed limit, which means that, given we are probably alone in the observable universe (via the logic of dissolving the Fermi paradox), only humanity really has to do the acausal trade.
And a key worldview crux: conditioning on humanity becoming a spacefaring civilization, I expect a superintelligence that takes over the world to come first, because it's much easier to develop AI good enough to develop space sufficiently than it is for humans to go spacefaring alone.
And AI progress is likely to be fast enough such that there's very little time for rogue spacefarers to get outside of the parent civilization's control.
The Dissolving the Fermi Paradox paper is here:
ben.smith @ 2025-06-29T13:54 (+1)
Interstellar travel will probably doom the long-term future
My intuition is that most of the galactic existential risks listed are highly unlikely, and it is possible that the likely ones (self-replicating machines and ASI) may be defense-dominant. An advanced civilization capable of creating self-replicating machines to destroy life in other systems could well be capable of building defense systems against a threat like that.
Dan_Keys @ 2025-06-29T01:24 (+1)
I'm unsure how to interpret "will probably doom". 2 possible readings:
- A highly technologically advanced civilization that tries to get really big will probably wind up wiping itself out due to the dynamics in this post. More than half of all highly technologically advanced civilizations that grow really big go extinct due to drastically increasing their attack surface to existential threats.
- The following claim is probably true: a highly technologically advanced civilization that tries to get really big will almost certainly wind up wiping itself out due to the dynamics in this post. Almost every very large, highly technologically advanced civilization that grows big has a doom-level event spawn in a pocket of the civilization and spread to the rest of it.
The 2nd reading is big if true - it implies that the EV of the future arising from our civilization is much lower than it would otherwise seem, and that civilizations might be better off staying small - but I disagree with it.
For the first one I'm on the agree side, but it doesn't change the overall story very much.
Ben_West🔸 @ 2025-06-29T03:44 (+12)
This is one of the most interesting "Cause X" posts I've read in a while, thanks for writing it!
JordanStone @ 2025-06-29T04:07 (+1)
Thank you :)
OscarD🔸 @ 2025-06-19T14:40 (+8)
Very interesting, and props in particular for assembling the cosmic threats dataset - that does seem like a lot of work!
I tend to agree with you and Joseph that there isn't anything on the object level to be done about these things yet, beyond just trying to ensure we get a long reflection before interstellar colonisation, as you suggest.
On hot take 2, this relies on the risks from each star system being roughly independent, so breaking this assumption seems like a good solution, but then each star system being very correlated maybe seems bad for liberalism and diversity of forms of flourishing and so forth. But maybe some amount of regularity and conformity is the price we need to pay for galactic security.
Acausal trade/cooperation may end up being crucial here too once civilisation is spread across distances where it is hard or impossible to interact normally.
Sharmake @ 2025-06-28T16:25 (+2)
On hot take 2, this relies on the risks from each star system being roughly independent, so breaking this assumption seems like a good solution, but then each star system being very correlated maybe seems bad for liberalism and diversity of forms of flourishing and so forth. But maybe some amount of regularity and conformity is the price we need to pay for galactic security.
I think liberalism is unfortunately on a timer that will almost certainly expire pretty soon, no matter what we do.
Either we technologically regress, due to the human population falling and more anti-democratic civilizations winning outright through the zero/negative-sum games being played, or we create AIs that replace us; given the incentives plus the sheer difference in power, those AIs by default create something closer to a dictatorship for humans, and in particular value alignment is absolutely critical in the long run for AIs that can take every human job.
Modern civilization is not stable at all.
Acausal trade/cooperation may end up being crucial here too once civilisation is spread across distances where it is hard or impossible to interact normally.
Yeah, assuming no FTL, acausal trade/cooperation is necessary if you want anything like a unified galactic/universal polity.
JordanStone @ 2025-06-20T18:10 (+1)
Thanks Oscar :)
Yeah there are so many horrible trade-offs to figure out around long-term resilience and liberty/diversity. I'm hopeful that these are solvable with a long reflection (and superintelligence!).
David Mathers🔸 @ 2025-06-26T13:50 (+5)
This is very bad news for longtermism if correct, since it suggests that value in the far future gained by preventing extinction now is much lower than it would otherwise be.
MichaelStJules @ 2025-06-26T22:03 (+4)
I haven't read much of this post, so just call me out if this is totally off base, but I suspect you're treating events as more "independent" than you should.
Relevant: A nuclear war forecast is not a coin flip by David Johnston.
I also illustrated in a comment there:
On the other extreme, we could imagine repeatedly flipping a coin with only heads on it, or a coin with only tails on it, but we don't know which, but we think it's probably the one only with heads. Of course, this goes too far, since only one coin flip outcome is enough to find out what coin we were flipping. Instead, we could imagine two coins, one with only heads (or extremely biased towards heads), and the other a fair coin, and we lose if we get tails. The more heads we get, the more confident we should be that we have the heads-only coin.
To translate this into risks: we don't know what kind of world we live in and how vulnerable it is to a given risk, and the probability that the world is vulnerable to the given risk at all is an upper bound for the probability of catastrophe. As you suggest, the more time goes on without catastrophe, the more confident we should be that we aren't so vulnerable.
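A minimal sketch of that update (illustrative numbers only: a 50% prior that we hold the "heads-only" coin, with a fair coin as the alternative):

```python
# Updating on which coin (world) we have after observing n catastrophe-free periods.
# Hypothesis A: heads-only coin (the world is not vulnerable to this risk).
# Hypothesis B: fair coin (50% chance of catastrophe per period).
prior_safe = 0.5  # illustrative prior, not an estimate

def p_safe_after(n_heads: int, prior: float = prior_safe) -> float:
    """Posterior probability of the heads-only coin after n heads in a row."""
    likelihood_safe = 1.0               # heads-only coin always comes up heads
    likelihood_vulnerable = 0.5 ** n_heads
    return prior * likelihood_safe / (
        prior * likelihood_safe + (1 - prior) * likelihood_vulnerable
    )

for n in (0, 1, 5, 20):
    p_safe = p_safe_after(n)
    # The chance of catastrophe on the next flip is bounded by P(vulnerable world):
    next_risk = (1 - p_safe) * 0.5
    print(f"after {n:>2} heads: P(safe coin) = {p_safe:.4f}, next-flip risk <= {next_risk:.4f}")
```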
JordanStone @ 2025-06-28T22:51 (+1)
Definitely on base :D
I think a galactic civilisation needs to have absolute existential security, or a galactic x-risk will inevitably occur (i.e., they need a coin that always lands on heads). If your galactic civilisation has survived for longer than you would have expected it to based on cumulative chances, then you can be very confident you've achieved absolute existential security (you have that coin). But a galactic civ would have to know whether they have the coin that is always heads, or the coin that is heads 99.9999999% of the time. I'm not sure how that's possible.
ben.smith @ 2025-06-29T14:00 (+3)
Can you say something about what N-D lasers are and why they present such a strong threat? A google search for "N-D laser" just turns up neodymium lasers and it isn't clear why they would be as threatening as you present. In the worst case, someone builds a probe with a very powerful fusion energy source that can fire a laser powerful enough to kill people; you could probably also build a laser or defense system to strike and destroy the probe before existential loss has been caused.
Joseph_Chu @ 2025-06-18T19:39 (+3)
The universe is already 13.8 billion years old. Assuming that our world is roughly representative of how long it takes for a civilization to spring up after a planet forms (4.5 billion years), there have been about 9 billion years during which other, more advanced civilizations could develop. Assuming it takes something like 100 million years to colonize an entire galaxy, one would already expect to see aliens having colonized the Milky Way, or having initiated at least one of the existential risks that you describe. The fact that we are still here is anthropic evidence for either being alone in the galaxy somehow, the existential risks being overblown, or, more likely, there already being some kind of benign aliens in our neighbourhood who for whatever reason are leaving Earth alone (to our knowledge anyway) and probably protecting the galaxy from those existential risks.
(Note: I'm aware of the Grabby Aliens theory, but I still think that even if we are early, we are unlikely to be the very first civilization out there.)
Keep in mind, the most advanced aliens are likely BILLIONS of years ahead of us in development. They're likely unfathomably powerful. If we exist and they exist, they're probably also wise and benevolent in ways we don't understand (or else we wouldn't be here living what seem like net positive lives). Maybe there exist strong game-theoretic proofs that we don't yet know, for cooperation and benevolence, that ensure any rational civilization or superintelligence will have strong reasons to cooperate at a distance and not initiate galaxy-killing existential risks. Maybe those big voids between galaxies are where not-so-benign civilizations sprouted and galaxy-killing existential risks occurred.
Though, it could also be that time travellers / simulators / some other sci-fi-ish entities "govern" the galaxy. Like, perhaps humans are the first civilization to develop time travel and so use their temporal supremacy to ensure the galaxy is ripe for human civilization alone, which could explain the Fermi Paradox?
All this is, of course, wild speculation. These kinds of conjectures are very hard to ground in anything other than speculation.
Anyways, I also found your post very interesting, but I'm not sure if any of these galactic level existential risks are tractable in any meaningful way at our current level of development. Maybe we should take things one step at a time?
JordanStone @ 2025-06-18T20:48 (+3)
Hi Joseph, glad you found the post interesting :)
Yeah, for "the way forward" section I explicitly assume that alien civilisations have not already developed. This might be wrong, I don't know. One possible argument in line with my reasoning around galactic x-risks is that aliens don't exist because of the anthropic principle: if they had already emerged then we would have been killed a long time ago, so if we exist then it's impossible for alien civilisations to have emerged already. No alien civilisations exist for the same reason that the fine structure constant allows biochemistry.
I'm not sure if any of these galactic level existential risks are tractable in any meaningful way at our current level of development. Maybe we should take things one step at a time?
I totally agree with this statement. I have huge uncertainty about what awaits us in the long-term future (in the post I compared myself to an Ancient Roman trying to predict AI alignment risks). But it seems that, since the universe is currently conducive to life, the unknowns may be more likely to end us than help us. So the main practical suggestion I have is that we take things one step at a time and hold off on interstellar travel (which could plausibly occur in the next few decades) until we know more about galactic x-risks and galactic governance.
It's not necessarily that these galactic-scale considerations will happen soon or are tractable, but that we might begin a series of events (i.e., the creation of self-propagating spacefaring civilisation) that interferes with the best possible strategy for avoiding them in the long-term future. I don't claim to know what that solution is, but I suggest some requirements a governance system may have to meet.
Joseph_Chu @ 2025-06-24T15:29 (+1)
I'm not sure I agree that the Anthropic Principle applies here. It would if ALL alien civilizations are guaranteed to be hostile and expansionist (i.e. grabby aliens), but I think there's room in the universe for many possible kinds of alien civilizations, and so if we allow that some but not all aliens are hostile expansionists, then there might be pockets of the universe where an advanced alien civilization quietly stewards their region. You could call them the "Gardeners". It's possible that even if we can't exist in a region with Grabby Aliens, we could still either exist in an empty region with no aliens, or a region with Gardeners.
Also, realistically, if you assume that the reach of an alien civilization spreads at the speed of light, but the effective expansion rate is much slower due to not needing the space until it's already filled up with population and megastructures, it's very possible that we might be within the reach of advanced aliens who just haven't expanded that far yet. Naturally occurring life might be rare enough that they might see value in not destroying or colonizing such planets, say, seeing us as a scientifically valuable natural experiment, like the Galapagos were to Darwin.
So, I think there's reasons why advanced aliens aren't necessarily mutually exclusive with our survival, as the Anthropic Principle would require.
Granted, I don't know which of empty space, Gardeners, or late expanders is more likely, and I would hesitate to assign probabilities to them.
JordanStone @ 2025-06-25T13:18 (+1)
I'm not sure I agree that the Anthropic Principle applies here. It would if ALL alien civilizations are guaranteed to be hostile and expansionist
I'd be interested to hear why you think this. I think that based on the reasoning in my post, all it takes is one alien civilization to emerge that would initiate a galactic x-risk, maybe because they accidentally create astronomical suffering and want to end it, they are hostile, would prefer different physics for some reason, or are just irresponsible.
Joseph_Chu @ 2025-06-25T13:47 (+3)
Most of the galactic x-risks should be limited by the speed of light (because causality is limited by the speed of light), and would, if initiated, probably expand like a bubble from their source, propagating outward at the speed of light. Thus, assuming a reasonably random distribution of alien civilizations, there should be regions of the universe that are currently unaffected by any alien civilization having caused a galactic x-risk. We are most probably in such a region, otherwise we would not exist. So, yes, the Anthropic Principle applies in the sense that we eliminate a possibility (x-risk-causing aliens nearby), but we don't eliminate all the other possibilities (being alone in the region, or non-x-risk-causing aliens nearby), which is what I mean. I should have explained that better.
Also, the reality is that our long-term future is limited by the eventual heat death of the universe anyway (we will eventually run out of usable energy), so there is no way for our civilization to last forever (short of some hypothetical time travel shenanigans). We can at best delay the inevitable, and maximize the flourishing that occurs over spacetime.
JordanStone @ 2025-06-20T18:10 (+2)
Edit: changed the title from "Galactic x-risks: Obstacles to accessing the cosmic Endowment" to "Interstellar travel will probably doom the long-term future". I wanted to highlight the main practical suggestion from the post :)
Beyond Singularity @ 2025-06-26T15:26 (+1)
Thanks for this systematic exploration of galaxy-scale existential risks!
In my recent post, "Beyond Short-Termism: How δ and w Can Realign AI with Our Values," I propose turning exactly these challenging long-term, large-scale ethical tradeoffs into two clear and manageable parameters:
- δ — how far into the future our decisions explicitly care about.
- w — how broadly our moral concern extends (e.g., future human colonies, alien life, sentient AI).
From this viewpoint, your argument becomes even clearer: interstellar expansion rapidly increases the number of independent actors (large w), which raises cumulative existential risks beyond our ability to manage uncertainties (δ × certainty).
Within the δ and w framework, there are concrete governance "knobs" to address exactly the risks you highlight:
- Limit the planning horizon (T_max) via constitutional rules.
- Only permit expansion once risk-estimates (certainty thresholds) are high enough.
- Explicitly ensure that long-term moral value (δ × w) is demonstrably positive before taking action.
I'd be very curious if you (or other readers) find that framing the dilemma explicitly in terms of these two parameters makes it easier to prioritize governance and research, or if you see gaps I'm overlooking.
Here's the link if you'd like to explore further—examples and details inside.
Ronen Bar @ 2025-06-29T06:34 (+1)
I agree that when thinking about the long-term future we need better, more robust moral frameworks, and that time and moral-circle expansion are crucial factors that have to be taken into consideration.