JordanStone's Quick takes

By JordanStone @ 2023-09-19T12:50 (+1)

JordanStone @ 2025-06-02T10:20 (+16)

Elon Musk recently presented SpaceX's roadmap for establishing a self-sustaining civilisation on Mars (by 2033 lol). Aside from the timeline, I think there may be some important questions to consider with regard to space colonisation and s-risks: 

  1. In a galactic civilisation of thousands of independent and technologically advanced colonies, what is the probability that one of those colonies will create trillions of suffering digital sentient beings? (probably near 100% if digital sentience is possible… it only takes one)
  2. Is it possible to create a governance structure that would prevent any person in a whole galactic civilisation from creating digital sentience capable of suffering? (sounds really hard, especially given the huge distances and potential time delays in messaging… no idea)
  3. What is the point of no return, where a domino is knocked over that inevitably leads to self-perpetuating human expansion and the creation of a galactic civilisation? (somewhere around a self-sustaining civilisation on Mars, I think). 

If the answer to question 3 is "Mars colony", then it's possible that creating a colony on Mars is a huge s-risk if we don't first answer question 2. 

Would appreciate some thoughts. 

 

Stuart Armstrong and Anders Sandberg's article on expanding rapidly throughout the galaxy and Charlie Stross' blog post about griefers influenced this quick take.

Birk Källberg 🔸 @ 2025-06-28T10:16 (+4)

Interesting ideas! I've read your post Interstellar travel will probably doom the long-term future with enthusiasm and have had similar concerns for some years now. Regarding your questions, here are my thoughts:

  1. Probability of s-risk: I agree that in a sufficiently large space civilization (one not controlled by your Governance Structure), the probability of s-risk is almost 100%, and not just from digital minds. Let's unpack this: our galaxy has roughly 200 billion stars (2*10^11), which means at least 10^10 viable, settleable star systems. A Dyson swarm around a Sun-like star could conservatively support 10^20 biological humans (today we are 10^10, a number extrapolated from how much sunlight is needed to sustain one human with conventional farming).

80k defines an s-risk as "something causing vastly more suffering than has existed on Earth so far". This could easily be "achieved" even without digital minds if just one colony out of the 10^10 decides it wants lots of wildlife preserves and its Dyson swarm consists mostly of those. With around 10^10 times more living area than Earth, and as many more wild animals, one year around that star would see cumulative suffering exceeding the total from all of Earth's history (only ~1 billion (10^9) years of animal life).

This would not necessarily mean the whole galactic civilization was morally net bad. A galaxy with 10,000 hellish star systems, 10 million heavenly systems and 10 billion rather normal but good systems would still be a pretty awesome future from a total-utility standpoint. My point is that an s-risk defined in terms of Earth's suffering becomes an increasingly low bar to cross the larger your civilization gets. At some point you'd need insanely good "quality control" in every corner of your civilization, analogous to ensuring that every single one of the 10^10 humans on Earth today is happy and never gets hurt even once. That seems like too high a standard for how good the future should go.
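A minimal Python sketch of that comparison (the population and timescale figures are the assumptions above, not established estimates):

```python
# Rough check: one year of a wildlife-preserve Dyson swarm vs. all of
# Earth's animal history. All figures are the comment's own assumptions.

EARTH_ANIMAL_HISTORY_YEARS = 1e9  # ~10^9 years of animal life on Earth
SWARM_AREA_MULTIPLIER = 1e10      # swarm has ~10^10x Earth's living area

# Treat cumulative suffering as proportional to
# (Earth-equivalents of wildlife) x (years elapsed).
one_swarm_year = SWARM_AREA_MULTIPLIER * 1          # 1e10 Earth-years
all_earth_history = 1 * EARTH_ANIMAL_HISTORY_YEARS  # 1e9 Earth-years

print(one_swarm_year / all_earth_history)  # 10.0: one swarm-year is ~10x
                                           # Earth's entire animal history
```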

But that nitpick aside, I currently expect that a space future without the kind of governance system you're describing still has a high chance of ending up net bad.

  2. How to create the Governance Structure (GS): Here is my idea of how this could look: a superintelligence (could also be post-human) creates countless identical but independent GS copies of itself that expand through the universe and accompany every settlement mission. Their detailed value system is made virtually unalterable, built to last for trillions of years. This, I think, is technically achievable: strong copy-error and damage protections, not updatable via new evidence, strongly defended against outside manipulation attacks. The GS copies largely act on their own in their respective star-system colonies but have protocols in place for coordinating loosely across star systems and millions of years. I think this could work a bit like an ant colony: lots of small, selfless agents locally interacting with one another, everyone having the exact same values and probably secure intra-hive communication methods; they could still mount an impressively coordinated galactic response to, say, a von Neumann probe invasion. I could expand further on this idea if you'd like.

  3. Point of no return: I'm unsure about this. Possible such points: a space race gets going in earnest (with geopolitical realities making a Long Reflection infeasible); the first ASI is created and it does not have the goal of preventing s- and x-risks; the first (self-sustaining) space colony gains political independence; the first interstellar mission (to create a colony) leaves the solar system; a sub-par, real-world implementation of the Governance Structure breaks down somewhere in human-settled space.

My current view is still that the two most impactful things (at the moment) are 1) ensuring that any ASI that gets developed is safe and benevolent, and 2) improving how global and space politics are conducted. Any specific "points of no return" seem to me very contingent on the exact circumstances at that point. Nevertheless, thinking ahead about which situations might be especially dangerous or crucial seems like a worthwhile pursuit to me.

JordanStone @ 2025-06-29T00:03 (+2)

Hi Birk. Thank you for your very in-depth response, I found it very interesting. That's pretty much how I imagined the governance system when I wrote the post. I actually had it as a description like that originally, but I hated the implications for liberalism, so I took a step back and listed requirements instead (which didn't actually help). 

The "points of no return" do seem quite contingent, and I'm always sceptical about the tractability of trying to prevent something from happening - usually my approach is: it's probably gonna happen, how do we prepare? But besides that, I'm going to look into more specific "points of no return" as there could be a needle hiding in the noodles somewhere. I feel like this is the kind of area where we could be missing something, e.g. the point of no return is really close, or there could be a tractable way to influence the implementation of that point of no return.

OllieBase @ 2025-06-06T09:20 (+2)

probably near 100% if digital sentience is possible… it only takes one


Can you expand on this? I guess the stipulation of thousands of advanced colonies does some of the work here, but this still seems overconfident to me given how little we understand about digital sentience.

JordanStone @ 2025-06-06T11:58 (+3)

Yeah sure, it's like the argument that if you get infinite chimpanzees and put them in front of typewriters, then one of them would write Shakespeare. If you have a galactic civilisation, it would be very dispersed and most likely each 'colony' occupying each solar system would govern itself independently. So they could be treated as independent actors sharing the same space, and there might be hundreds of millions of them. In that case, the probability that one of those millions of independent actors creates astronomical suffering becomes extremely high, near 100%. I used digital sentience as an example because it's the risk of astronomical suffering that I see as the most terrifying - like, IF digital sentience is possible, then the amount of suffering beings it would be possible to create could conceivably outweigh the value of a galactic civilisation. That 'IF' contains a lot of uncertainty on my part. 

But this also applies to tyrannous governments: how many of those independent civilisations across a galaxy will become tyrannous and cause great suffering to their inhabitants? How many of those civilisations will terraform other planets and start biospheres of suffering beings?

The same logic also applies to x-risks that affect a galactic civilisation:

all it takes is one civilization of alien ass-hat griefers who send out just one Von Neumann Probe programmed to replicate, build N-D lasers, and zap any planet showing signs of technological civilization, and the result is a galaxy sterile of interplanetary civilizations until the end of the stelliferous era (at which point, stars able to power an N-D laser will presumably become rare). (Charlie Stross)

Stopping these things from happening seems really hard. It's like a galactic civilisation needs to be designed right from the beginning to make sure that no future colony does this.

OllieBase @ 2025-06-06T12:12 (+3)

Thanks. In the original quick take, you wrote "thousands of independent and technologically advanced colonies", but here you write "hundreds of millions".

If you think there's a 1 in 10,000 or 1 in a million chance of any independent and technologically advanced colony creating astronomical suffering, it matters if there are thousands or millions of colonies. Maybe you think it's more like 1 in 100, and then thousands (or more) would make it extremely likely.
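As a minimal sketch of how sharply these two numbers interact, here is the "at least one" calculation in Python (the per-colony probabilities and colony counts are illustrative assumptions, not figures from the thread):

```python
# P(at least one of N independent colonies creates astronomical suffering),
# given a per-colony probability p. Values of p and N are illustrative.

def p_at_least_one(p_per_colony: float, n_colonies: int) -> float:
    """Chance that at least one of n independent colonies defects."""
    return 1 - (1 - p_per_colony) ** n_colonies

for p in (1e-2, 1e-4, 1e-6):
    for n in (1_000, 1_000_000, 100_000_000):
        print(f"p={p:g}, N={n:>11,}: {p_at_least_one(p, n):.6f}")
```

With p = 1 in 10,000, a thousand colonies give roughly a 10% chance, while a hundred million colonies make it a near certainty.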
 

JordanStone @ 2025-06-06T12:37 (+3)

Yeah that's true. 

I think 1000 is where I would start to get very worried intuitively, but there would be hundreds of millions of habitable planets in the Milky Way, so theoretically a galactic civilisation could have that many if it didn't kill itself before then. 

I guess the probability of one of these civilisations initiating an s-risk or galactic x-risk would just increase with the size of the galactic civilisation. So the more that humanity expands throughout the galaxy, the greater the risk.

JordanStone @ 2023-11-21T17:55 (+8)

I'm thinking about organising a seminar series on space and existential risk. Mostly because it's something I would really like to see. The webinar series would cover a wide range of topics:

I think this would be an online webinar series. Would this be something people would be interested in? 

JordanStone @ 2025-05-09T14:59 (+7)

Hey! I'm requesting some help with "Actions for Impact", a Notion page with activities people can get involved in that take less than 30 minutes and can contribute to EA cause areas. This includes signing petitions, emailing MPs, voting for effective charities in competitions, responding to 'calls for evidence', and sharing something online. EA UK has the Notion page linked on their website: https://www.effectivealtruism.uk/get-involved 

It should serve as a hub to leverage the size of the EA community when it's needed. 

I'm excited about the idea and I thought I'd have enough time to keep it updated and share it with organisations and people, but I really don't. If the idea sounds exciting and you have an hour or two per week spare please DM me, I'd really appreciate a couple of extra hands to get the ball rolling a bit more (especially if you have involvement in EA community building as I don't at all). 

JordanStone @ 2024-01-02T16:09 (+4)

I have written this post introducing space and existential risk and this post on cosmic threats, and I've come up with some ideas for stuff I could do that might be impactful. So, inspired by this post, I am sharing a list of ideas for impactful projects I could work on in the area of space and existential risk. If anyone working on anything related to impact evaluation, policy, or existential risk feels like ranking these in order of what sounds the most promising, please do that in the comments. It would be super useful! Thank you! :)

(a) Policy report on the role of the space community in tackling existential risk: Put together a team of people working in different areas related to space and existential risk (cosmic threats, international collaborations, nuclear weapons monitoring, etc.). Conduct research and come together to write a policy report with recommendations for international space organisations to help tackle existential risk more effectively. 

(b) Anthology of articles on space and existential risk: Ask researchers to write articles about topics related to space and existential risk and put them all together into an anthology. Publish it somewhere. 

(c) Webinar series on space and existential risk: Build a community of people in the space sector working on areas related to existential risk by organising a series of webinars. Each webinar will be available virtually.

(d) Series of EA forum posts on space and existential risk: This should help guide people to an impactful career in the space sector, build a community in EA, and better integrate space into the EA community. 

(e) Policy adaptation exercise SMPAG > AI safety: Use a mechanism mapping policy adaptation exercise to build on the success of the space sector in tackling asteroid impact risks (through the SMPAG) to figure out how organisations working on AI safety can be more effective. 

(f) White paper on Russia and international space organisations: Russia’s involvement in international space missions and organisations following its invasion of Ukraine could be a good case study for building robust international organisations. E.g. Russia was ousted from ESA, is still actively participating on the International Space Station, and is still a member of SMPAG but not participating. Figuring out why Russia stayed involved or didn’t with each organisation could be useful. 

(g) Organise an in-person event on impactful careers in the space sector: This would be aimed at effective altruists and would help gauge interest and provide value. 

David T @ 2024-01-06T23:45 (+1)

(d) might be interesting to read

The space industry is well-funded and already cares a lot about demonstrating impact (using a broader definition of impact than EA) to justify its funding, so (a)-(c) might be possible with industry support, and to some extent already exist. 

I think the overarching story behind (f) is relatively uncomplicated, particularly in the context of ongoing trade between Russia and Ukraine-supporters over oil etc.: Roscosmos continued to collaborate with NASA et al. on stuff like the ISS because agreements remained in place and were too critical to suspend. Russia was never actually part of ESA, and I suspect many people would have preferred it if Roscosmos had been kicked off projects like ExoMars earlier. It probably helps that the engineers and cosmonauts on both sides are likely a good deal more levelheaded than Dmitry Rogozin, but I don't think we'll hear what went on behind closed doors for a while...

JordanStone @ 2023-10-14T13:39 (+4)

Greetings! I'm a doctoral candidate and I have spent three years working as a freelance creator, specializing in crafting visual aids, particularly of a scientific nature. However, I'm enthusiastic about contributing my time to generate visuals that effectively support EA causes. 

Typically, my work involves producing diagrams for academic grant applications, academic publications, and presentations. Nevertheless, I'm open to assisting with outreach illustrations or social media visuals as well. If you find yourself in need of such assistance, please don't hesitate to get in touch! I'm happy to hop on a Zoom chat.

JordanStone @ 2023-09-28T21:05 (+4)

I am a researcher in the space community and I recently wrote a post introducing the links between outer space and existential risk. I'm thinking about developing this into a sequence of posts on the topic. I plan to cover:

  1. Cosmic threats - what are they, how are they currently managed, and what work is needed in this area. Cosmic threats include asteroid impacts, solar flares, supernovae, gamma-ray bursts, aliens, rogue planets, pulsar beams, and the Kessler Syndrome. I think it would be useful to provide a summary of how cosmic threats are handled, and determine their importance relative to other existential threats.
  2. Lessons learned from the space community. The space community has been very open with data sharing - the utility of this for tackling climate change, nuclear threats, ecological collapse, animal welfare, and global health and development cannot be overstated. I may include perspective shifts here, provided by views of Earth from above and the limitless potential that space shows us. 
  3. How to access the space community's expertise, technology, and resources to tackle existential threats. 
  4. The role of the space community in global politics. Space has a big role in preventing great power conflicts and building international institutions and connections. With the space community growing a lot recently, I'd like to provide a briefing on the role of space internationally to help people who are working on policy and war. 

Would a sequence of posts on space and existential risk be something that people would be interested in? (please agree- or disagree-vote this post) I haven't seen much about space on the forum (apart from on space governance), so it would be something new.

M_Allcock @ 2023-09-29T11:27 (+2)

Hey Jordan, I work in the space sector and I'm also based in London. I am currently working on a Government project assessing the impact of space weather on UK critical national infrastructure. I've written a little on the existential risk of space weather, too, e.g. https://forum.effectivealtruism.org/posts/9gjc4ok4GfwuyRASL/cosmic-rays-could-cause-major-electronic-disruption-and-pose

I'll message you as it would be good to connect!

JordanStone @ 2023-09-30T21:39 (+1)

Hi Matt. Sorry I missed your post and thanks for getting in touch! Your research sounds very interesting, I've messaged you directly :)

JordanStone @ 2026-01-12T17:51 (+3)

A thought I'm super sceptical of, that's probably highly intractable, and that I haven't done any research on: there seem to be a lot of reasons to think we might be living in a simulation besides just Nick Bostrom's simulation argument, like:

If I was pushed into a corner, I might say the probability we are living in a simulation is like 60%, where most evidence seems to point towards us being in a simulation. However, the doubt comes from the high probability that I'm just thinking about this all wrong - like, of course I can come up with a motivation for a simulation to explain any feature of the universe... it would be hard to find something that doesn't line up with an explanation that the simulators are just interested in that particular thing. But in any case, that's still a really high probability of everyone I love potentially not being sentient or even real (fingers crossed we're all in the simulation together). Also, being in a simulation would change our fundamental assumptions about the universe and life, and it would be really weird if that had no impact on moral decision-making. 

But everyone I talk to seems to have a relaxed approach to it, like it's impossible to make any progress on this and that it couldn't possibly be decision-relevant. But really, how many people have worked on figuring it out with a longtermist or EA mindset? Some reasons it might be decision-relevant:

Some questions I'd ask are: 

Overall, this does sound nuts to me and it probably shouldn't go further than this quick take, but I do feel like there could be something here, and it's probably worth a bit more attention than I think it has gotten (like 1 person doing a proper research project on it at least). Lots of other stuff sounded crazy but now has significant work and (arguably) great progress, like trying to help people billions of years in the future, working on problems associated with digital sentience, and addressing wild animal welfare. There could be something here and I'd be interested in hearing thoughts (especially a good counterargument to working on this so I don't have to think about it anymore) or learning about past efforts. 

Yarrow Bouchard 🔸 @ 2026-01-15T17:32 (+18)

All the things you mentioned aren’t uniquely evidence for the simulation hypothesis but are equally evidence for a number of other hypotheses, such as the existence of a supernatural, personal God who designed and created the universe. (There are endless variations on this hypothesis, and we could come up with endlessly more.)

The fine-tuning argument is a common argument for the existence of a supernatural, personal God. The appearance of fine-tuning supports this conclusion equally as well as it supports the simulation hypothesis.

Some young Earth creationists believe that dinosaur fossils and other evidence of an old Earth were intentionally put there by God to test people’s faith. You might also think that God tests our faith in other ways, or plays tricks, or gets easily bored, and creates the appearance of a long history or a distant future that isn’t really there. (I also think it’s just not true that this is the most interesting point in history.)

Similarly, the book of Genesis says that God created humans in his image. Maybe he didn’t create aliens with high-tech civilizations because he’s only interested in beings with high technology made in his image. 

It might not be God who is doing this, but in fact an evil demon, as Descartes famously discussed in his Meditations around 400 years ago. Or it could be some kind of trickster deity like Loki who is neither fully good nor fully evil. There are endless ideas that would slot in equally well to replace the simulation hypothesis.

You might think the simulation hypothesis is preferable because it’s a naturalistic hypothesis and these are supernatural hypotheses. But this is wrong: the simulation hypothesis is itself a supernatural hypothesis. If there are simulators, the reality they live in is stipulated to have different fundamental laws of nature, such as the laws of physics, than exist in what we perceive to be the universe. For example, in the simulators’ reality, maybe the fundamental relationship between consciousness and physical phenomena such as matter, energy, space, time, and physical forces is such that consciousness can directly, automatically shape physical phenomena to its will. If we observed this happening in our universe, we would describe this as magic or a miracle. 

Whether you call them "simulators" or "God" or an "evil demon" or "Loki", and whether you call it a "simulation" or an "illusion" or a "dream", these are just different surface-level labels for substantially the same idea. If you stipulate laws of nature radically other than the ones we believe we have, what you’re talking about is supernatural. 

If you try to assume that the physics and other laws of nature in the simulators’ reality are the same as in our perceived reality, then the simulation argument runs into a logical self-contradiction, as pointed out by the physicist Sean Carroll. Endlessly nested levels of simulation mean that computation in the original simulators’ reality will run out. Simulations at the bottom of the nested hierarchy, which don’t have enough computation to run still more simulations inside them, will outnumber higher-level simulations. The simulation argument’s key premise is that, in our perceived reality, we will be able to create simulations of worlds or universes filled with many digital minds; but if we are most likely in a bottom-level simulation, that is impossible, so the argument’s conclusion contradicts one of its premises.
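A toy count makes the shape of this argument concrete (branching factor, compute fractions, and the cutoff are all illustrative assumptions, not figures from Carroll):

```python
# Toy model: each simulation spends a fixed fraction of its compute on each
# child simulation, until too little compute remains to nest further.

BRANCHES = 3          # child simulations each level tries to run
CHILD_FRACTION = 0.1  # share of a simulation's compute given to each child
MIN_COMPUTE = 1e-4    # below this, a simulation can't host children

def count(compute: float) -> tuple[int, int]:
    """Return (simulations able to nest further, bottom-level simulations)."""
    if compute < MIN_COMPUTE:
        return 0, 1  # bottom of the hierarchy: no children possible
    nesting, bottom = 1, 0
    for _ in range(BRANCHES):
        n, b = count(compute * CHILD_FRACTION)
        nesting, bottom = nesting + n, bottom + b
    return nesting, bottom

print(count(1.0))  # (121, 243): bottom-level simulations, where making
                   # further simulations is impossible, outnumber the rest
```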

There are other strong reasons to reject the simulation argument. Remember that a key premise is that we ourselves or our descendants will want to make simulations. Really? They’ll want to simulate the Holocaust, malaria, tsunamis, cancer, cluster headaches, car crashes, sudden infant death syndrome, and Guantanamo Bay? Why? On our ethical views today, we would not see this as permissible, but rather the most grievous evil. Why would our descendants feel differently? 

Less strongly, computation is abundant in the universe but still finite. Why spend computation on creating digital minds inside simulations when there is always a trade-off between doing that and creating digital minds in our universe, i.e. the real world? If we or our descendants think marginally and hold as one of our highest goals to maximize the number of future lives with a good quality of life, using huge amounts of computation on simulations might be seen as going against that goal. Plus, there are endlessly more things we could do with our finite resource of computation, most of which we can’t imagine today. Where would creating simulations fall on the list? 

You can argue that creating simulations would be a small fraction of overall resources. I’m not sure that’s actually true; I haven’t done the math. But just because something is a small fraction of overall resources doesn’t mean it will likely be done. In an interstellar, transhumanist scenario, our descendants could create a diamond statue of Hatsune Miku the size of the solar system and this would take a tiny percentage of overall resources, but that doesn’t mean it will likely happen. The simulation argument specifically claims that making simulations of early 21st century Earth will interest our descendants more than alternative uses of resources. Why? Maybe they’ll be more interested in a million other things.

Overall, the simulation hypothesis is undisprovable but no more credible than an unlimited number of other undisprovable hypotheses. If something seems nuts, it probably is. Initially, you might not be able to point out the specific logical reasons it’s nuts. But that’s to be expected — the sort of paradoxes and thought experiments that get a lot of attention (that "go viral", so to speak) are the ones that are hard to immediately counterargue.

Philosophy is replete with oddball ideas that are hard to convincingly refute at first blush. The Chinese Room is a prime example. Another random example is the argument that utilitarianism is compatible with slavery. With enough time and attention, refutations may come. I don't think one's inability to immediately articulate the logical counterargument is a sign that an oddball idea is correct. It's just that thinking takes time and, usually, by the time an oddball idea reaches your desk, it's proven to be resistant to immediate refutation. So, trust that intuition that something is nuts. 

Joseph_Chu @ 2026-01-15T19:28 (+7)

Strong upvoted as that was possibly the most compelling rebuttal to the simulation argument I've seen in quite a while, which was refreshing for my peace of mind.

That being said, it mainly targets the idea of a large-scale simulation of our entire world. What about the possibility that the simulation is for a single entity and that the rest of the world is simulated at a lower fidelity? I had the thought that a way to potentially maximize future lives of good quality would be to contain each conscious life in a separate simulation where they live reasonably good lives catered to their preferences, with the apparent rest of the world being virtual. Granted, I doubt this conjecture because in my own opinion my life doesn't seem that great, but it seems plausible at least?

Also, that line about the diamond statue of Hatsune Miku was very, very amusing to this former otaku.

titotal @ 2026-01-12T21:04 (+8)

I would not describe the fine-tuning argument and the Fermi paradox as strong evidence in favour of the simulation hypothesis. I would instead say that they are open questions for which a lot of different explanations have been proposed, with the simulation hypothesis offering only one of many possible resolutions. 

As to the "importance" argument, we shouldn't count speculative future events as evidence of the importance of now. I would say the mid-20th century was more important than today, because that's the closest we ever got to nuclear annihilation (plus like, WW2). 

Joseph_Chu @ 2026-01-12T18:28 (+1)

I've thought about this a lot too. My general response is that it is very hard to see what one could do differently at a moment-to-moment level even if we were in a simulation. While it's possible that you or I are alone in the simulation, we can't, realistically, know this. We can't know with much certainty that the apparently sentient beings who share our world aren't actually sentient. And so, even if they are part of the simulation, we still have a moral duty to treat them well, on the chance they are capable of subjective experiences and can suffer or feel happiness (assuming you're a Utilitarian), or have rights/autonomy to be respected, etc.

We also have no idea who the simulators are and what purpose they have for the simulation. For all we know, we are a petri dish for some aliens, or a sitcom for our descendants, or a way for people's minds on colony ships travelling to distant galaxies to spend their time while in physical stasis. Odds are, if the simulators are real, they'll just make us forget if we ever figure it out, so they can keep the simulation going for whatever reasons they have.

Given all this, I don't see the point in trying to defy them or doing really anything differently than what you'd do if this was the ground truth reality. Trying to do something like attempting to escape the simulation would most likely fail AND risk getting you needlessly hurt in this world in the process.

If we're alone in the sim, then it doesn't matter what we do anyway, so I focus on the possibility that we aren't alone, and everything we do does, in fact, matter. Give it the benefit of the doubt.

At least, that's the way I see things right now. Your mileage may vary.

JordanStone @ 2024-04-08T08:31 (+3)

https://forum.effectivealtruism.org/events/cJnwCKtkNs6hc2MRp/panel-discussion-how-can-the-space-sector-overcome 

This event is now open to virtual attendees! It is happening today at 6:30PM BST. The discussion will focus on how the space sector can overcome international conflicts, inspired by the great power conflict and space governance 80K problem profiles. 

JordanStone @ 2023-10-06T08:02 (+2)

I searched Google for "gain of function UK" and the first hit was a petition to ban gain-of-function research in the UK that only got 106 signatures out of the 10,000 required. 

https://petition.parliament.uk/petitions/576773

How did this happen? Should we try again?