War in space, whether civilizations age, and the best things possible in our universe (Anders Sandberg on the 80,000 Hours Podcast)

By 80000_Hours @ 2023-10-09T14:03 (+10)

We just published an interview: Anders Sandberg on war in space, whether civilizations age, and the best things possible in our universe. You can click through for the audio, a full transcript, and related links. Below are the episode summary and some key excerpts.

Episode summary

Now, the really interesting question is: How much is there an attacker-versus-defender advantage in this kind of advanced future?

Right now, if somebody’s sitting on Mars and you’re going to war against them, it’s very hard to hit them. You don’t have a weapon that can hit them very well. But in theory, if you fire a missile, after a few months, it’s going to arrive and maybe hit them, but they have a few months to move away. Distance actually makes you safer: if you spread out in space, it’s actually very hard to hit you.

So it seems like you get a defence-dominant situation if you spread out sufficiently far. But if you’re in Earth orbit, everything is close, and the lasers and missiles and the debris are a terrible danger, and everything is moving very fast.

So my general conclusion has been that war looks unlikely on some size scales but not on others.

- Anders Sandberg

In today’s episode, host Rob Wiblin speaks with repeat guest and audience favourite Anders Sandberg about the most impressive things that could be achieved in our universe given the laws of physics.

They cover:

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Highlights

Potential amazing futures

Anders Sandberg: One amazing future is humanity gets its act together. It solves existential risk, develops molecular nanotechnology and atomically precise manufacturing, masters biotechnology, and becomes sustainable: it turns half of the planet into a wilderness preserve that can evolve on its own, keeping to the other half, where you have high material standards in a totally sustainable way that can keep on going essentially as long as the biosphere does. And long before that, of course, people start taking steps to maintain the biosphere by putting up a solar shield, et cetera. And others, of course, go off — first settling the solar system, then other solar systems, then other galaxies — building this super-civilisation in the nearby part of the universe that can keep together against the expansion of the universe, while others go off to really far corners so you can be totally safe that intelligence and consciousness remain somewhere, and they might even try different social experiments.

But you could imagine another future: In the near future, we develop ways of doing brain emulation and we turn ourselves into a software species. Maybe not everybody; there are going to be stragglers who are going to maintain the biosphere on the Earth and frown at those crazies who in some sense committed suicide by becoming software. The software people are, of course, just going to be smiling at them, thinking, “We’ve got the good deal. We’ve got this infinite space we can define endlessly.”

And quite soon they realise they need more compute, so they turn a few other planets of the solar system into computing centres. But much of the cultural development happens in virtual space, and if that doesn’t need to expand too much, you might actually end up with a very small and portable humanity. I did a calculation some years ago that if you covered part of the Sahara Desert with solar panels and used quantum dot cellular automaton computing, you could keep mankind in an uploaded form running there indefinitely, with a rather minimal impact on the biosphere. So in that case, maybe the future of humanity is instead going to be a little black square on a continent, not making much fuss in the outside universe.

The thing that interests me is that I like open-ended futures. I think it’s kind of worrisome if you come up with an idea of a future that is so perfected that it requires everybody to do the same thing. That is pretty unlikely, given how we are organised as people right now, and systems that force us to do the same thing are terrifyingly dangerous. It might be a useful thing to have a singleton system that somehow keeps us from committing existential risk suicide, but if that impairs our autonomy, we might actually have lost quite a lot of value. It might still be worth it, but you need to think carefully about the tradeoff. And if its values are bad, even if it’s just subtly bad, that might mean that we lose most of the future.

I also think that there might be really weird futures that we can’t think well about. Right now we have certain things that we value and evaluate as important and good: we think about the good life, we think about pleasure, we think about justice. We have a whole set of things that are very dependent on our kind of brains. Those brains didn’t exist a few million years ago. You could make an argument that some higher apes actually have a bit of a primitive sense of justice. They get very annoyed when there is unfair treatment. But as you go back in time, you find simpler and simpler organisms and there is less and less of these moral values. There might still be pleasure and pain. So it might very well be that the fishes swimming around the oceans during the Silurian already had values and disvalues. But go back another few hundred million years and there might not even have been that. There was still life, which might have some intrinsic value, but much less of it.

What I’m getting at with this is that value might have emerged in a stepwise way: We started with plasma near the Big Bang, and then eventually got systems that might have intrinsic value because of complex life, and then maybe systems that get intrinsic value because they have consciousness and qualia, and maybe another step where we get justice and thinking about moral stuff. Why does this process stop with us? It might very well be that there are more kinds of value waiting in the wings, so to say, if we get brains and systems that can handle them.

That would suggest that maybe in 100 million years we find the next level of value, and that’s actually way more important than the previous ones all taken together. And it might not end with that mysterious whatever value it is: there might be other things that are even more important waiting to be discovered. So this raises the disturbing question that we actually have no clue how the universe ought to be organised to maximise value or do the right thing, whatever it is, because we might be too early on. We might be like a primordial slime thinking that photosynthesis is the biggest value there is, and totally unaware that there could be things like awareness.

The (far) future of war

Anders Sandberg: Why does anybody go to war? There is actually a serious debate about the rationality of war and there is serious disagreement about the motives. It’s a bit unclear to me whether you would see advanced civilisations going to war. I think you sometimes can sketch out possibilities. You could imagine the radical negative utilitarian civilisation not wanting that other civilisation to have a lot of resources because they are actually causing pain and suffering, even though they are saying that on average they’re making things better. They would have a reason to try to remove resources from that pain-inducing civilisation, and they would make very bad neighbours.

Now, the really interesting question is: How much is there an attacker-versus-defender advantage in this kind of advanced future? Right now, if somebody’s sitting on Mars and you’re going to war against them, it’s very hard to hit them. You don’t have a weapon that can hit them very well. But in theory, if you fire a missile, after a few months, it’s going to arrive and maybe hit them, but they have a few months to move away. Distance actually makes you safer: if you spread out in space, it’s actually very hard to hit you. So it seems like you get a defence-dominant situation if you spread out sufficiently far. But if you’re in Earth orbit, everything is close, and the lasers and missiles and the debris are a terrible danger, and everything is moving very fast.

So my general conclusion has been that war looks unlikely on some size scales but not on others. It might be that as you move out into space, it becomes at first much harder. But then you learn how to move better over interstellar distances, which means that each solar system is actually easily accessible. And it’s hard for a single solar system to contain several parties that fight each other. Once you reach the galactic scale, it might again take so much time to set up a conflict that war becomes impractical. But again, it might vary. It’s very unclear, and it actually depends partially on physics.

On the largest scales, the universe looks very defence-dominant simply because everything is moving slowly apart from each other, so you can’t even send light signals telling other parts of your civilisation, “We declared war on the Zorgons.” So it might be that the universe at the very largest scale is very peaceful. Even a doomsday weapon like false vacuum decay is only a local problem; it cannot actually destroy everything simply because everything is expanding apart.

Black hole power

Rob Wiblin: After the era of stars — the Stelliferous Era, as it’s called — so after most of the stars are burned out, and the universe is kind of getting very cold, what options remain for extracting lots of energy to do things?

Anders Sandberg: At that point, there is still a fair bit of fusion energy you could get, because there are a lot of brown dwarfs that are still hanging around. They just were too light to ever turn into a star. So in theory, you could mine them for hydrogen and burn that if you have a fusion reactor.

The funny thing is that, in the really long run, brown dwarfs are also randomly bumping into each other and occasionally forming little red dwarf stars. That’s a very inefficient process, but over very long time periods it actually does happen. But I think intelligent life would not be patient enough for that.

So what you probably want to do is burn the fusible elements, either in your fusion reactor or by dropping them on top of, for example, a white dwarf star or a neutron star. This has a bit of a limit, because once you add enough, the white dwarf star collapses gravitationally and turns into a supernova. So there is that slight environmental problem.

The best method, in my opinion, is to use black holes. I’m very fond of black hole power. And I am assuming that maybe in a few trillion years I’m going to be dealing with protesters saying, “No black holes in our neighbourhood,” and “Don’t build that power plant, Anders.” But they’re actually lovely. Black holes have accretion disks when they suck in matter. Or rather, it’s not that they suck in matter — that’s kind of a picture we get from science fiction — they’re just an object with gravity like anything else. But what happens when you put a lot of junk around a black hole? It forms a disk, and the friction between parts of the disk heats up the matter. That means it radiates away energy and gets more tightly bound and slowly spirals in. There is also some angular momentum leaking out at the sides where some dust gets thrown off.

The effect of this is that the potential energy of that junk — and it can be anything: burnt-out stars, old cars, old space probes, planets you don’t care for, et cetera — gets ground down, and the potential energy gets released as radiation. So now you can build a Dyson sphere, a very big one, around this whole system, and get all of that energy.

How much of the total mass-energy can you get? It turns out it’s almost up to 40% for a rapidly spinning black hole. The exact limit depends on where the inner edge of the accretion disk is, because eventually you get close enough that you essentially fall straight in without releasing any more energy, and the rest gets trapped inside the black hole. Now, converting 40% of the mass-energy of old cars and space probes into energy is kind of astonishing: that is way more effective than fusion. So actually, the stars might not be the biggest energy source around. We might actually be able to make the galaxies shine much more if we dump things into black holes and gather that energy.
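As a rough back-of-the-envelope check on that comparison, here is a minimal sketch. The ~0.7% fusion efficiency and the one-tonne example mass are illustrative assumptions, not figures from the episode; only the ~40% accretion figure comes from Anders.

```python
# Rough comparison of mass-to-energy conversion efficiencies,
# using E = efficiency * m * c^2.
C = 299_792_458.0  # speed of light, m/s

def extractable_energy(mass_kg: float, efficiency: float) -> float:
    """Energy (joules) released when a fraction of the rest-mass energy is radiated."""
    return efficiency * mass_kg * C**2

mass = 1_000.0  # e.g. a one-tonne derelict space probe (assumed)

fusion = extractable_energy(mass, 0.007)     # H -> He fusion: ~0.7% of rest mass
accretion = extractable_energy(mass, 0.40)   # disk around a fast-spinning black hole: ~40%

print(f"fusion:    {fusion:.2e} J")
print(f"accretion: {accretion:.2e} J")
print(f"ratio:     {accretion / fusion:.0f}x")  # accretion wins by a large factor
```

On these numbers, dropping junk into a rapidly spinning black hole yields roughly 57 times more energy per kilogram than fusing it, which is the sense in which black holes could outshine the stars.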

Grabby aliens

Rob Wiblin: If life spreads through the universe, the overwhelming majority of beings will live in this totally different world very far in the future — well, not necessarily that far in the future, but in a world where complex life is spread across most of the accessible universe. So our position will seem shockingly strange and really early. What’s your favourite explanation for how it is that we find ourselves in this unusual and, in some sense, arguably kind of privileged position?

Anders Sandberg: I think this is a very important and tricky question. It’s also worth noticing that the Stelliferous Era, where there are stars, is going to last maybe 10 to 100 trillion years — and we are in the first 13 billion years. Again, what’s going on here? Why are we really early? There I think you can make an argument that most of the biosphere years you could imagine in the future are going to be around little red dwarf stars that might not be as habitable as we currently think they could be. So maybe actually we are close to peak habitability for organic life in the universe, and we shouldn’t be too surprised about that.

But still, if technological civilisation spreads, then of course those red dwarf stars are going to be totally good real estate. And you could argue that maybe this is evidence that actually nobody’s going to spread across the universe. Actually, this is it. This early part of the Stelliferous Era is where intelligence shows up, and maybe you can’t spread for some weird reason across the universe.

But another interesting answer, which I’m rather fond of, is Robin Hanson’s grabby aliens idea. I’m particularly fond of it because I almost had the idea but didn’t. I had all the pieces — I have a chapter in the book where I’m talking about alien intelligence, various explanations, expansion patterns, and all of that — I had all the pieces laid out in front of me. But Robin actually was the one putting it together, and said if civilisations start spreading out, presumably in the areas where they have spread, new intelligent species don’t arise. It’s just going to be whoever had gone there, and whatever they do. We are not in one of those zones.

Now, if you look at the history of the universe, you have this kind of phase transition of a universe with no intelligent life spreading; a relatively short period where there is a fair bit of intelligent life in transit, expanding out; and then eventually they meet each other and all parts of space are now settled. That means that we are in this kind of weird position that we’re quite close to that limit. And if there are many hard evolutionary transitions to get to intelligence, you should expect intelligence to show up as late as possible in the history of a biosphere. I have some papers to that effect, so I’m totally in agreement with this.

In that case, we should expect to be relatively close to this transition. This transition is still probably billions of years long, so we’re talking astronomical timescales. But I like the grabby aliens argument because it both explains why we haven’t seen any aliens — the aliens that are quiet are hard to see, they’re not expanding, they’re just sitting there enjoying life; and the expansive ones, we haven’t met with them yet because we just started expanding about this time, and we might start noticing them in a billion years or so when we might also be expanding — and this also explains why we are around now.

It still has this big problem: Why aren’t we part of some posthuman super-civilisation, once we contact the grabby aliens in a few billion years? And maybe the answer is that we all form one big group intellect, and out of the trillion human beings that ever existed, the group intellect that exists forever after that time counts as just one of us. The probability of being the group intellect is one in a trillion. So we find ourselves among the more normal, boring humans before contact. That might be an explanation, although I’m not convinced by it.

Chances that aliens have actually visited Earth

Anders Sandberg: I looked a little bit into it, and I’m not particularly convinced. So, UAPs: Why are we seeing these blurry, weird things? There could be a lot of different reasons for that, and people immediately latch on to one possible explanation: It’s aliens. Why aren’t they talking about angels, or superintelligent squid from the bottom of the ocean? There is a very long list of possible explanations, including the super boring: there are optical effects in the complex lens systems on modern warplanes.

In some cases, footage of UAPs has turned out to have very weird natural explanations. In one case, it was a Batman-logo-shaped balloon up among the clouds. What’s the probability of even seeing that from a plane? It’s kind of low. There is a lot of strange random stuff. So when you see something strange, you need to update your beliefs. And if you try to be a good Bayesian about it, you need to check which hypotheses the observation is compatible with. If I see a blurry spot of light moving very fast, it fits with aliens having a super-advanced spacecraft, but it also fits quite well with some weird problem with my optics — as well as a long list of other weird possibilities, ranging from the squid to my actually hallucinating.
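The Bayesian point can be made concrete with a toy update. All the priors and likelihoods below are invented for illustration (they are not estimates from the episode); the lesson is only that an observation most hypotheses predict about equally well barely moves the posterior.

```python
# Toy Bayes update: a blurry fast-moving light is weak evidence for aliens,
# because many competing hypotheses also predict it. Numbers are illustrative.
priors = {
    "alien craft":      0.001,
    "optical artifact": 0.499,
    "balloon/debris":   0.400,
    "hallucination":    0.100,
}
# P(blurry fast-moving light | hypothesis): most hypotheses fit the data well.
likelihoods = {
    "alien craft":      0.9,
    "optical artifact": 0.8,
    "balloon/debris":   0.5,
    "hallucination":    0.7,
}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

for h, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{h:16s} {p:.4f}")
```

Even with a generous likelihood for aliens, the posterior on "alien craft" stays near its tiny prior; the mundane hypotheses dominate. A little green man on the lawn, by contrast, would be an observation with a very low likelihood under the mundane hypotheses, which is exactly why it would shift the posterior so much.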

Now, if I see a little green man on my lawn telling me, “Take me to your leader,” suddenly a lot of those other explanations go away. Not all of them. The probability of me going crazy is still embarrassingly high. So I should probably ask my friends, “Do you see that little green guy too?” And if they all agree, then the probability of all of us going crazy simultaneously is low. There is still some possibility of a prank or something, but you need rather specific evidence. Seeing weird things moving around doesn’t tell us very much. And I think, unfortunately, we latch on to this explanation.

As for the fact that there are hearings, and surprisingly credible sources saying this: I think those credible sources are an interesting thing to check. How likely is it that they know what they’re talking about? Because there has been a lot of very crazy stuff going on in the US intelligence and military establishment too, driven by people with various bees in their bonnets about particular threats.

So I’m not terribly convinced by this. The really interesting issue is, of course, it’s still not implausible that advanced civilisations exist. And if they wanted to hide, could they hide from us? And I think if you’re an advanced [enough] civilisation and have your act together, you could hide really well. So in that case, why would we be seeing blurry things moving around? On the other hand, you could also imagine that maybe you had an advanced civilisation, but there are teenagers taking the saucer out for a spin — and they are trying to keep a non-interference activity going, but there are these people messing around, which would of course also explain a lot of the stupidities with many of these UAP observations.

But I don’t think that sounds super plausible, actually. I think it’s a bit more binary than that. Still, I think it’s worth recognising that the world is strange, and full of a lot of unlikely things. The bigger our world gets, the more simply unbelievable things will keep showing up out of sheer randomness. So it’s going to be hard to filter all of this.

The lifespan of civilisations

Rob Wiblin: A listener wrote in with another question that’s a bit related to the book: whether civilisations eventually decay and become more likely to break apart over time. “I saw that you’d published a book chapter titled ‘The lifespan of civilizations: Do societies “age,” or is collapse just bad luck?’ but I couldn’t get the book. What’s the answer? Do societies get more likely to collapse the longer they last for?”

Anders Sandberg: I don’t think so. And that is actually the point of that chapter, which is a spinoff from my big book, because when I was going through the calculations of how to move galaxies and do all of this stuff, I realised that maybe the big limitation here is not physics, but society. If you need to have a project team that keeps the move of a galaxy going for a billion years, how likely is that to last? I mean, most organisations don’t last very long in the present.

And indeed, if civilisations inexorably collapse after a while because they age and become decadent, then maybe that is the fundamental limitation on how grand a future we could possibly have. So I started reading macrohistory, and realised macrohistorians make very compelling stories about why civilisations rise and fall and why history has a certain shape, but they’re all different and they’re all kind of contradictory. So I became a bit nervous about trusting any of them.

So then I just took a lot of data and started doing curve fitting to see the survival curves. And the best fit I could find for civilisations was exponential decay. There is a kind of time constant for how long a civilisation is likely to be around: a kind of half-life for civilisations. But the risk of a civilisation collapsing doesn’t seem to increase with time, which is the important part. If there were some kind of decadence building up, or maybe some environmental debt or something else, then you would expect that over time a civilisation became more likely to crash.

Or there might be some childhood disease of civilisations: when they first show up, they have a high likelihood of crashing. We don’t see that. That might partially be a selection bias: we don’t count things that crashed immediately as civilisations. But this seems to apply also to other forms of polities, like kingdoms in Europe and various political states. In the case of corporations, it’s well known that they also have a fairly constant hazard rate, except for the startup phase, where they’re very vulnerable. It’s fairly constant, except for the very oldest corporations in the world, which tend to be very stable: typically a Japanese inn at a hot spring or some brewery, exploiting a resource that people will always want.

So using this data, my conclusion is that civilisations probably collapse because of bad luck, rather than because something bad is building up. Now, that is still an interesting open question: Why do we have this bad luck? Is it just that very unlikely events conspire to bring things down, or is there something intrinsic? And even worse: bad luck is of course rather hard to defend against. You can imagine a Dyson sphere covered with rabbits’ feet and horseshoes, hoping to ward off bad luck. But that’s unlikely to work. Probably the best way of warding off bad luck is having multiple copies, having backup civilisations — and if one crashes, the other ones shake their heads, pick up the pieces, and resettle that part of space.
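The constant-hazard pattern Anders describes can be sketched numerically. This is an illustrative simulation with an invented collapse rate, not the actual dataset from the chapter: exponentially distributed lifespans are "memoryless," so the chance of collapse in the next window is the same however long a civilisation has already survived.

```python
# Sketch: if civilisation lifespans follow exponential decay, the hazard
# rate (probability of collapse per window, given survival so far) stays
# flat over time -- a half-life, but no "ageing". Illustrative numbers only.
import math
import random

random.seed(0)
RATE = 0.002  # assumed collapse probability per year (invented)
lifespans = [random.expovariate(RATE) for _ in range(100_000)]

half_life = math.log(2) / RATE
print(f"half-life: {half_life:.0f} years")

# Empirical hazard in successive 200-year windows: roughly constant.
for start in (0, 200, 400, 600):
    at_risk = [t for t in lifespans if t >= start]
    died = sum(1 for t in at_risk if t < start + 200)
    print(f"years {start}-{start + 200}: hazard ~= {died / len(at_risk):.3f}")
```

An ageing civilisation would instead show the hazard climbing from window to window; the flat profile is the signature of "collapse as bad luck" that the curve fitting picked out.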

A lot of people talk about the flourishing of civilisation or a young civilisation or an old civilisation — and we quite often anthropomorphise societies and civilisations way more than is good. Rousseau was talking about “diseases of civilisation,” and he was literally thinking that some bad things in society were like a literal disease in the body of a civilisation. Once you start thinking like that, of course ageing seems to be reasonable. But it’s worth noting that a lot of multicellular life doesn’t age.

Humble futures vs. grand futures

Anders Sandberg: It’s very interesting to think about not grand futures, but humble futures — because a lot of people are totally cold to the idea of moving galaxies and having trillions of beings in some weird astronomical future. I usually express it like they want to have this nice little Cotswolds village — where their friends are playing cricket, they’re having tea with the vicar, and having sensible social relations with normal people. And yeah, it needs to be sustainable and peaceful and all of that, but you don’t need an entire galaxy to do that. I think there is a lot of truth to that. This is quite close to what most people think is a good life, and it’s certainly much easier to think about virtue ethics in that little British village, or whatever the Swedish or Chinese counterparts are.

The real question is, of course, would it be good to just have that? I tend to think that we are so uncertain about normativity that we should hedge our bets. I think it’s actually probably a better idea that some people are living in these nice little humble futures and others go off and terraform planets and build Dyson spheres and whatnot — because we might not know which one of these is the right one, but we might be able to get the right one by having a big palette of possibilities.

The real problem is when they impinge on each other: the nice little village might not want their night sky scarred by having megastructures flying around there, so there might have to be some deal about leaving the sky dark, et cetera. There are some people who are very upset that anybody in the world might be having fun in the way they morally disapprove of. So they have nosy preferences, and they’re of course going to be very annoying neighbours. And we need to resolve these kinds of problems.

That gets into this issue of how you make a cosmopolitan ethics, especially if humanity becomes much more diverse. But I’m kind of cheered by the fact that the Amish seem to be doing pretty well. They are living, in some sense, in a humble world, deliberately making a humble society, but it’s also being protected by one of the least humble societies you can possibly imagine: the United States. And they have the right kind of relationship to the outside. Over the decades there have been interesting discussions, both about how to prevent too many young people from going off into the sinful outer world and deciding it’s actually quite wonderful, and about setting things up so the two communities can maintain each other. And it works, partially because the values of the United States, the rights catalogued in its laws, and the rule of law can act to protect it. You can maintain humility and a humble future inside something much more grand.

I guess this might also be the solution for how to get virtue in these grand futures: it might actually start out in small nuclei. You don’t want to go and maximise the universe from the start: you want to ensure that these nuclei of virtue, if they’re really good and attractive, can expand, rather than first optimising everything for them.

So this gets to one of my big things, and that is we need to have an open future. Existential risk is an ultimate closed future. It’s the end of history. But you can also imagine futures that are too limited, where there are too few possibilities and certain choices and options are not there. And I think we need to safeguard against those, even if they’re otherwise pretty nice futures.


Will Aldred @ 2023-12-12T12:08 (+2)

I notice I’m confused by what Anders says about the offence-defence balance.

The argument, as I understand it, is that in the far future there’ll be a lot of space—lightyears, perhaps—between warring factions/civilizations. Offensive attacks therefore won’t work well because, with all the distance the offensive weapons need to cover, the defenders will have plenty of time to block or move out of the way.

But… this relies on the defenders seeing the weapons approaching, no? And I would expect weapons of the far future to travel at or very close to the speed of light,[1] making it impossible to see them coming until they’ve already hit you. (Which would mean that the balance favours offence, not defence.)

This seems like a basic enough point, though, that I’m sure it’s part of Anders’ thinking already; I expect I’m missing something.

  1. ^

    e.g., high-powered lasers, other types of directed-energy weapons, projectiles accelerated via thermonuclear reaction, pion drive, or artificial black hole

ElliotJDavies @ 2023-10-23T18:02 (+2)

I thoroughly enjoyed this episode. I am not always sympathetic to tech-utopianism, as I feel enthusiasts don't always "read the room" regarding all of the challenges and suffering that are currently present. But I was impressed by how thoughtful, considerate and elucidating Anders was throughout.