Short summary of What We Owe The Future
By finm @ 2023-02-12T16:27 (+29)
This is a linkpost to https://finmoorhouse.com/writing/wwotf-summary/
This is a linkpost for a summary of the book What We Owe The Future by William MacAskill. I've copied the ‘overall summary’ section, and the remainder of the linked post summarises individual chapters in more detail.
Although the text hews lazily close to the source in many cases, I take responsibility for any inaccuracies or misinterpretations — this isn’t an official summary or anything like that.
We live at a pivotal time. First, we might be living only at the very beginning of humanity’s entire history. Second, the entire future, and the fortunes of all the people who could live in it, could hinge on decisions we make today.
What We Owe The Future is about an idea called longtermism: the view that we should be doing much more to protect the interests of future generations. In other words, improving the prospects of all future people — over even millions of years — is a key moral priority of our time.
From this perspective, we shouldn’t only focus on reversing climate change or ending pandemics. We should try to help ensure that civilization would rebound if it collapsed; to prevent the end of moral progress; and to prepare for a world where the smartest people may be digital, not human.
Imagine every human standing in a long succession: from the first ever Homo sapiens emerging from the Great Rift Valley, to the last person in this continuous history ever to live. Now imagine compressing all these human lives into a single one. We can ask: where in that life do we stand? We can’t know for sure. But here’s a clue: suppose humanity lasted only a tenth as long as the typical mammalian species, and world population fell to a tenth of its current size. Even then, more than 99% of this life would lie in the future. Scaled down to a single typical life, humanity today would be just 6 months old.
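As a rough sanity check on those figures (my own arithmetic; the roughly 80-year lifespan is my assumption, not a figure from the book): being 6 months into an 80-year life means

$$\frac{0.5 \text{ years}}{80 \text{ years}} \approx 0.6\%$$

of the life has elapsed, so well over 99% of it still lies ahead, matching the claim above.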
But humanity is no typical mammal. We might well survive even longer than that; for hundreds of millions of years until the Earth is no longer habitable, or far beyond. If that’s the case, then humanity has just been born; just seeing light for the first time.
Knowing how big the future could be, are there ways we can help make sure it goes well? And if so, should we care? These are the central questions of longtermism. This book represents over a decade’s worth of full-time work aimed at answering them.
The book comes up with some striking answers. First, it argues that we are living at a moment of remarkable importance. One major line of argument notes that present-day rates of change and growth are unprecedentedly high. From 10,000 BC onwards, it took millennia for the world economy to double in size; the most recent doubling took just 19 years. But we know this rate of growth cannot continue indefinitely. That means that either humanity must soon begin to stagnate, or else begin its biggest growth spurt ever. Moreover, some of the causes and consequences of this growth, such as fossil-fuel power, nuclear weapons, man-made pathogens, and advanced artificial intelligence, have the power to alter the course of the future, depending on how they’re built and managed.
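To see why that rate of growth can’t continue indefinitely, here is a back-of-the-envelope version of the book’s argument (the round numbers are mine): a 19-year doubling time, sustained for another 10,000 years, would multiply the world economy by

$$2^{10{,}000/19} \approx 2^{526} \approx 10^{158},$$

yet the observable universe contains only around $10^{80}$ atoms. Output would have to exceed one present-day world economy per available atom, many times over, which is physically absurd.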
A second reason for thinking we live at an unusually influential time comes from history: from case studies of how unassuming figures, finding themselves in moments of upheaval and plasticity, shaped the values that guide the future. Nothing illustrates this better than the story of abolitionism. It’s natural to think that slavery was bound to end in the 19th century because of economic forces. But the records show that this isn’t so clear. In fact, were it not for the dedication and foresight of a small group of moral radicals, slavery might have remained ubiquitous to the present day.
Because values which now seem utterly unconscionable once seemed natural to our ancestors, we should expect that most of us are at best dimly aware of the values we hold that will shock our descendants. If people today can shape the values that last into the future, then we should draw inspiration from those early abolitionists and push for progress on today’s ethical frontiers.
History also teaches us how, once values are chosen, they can become locked-in for vast stretches of time. For instance, we can learn from the rise of Confucianism in the Chinese Han dynasty. At first, Confucianism was a relatively obscure school of thought. But once it became so unexpectedly influential, it remained so for over a thousand years.
But if value lock-in occurs this century, then it could last longer than ever. In fact, this book argues that some set of values might soon come to determine the course of history for many thousands of years. The reason is artificial general intelligence: AI that is capable of learning as wide an array of tasks as human beings can, and which can perform them to at least the same level as human beings. We shouldn’t write this off as a fantasy. In fact, the evidence we have makes the prospect of AGI this century impossible to ignore. And, the argument goes, advanced AI could enable particular values to become locked-in. If that set of values doesn’t benefit humanity, then we will have lost the vast potential ahead of us.
Another way to lose out on our potential is for humanity to go prematurely extinct. In 1994, comet Shoemaker-Levy 9 slammed into the side of Jupiter with the force of 6 million megatons of TNT, equivalent to 600 times the world’s nuclear arsenal. Threats from asteroids to life on Earth no longer seemed so hypothetical. So, in 1998, Congress gave NASA the funding to track down more than 90% of all asteroids and comets larger than 1 kilometre within a decade. The effort was called Spaceguard, and it was an overwhelming success. Shoemaker-Levy 9 taught us that threats to humanity’s survival are real, but Spaceguard taught us that it’s possible to protect against the causes of our own extinction.
Unfortunately, asteroids do not appear to pose the largest threat of human extinction. Much more concerning is the possibility of artificial pandemics: diseases that we ourselves will design, using the tools of biotechnology. Biotechnology has recently seen breathtaking progress, and the tools of synthetic biology will soon become accessible to anyone in the world. We are fortunate that the fissile material required for nuclear weapons is difficult to manufacture, and relatively easy to track. Not so for artificial pathogens. While the tools of artificial pandemics remain relatively hard to access, we’ve already seen an embarrassing number of lab leaks and accidents. If we are going to avoid disaster in the future, we need to be far more careful.
But human extinction isn’t the only way we could throw away our entire potential: civilization might instead collapse irrecoverably. To know how likely permanent collapse is, we need to know how fragile civilization is, and its chances of recovery after a collapse. This book finds that humanity is capable of remarkable resilience. Consider the atomic bombing of the Japanese city of Hiroshima in 1945. The immediate destruction was enormous: 90% of the city’s buildings were at least partially incinerated or reduced to rubble. But in spite of the devastation, power was restored to Hiroshima’s rail station and port within a day, and to all homes within two months. The Ujina railway line was running the day after the attack, streetcars within three days, and water pumps within four. Today Hiroshima is a thriving city once again.
However, civilizational collapse today would differ from those of the past in one crucial respect: we have used up almost all of the most readily accessible fossil fuels at our disposal. Historically, the use of fossil fuels has been almost an iron law of industrialisation. The depletion of fossil fuels might therefore hobble our attempts to recover from collapse. That gives us an underappreciated reason for keeping coal in the ground: to give our descendants the best shot at recovering from collapse.
However, more likely than collapse is the prospect of stagnation. History shows a long list of great flowerings of progress: the explosion of knowledge in the Islamic Golden Age centred in Baghdad, the engineering breakthroughs of the Chinese Song dynasty, or the birth of Western philosophy in ancient Greece. But all these periods were followed by sustained slowdown, even decline. As a global civilization, are we heading toward a similar fate? It’s hard to rule that possibility out. For instance, economic data show that ideas are becoming harder to find, and demographic evidence suggests a sharp decline in fertility rates in many parts of the world.
Why would stagnation matter? Because of technologies like artificial pathogens and nuclear weapons, the next century could be the most dangerous period in humanity’s entire future. A period of global stagnation would mean getting stuck in this ‘time of perils’. The idea of sustainability is often associated with trying to slow down economic growth. But if a given level of technological advancement is itself unsustainable, then that is not an option: in that case, it may be slowdowns in growth that are unsustainable. Our predicament could be like that of a climber stranded on a cliff-face with the weather turning, just one big push away from the summit. Staying still is a bad idea: we might run out of energy and fall. The only real option is to press on to the summit.
But what waits at that summit? And what’s so important about reaching it? Many people suspect that human extinction wouldn’t, in itself, be a bad thing. But some philosophical arguments suggest that we should care, morally, about enabling many more people to flourish far into the future. On these arguments, failing to achieve a bright future would not be a matter of indifference, but a great tragedy.
And although it sounds like science fiction, that future could be astronomical in scale. Earth-based civilization could last for hundreds of millions of years, but the stars will still be shining in trillions of years’ time, and a civilization spread out across many solar systems could last at least that long. And our galaxy is just one of roughly twenty billion galaxies we could one day reach. Just as we owe our lives to the early humans who ventured beyond their homes, it could be of enormous importance to keep this future open, and one day achieve it.
The long-run future could be huge in scope, but could it actually be good? The present state of the world isn’t encouraging: most people still live on less than $7 per day, millions die every year from easily preventable diseases, and millions more are oppressed and abused. Plus, nearly 80 billion vertebrate land animals are killed for food every year, living lives of fear and suffering. We should clearly not be content with the world as it is. But MacAskill does not argue that we should fight to spread this world, with all its ills, far into the future. Instead, the hope is that we can, and will, do far better. One reason to expect a better future is the simple observation that almost everyone wants to live in a better world, and as our technological capacity continues to progress, that world comes ever more within our reach. New technologies bring new powers to right some of the past’s wrongs: to continue raising living standards for everyone, to replace the horrors of industrial animal farming with clean meat, and far beyond.
There’s still so much we don’t know about improving the long-term future. It is as if we’re setting off on a long expedition, without a map, and trying to peer through a thick fog. But there are rules of thumb we can follow: take the actions we’re confident are good; keep our long-term options open; and try to learn more about the considerations most crucial to our decisions. Moreover, if you want to make a difference as an individual, the most ethically important decision you will ever make is your choice of career. If you want to help positively influence the long-term future, look for important, solvable, and neglected problems to work on.
But can you really contribute to making a difference? In short: yes. Abolitionism, feminism, and environmentalism were all “merely” the aggregate of individual actions. And because so few people are yet working on projects to improve the long-run future, you shouldn’t assume that other people already have it covered.
If this book is right, then we face a big responsibility:
Relative to everyone who could come after us, we are a tiny minority. Yet we hold the entire future in our hands.
That was a short summary of the entire book. You could stop reading here, or read the full post for summaries of each of the book’s chapters. I'm also open to copying all the chapter summaries into this Forum post, if people would find that useful.
slg @ 2023-02-12T16:43 (+8)
Thanks for writing this up. I just wanted to note that the OWID graph that appears while hovering over a hyperlink is neat! @JP Addison or whoever created that, cool work.
Joyce Alvino @ 2023-02-14T13:41 (+5)
This was really helpful. I've been hoping to read the book as soon as I can lay my hands on a copy. Your article was very helpful, but now I want, more than ever, to read the full book lol
AndreFerretti @ 2023-02-14T08:23 (+3)
Very useful! Instead of re-reading the longer explanation of value lock-in from the book, I found this brief explanation here, and it was just what I needed :)