Review: What We Owe The Future

By Kelsey Piper @ 2022-11-21T21:41 (+165)

This is a linkpost to https://asteriskmag.com/issues/1/review-what-we-owe-the-future

For the inaugural edition of Asterisk, I wrote about What We Owe The Future. Some highlights:

What is the longtermist worldview? First — that humanity’s potential future is vast beyond comprehension, that trillions of lives may lie ahead of us, and that we should try to secure and shape that future if possible.

Here there’s little disagreement among effective altruists. The catch is the qualifier: “if possible.” When I talk to people working on cash transfers or clean water or accelerating vaccine timelines, their reason for prioritizing those projects over long-term-future ones is approximately never “because future people aren’t of moral importance”; it’s usually “because I don’t think we can predictably affect the lives of future people in the desired direction.”

As it happens, I think we can — but not through the pathways outlined in What We Owe the Future.

 

The stakes are as high as MacAskill says — but when you start trying to figure out what to do about it, you end up face-to-face with problems that are deeply unclear and solutions that are deeply technical.

...

I think we’re in a dangerous world, one with perils ahead for which we’re not at all prepared, one where we’re likely to make an irrecoverable mistake and all die. Most of the obligation I feel toward the future is an obligation to not screw up so badly that it never exists. Most longtermists are scared, and the absence of that sentiment from What We Owe the Future feels glaring.

If we grant MacAskill’s premise that values change matters, though, the value I would want to impart is this one: an appetite for these details, however tedious they may seem.


Fermi–Dirac Distribution @ 2022-11-21T23:48 (+52)

Here is my attempt at summarizing the main points:

In What We Owe the Future, MacAskill agrees with other longtermists about the moral importance of the long-term future, but disagrees with most of them about how best to affect it. Relative to other longtermists, MacAskill thinks that affecting societal values is more important and preventing AI-triggered extinction is less important. Also, MacAskill’s recommendations for how to influence the long-term future seem to have been researched less thoroughly than other parts of the book.

Corentin Biteau @ 2022-11-22T10:43 (+19)

Thank you! This was a very good post that made many important points (and was well written too).

I really like this section: 

The first and most fundamental lesson of effective altruism is that charity is hard. Clever plans conceived by brilliant researchers often don’t actually improve the world. Well-tested programs with large effect sizes in small, randomized, controlled trials often don’t work at scale, or even in the next village over.

These questions are not unanswerable. Through the heroic work of teams of researchers, many of them have been answered — not with perfect accuracy, but with enough confidence to direct further research and justify further investment. The point isn’t that everything is unknowable; the point is just that knowing things is hard.

This is a dose of humility that felt deeply needed, especially after the FTX debacle, where it's pretty clear that we are bad at predicting the near-term future (not just EA, just about everyone). So predicting the long-term future accurately, and what might affect it, sounds seriously intractable.

This tweet summarized that for me:

FTX would be an extremely high profile example of "EA cannot manage tail risks despite longtermism revolving around managing tail risks"

So thanks for the reminder that we should keep doing things that are more specific than "changing values".

 

Another worry I have is that longtermism (in its current state) assumes that our current industrial society can last for millennia, despite the fact that it relies heavily on finite materials and energy sources. I wrote a post on energy depletion and limits to growth, and I fear longtermists do not take that into account.

WilliamKiely @ 2022-11-23T21:39 (+7)

I think we’re in a dangerous world, one with perils ahead for which we’re not at all prepared, one where we’re likely to make an irrecoverable mistake and all die.

@Kelsey, by "likely" here do you mean >50%?

And specifically, are you >50% on extinction from AI in the next 100 years? (Even though you didn't say AI or the next 100 years in that sentence, I assumed that's what you had in mind based on earlier context.)

Also totally fine if you don't want to share your exact credence publicly. (I haven't reflected on whether that seems like a good or bad thing for you to do.)