Odds of recovering values after collapse?

By Will Aldred @ 2022-07-24T18:20 (+65)

Question

Let's say we roll the dice 100 times with respect to values. In other words, let's say civilization collapses in 100 worlds, each very similar to our current world, and let's say full tech recovery follows collapse in all 100 of these worlds.

In how many of these 100 worlds do you think that, relative to pre-collapse humanity, the post-recovery version of humanity has:

  1. better values?
  2. roughly similar values?
  3. worse values?

I encourage the reader to try answering the question before looking at the comments section, so as to not become anchored.

Context

Components of recovery

It seems to me that there are two broad components to recovery following civilizational collapse:

  1. P(Tech Recovery|Collapse)
    • i.e., probability of tech recovery given collapse
    • where I define "tech recovery" as scientific, technological, and economic recovery
  2. P(Values Recovery|Tech Recovery)
    • i.e., probability of values recovery given tech recovery
    • where I define "values recovery" as recovery of political systems and values systems
      • (where "good" on the values axis would be things like democracy, individualism, equality, and secularism, and "bad" would be things like totalitarianism)

It also seems to me that P(Tech Recovery|Collapse) ≈ 1, which is why the question I've asked is essentially "P(Values Recovery|Tech Recovery) = ?", just in a little more detail.
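
To make the decomposition concrete, here's a minimal sketch of how the two components chain together, using placeholder numbers (not estimates from this post):

```python
# Minimal sketch of the decomposition, with placeholder probabilities (not estimates from this post).
# By the chain rule:
#   P(Values Recovery | Collapse)
#     = P(Tech Recovery | Collapse) * P(Values Recovery | Tech Recovery, Collapse)
# so if P(Tech Recovery | Collapse) is close to 1, the headline question reduces to the second factor.

p_tech_given_collapse = 0.99   # assumption: tech recovery is near-certain, per the post
p_values_given_tech = 0.50     # placeholder: this is the quantity the question asks about

p_values_given_collapse = p_tech_given_collapse * p_values_given_tech
print(f"P(Values Recovery | Collapse) ~ {p_values_given_collapse:.2f}")
```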

Existing discussion

I ask this question about values recovery because there's less discussion of it than I would expect. Toby Ord, in The Precipice, mentions values only briefly, in his "Dystopian Scenarios" section:

A second kind of unrecoverable dystopia is a stable civilization that is desired by few (if any) people. [...] Well-known examples include market forces creating a race to the bottom, Malthusian population dynamics pushing down the average quality of life, or evolution optimizing us toward the spreading of our genes, regardless of the effects on what we value. These are all dynamics that push humanity toward a new equilibrium, where these forces are finally in balance. But there is no guarantee this equilibrium will be good. (p. 152)

[...]

The third possibility is the “desired dystopia.” [...] Some plausible examples include: [...] worlds that forever fail to recognize some key form of harm or injustice (and thus perpetuate it blindly), worlds that lock in a single fundamentalist religion, and worlds where we deliberately replace ourselves with something that we didn’t realize was much less valuable (such as machines incapable of feeling). (pp. 153-154)

Luisa Rodriguez, who has produced arguably the best work on civilizational collapse (see "What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?"), also touches on values only very briefly:

Values is the other one. Yeah. Making sure that if we do last for a really long time, we don’t do so with really horrible values or that we at least don’t miss out on some amazing ones. (Rodriguez, Wiblin & Harris, 2021, 2:55:00-2:55:10)

Nick Beckstead and Michael Aird come the closest, as far as I've seen, to pointing to the question of values recovery. Beckstead (2015):

Aird (2021):

(emphasis added)

Clarifications

Acknowledgements

This question was inspired by conversations with Haydn Belfield and Hannah Erlebach (though I'm not certain both would endorse the full version of my question).


Jack_S @ 2022-07-24T22:05 (+20)

Thanks for writing this up. This question came up in a Precipice reading group I was facilitating last year. We also used the idea that collapse was 're-rolling the dice' on values, and I think it's the right framing.

I recall that the 'better values' argument was:

The 'worse values' argument was:

We also discussed the argument that, if you're a longtermist who is very concerned about x-risk and you're confident (~70+%) that we would develop better values post-collapse, this may lead to the uncomfortable conclusion that collapse might be morally okay or desirable.

If I had to put a number on my estimates, I'd probably go for 55% better, 45% worse, with very high variation (hence the lack of a 'similar' option). 

Davidmanheim @ 2022-08-08T12:34 (+4)

"We should not assume that our current values are average, we should rather assume that we've been uncommonly lucky"

Why? That seems like a very weird claim to me - we've seen evolution of moral reasoning over time, so it seems weird to claim we wouldn't see similar evolution a second time.

Jack_S @ 2022-08-09T20:40 (+3)

The claim that we wouldn't see similar evolution of moral reasoning a second time doesn't seem weird to me at all. The claim that we should assume that we've been exceptionally / top-10% lucky might be a bit weird. Despite a few structural factors (more complex, more universal moral reasoning develops with economic complexity), I see loads of contingency and path dependence in the way that human moral reasoning has evolved. If we re-ran the last few millennia 1000 times, I'm pretty convinced that we'd see significant variation in norms and reasoning, including:

  1. Some worlds with very different moral foundations - think a more Confucian variety of philosophy emerging in classical Athens, rather than Socratic-Aristotelian philosophy. (The emergence of analytical philosophy in classical Athens seems like a very contingent event with far-reaching moral consequences.)
  2. Some worlds in which 'dark ages' involving decay/stagnation in moral reasoning persisted for longer or shorter periods, or where intellectual revolutions never happened, or happened earlier.
  3. Worlds where empires with very different moral foundations from the British/American would have dominated most of the world during the critical modernisation period.
  4. Worlds where seemingly small changes would have huge ethical implications - imagine the pork taboo persisting in Christianity, for example.

The argument that we've been exceptionally lucky is more difficult to examine over a longer timeline. We can imagine much better and much worse scenarios, and I can't think of a strong reason to assume either way. But over a shorter timeline we can make some meaningful claims about things that could have gone better or worse. It does feel like there are many ways that the last few hundred years could have led to much worse moral philosophies becoming more globally prominent - particularly if other empires (Qing, Spanish, Ottoman, Japanese, Soviet, Nazi) had become more dominant.

I'm fairly uncertain about this latter claim, so I'd like to hear from people with more expertise in world history / the history of moral thought to see if they agree with my intuitions about potential counterfactuals.

Davidmanheim @ 2022-08-10T06:57 (+5)

I agree that if we re-ran history, we'd see significant variations, but I don't think I have any reason to think our current trajectory is particularly better than others would be.

For example, empires with very different moral foundations from the British/American could easily have led to a more egalitarian view of economics, or a more holistic view of improving the world, far earlier. And some seemingly small changes, such as the persistence of pork taboos or the adoption of anti-beef and vegetarian lifestyles as a moral choice, don't seem to lead to worse outcomes.

But I agree that it's an interesting question for historians, and I'd love to see someone do a conference and anthology of papers on the topic.

Davidmanheim @ 2022-07-25T19:00 (+13)

If fewer than 99% of humans die, I suspect that most of modern human values would be preserved. Aside from temporary changes, I suspect values would stay similar and potentially continue evolving positively, albeit likely with a delay and at a slower pace - but there would be a damaging collapse of norms that might not be recoverable from.

Charles_Guthmann @ 2022-07-25T04:11 (+7)

I think there is a question that is basically a generalization of this one:

Will the mean values of grabby civilizations be better or worse than ours?

Linch @ 2022-07-24T19:46 (+7)

I have some thoughts on this but I think they aren't ready for prime-time yet. Happy to maybe do a call or something when both of us are free.

Zach Stein-Perlman @ 2022-07-24T19:43 (+7)

Will Aldred @ 2022-07-24T18:21 (+6)

My response to the question:

Zach Stein-Perlman @ 2022-07-24T19:57 (+6)

This is more pessimistic than I expected/believe. (I didn't post my own answer just because I think it depends a lot on what collapse looks like and I haven't thought much about that, but I'm pretty sure I'd be more optimistic if I thought about it for a few hours.) Why do you think we're likely to get worse values?

Charles_Guthmann @ 2022-07-25T04:06 (+4)

I like this question / think that questions that apply to most x-risks are generally good to think about. A few thoughts/questions:

I'm not sure this specific question is super well-defined.

 

Then this is a sort of nit-picky point, but how big a basket is 'similar'?

To take a toy example, let's say values are measured on a scale from 0 to 100, with 100 being perfect values. Let's further assume we are currently at 50. In that case, I'd assume it would make sense to set similar = (33, 67), so as to divide the groupings evenly. If, say, similar = (49, 51), then it seems like you shouldn't put much probability on similar.

But then if we are at 98/100, is similar (97, 99)? It's less clear how we should basket the groups.

Since you put similar at 20/100, I somewhat assumed you were giving similar a more or less even basket size relative to worse and better, but perhaps you put a lot of weight on the idea that we are in some sort of sapiens cultural equilibrium.
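
To make the basketing concern concrete, here's a minimal sketch under a purely illustrative assumption: that 'similar' is a band around our current position on a hypothetical 0-100 values scale, and that basket size is just the share of the scale each band covers:

```python
# Purely illustrative sketch of the basketing point on a hypothetical 0-100 values scale.
# "Basket size" here is just the share of the scale each band covers, which is itself an assumption.

def basket_shares(current: float, half_width: float, scale_max: float = 100.0) -> dict:
    """Split the scale into worse / similar / better baskets around `current`."""
    lo = max(0.0, current - half_width)
    hi = min(scale_max, current + half_width)
    return {
        "worse": lo / scale_max,                 # share of the scale below the 'similar' band
        "similar": (hi - lo) / scale_max,        # share inside the band
        "better": (scale_max - hi) / scale_max,  # share above the band
    }

# At 50 with a wide band, i.e. similar = (33, 67), the three baskets are roughly even:
print(basket_shares(current=50, half_width=17))
# But at 98, even a narrow band like similar = (97, 99) leaves almost no room above:
print(basket_shares(current=98, half_width=1))
```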

For what it's worth, if we sweep some of these concerns aside and assume similar has about as much value space as better and worse, my estimates would be as follows:

But I agree with Jack's sense that we should drop similar and just go for better and worse, in which case:

A cold take, but I truly feel like I have almost no idea at present. My intuition is that your forecast is too strong for the current level of evidence and research, but I have heard very smart people give almost exactly the same guess.

Will Aldred @ 2022-07-24T18:21 (+4)

On Loki's Wagers: for an amusing example, see Yann LeCun's objection to AGI.

Sharmake @ 2022-07-25T20:24 (+1)

My answers (from an EA perspective) are generally the following:

Same values: 30% chance

Worse values: 69% chance

Better values: 1% chance