Predicting what future people value: A terse introduction to Axiological Futurism

By Jim Buhler @ 2023-03-24T19:15 (+62)

Why this is worth researching

Humanity might develop artificial general intelligence (AGI)[1], colonize space, and create astronomical amounts of things in the future (Bostrom 2003; MacAskill 2022; Althaus and Gloor 2016). But what things? How (dis)valuable? And how does this compare with things grabby aliens would eventually create if they colonize our corner of the universe? What does this imply for our work aimed at impacting the long-term future?

While this depends on many factors, a crucial one will likely be the values of our successors.

Here’s a position that might tempt us while considering whether it is worth researching this topic:

Our descendants are unlikely to have values that are both different from ours in a very significant way and predictable. Either they have values similar to ours or they have values we can’t predict. Therefore, trying to predict their values is a waste of time and resources.

While I see how this can seem compelling, I think this is very ill-informed. 

First, predicting the values of our successors – what John Danaher (2021) calls axiological futurism – in worlds where these are meaningfully different from ours doesn’t seem intractable at all. Significant progress has already been made in this research area and there seems to be room for much more (see the next section and the Appendix).

Second, a scenario where the values of our descendants don’t significantly differ from ours appears quite unlikely to me.[2] We should watch out for things like the End of History illusion here. Values seem to evolve notably throughout history, and there is no reason to assume we are so special that we should drop that prior.

Besides being tractable, axiological futurism seems to me uncommonly important, given how instrumental it is to answering the crucial questions mentioned earlier. It therefore also seems unwarrantedly neglected as of today.

How to research this

Here are examples of broad questions that could be part of a research agenda on this topic:

John Danaher (2021) gives examples of methodologies that could be used to answer these questions.

Also, my Appendix references examples and other relevant work, including subsequent posts in this sequence.

Acknowledgment

Thanks to Anders Sandberg for pointing me to the work of John Danaher (2021) and for our insightful discussion on this topic. Thanks to Elias Schmied for other recommendations. Thanks also to M. Victoria Calabrese for her stylistic suggestions. My work on this sequence so far has been funded by Existential Risk Alliance.

All assumptions/claims/omissions are my own. 

Appendix: Relevant work

(This list is not exhaustive.[3] It is roughly ranked in decreasing order of relevance.)

  1. ^

    Or something roughly as transformative.

  2. ^

    A sudden value lock-in with an AGI developed and deployed in the next years/decades is probably the most credible possibility. (See Finnveden et al. 2022.)

  3. ^

    This is more because of my limited knowledge than due to an intent to keep this list short, so please send me other potentially relevant resources!


No drama @ 2023-03-24T23:26 (+11)

First, predicting the values of our successors – what John Danaher (2021) calls axiological futurism – in worlds where these are meaningfully different from ours doesn’t seem intractable at all. Significant progress has already been made in this research area and there seems to be room for much more (see the next section and the Appendix).

Could you point more specifically to what progress you think has been made? As this research area seems to have only existed since 2021, we can't have yet made successful predictions about future values, so I'm curious what has been achieved.

Jim Buhler @ 2023-03-25T09:13 (+6)

Yeah, so Danaher (2021) coined the term axiological futurism, but research on this topic existed long before that. For instance, I find those two pieces particularly insightful:

They explore how compassionate values might be selected against because of evolutionary pressures, and be replaced by values more competitive for, e.g., space colonization races. In The Age of Em, Robin Hanson forecasts what would happen if whole brain emulation comes before de novo AGI, and arrives at similar conclusions.

I don't think we can say they made "successful predictions" and settled the debate, but it seems like they came up with quite important considerations.

I intend to elaborate more on this kind of work in future posts within this sequence. :)

Sarah Weiler @ 2023-03-25T05:45 (+5)

Our descendants are unlikely to have values that are both different from ours in a very significant way and predictable. Either they have values similar to ours or they have values we can’t predict. Therefore, trying to predict their values is a waste of time and resources.

I'm strongly drawn to that response. I remain so after reading this initial post, but am glad that you, by writing this sequence, are offering the opportunity for someone like me to engage with the arguments/ideas a bit more! Looking forward to upcoming installments!

Wrote this on my phone and wasn't offered the option to format the paragraph as a quote (and I don't know what the command is); might come back to edit and fix it later

Lorenzo Buonanno @ 2023-03-25T09:27 (+1)

Wrote this on my phone and wasn't offered the option to format the paragraph as a quote (and I don't know what the command is); might come back to edit and fix it later

You can try "> paragraph"

Jim Buhler @ 2023-03-25T09:21 (+1)

Thanks Sarah, that's motivating! 

JP Addison @ 2023-03-30T01:11 (+2)

I love your writing style here, and am very excited for future posts in this sequence.

My two cents: I would make it more clear that this is the start of a sequence, so that readers can more easily figure out why there's no linked paper.

Jim Buhler @ 2023-03-30T13:20 (+1)

Thanks a lot :)

Oscar Delaney @ 2023-03-25T02:45 (+1)

I see some parallel between this project of predicting future (hopefully wiser and better-informed) values for moral antirealists and just doing moral philosophy to work out facts of the matter in ethics for moral realists. Both projects seem pretty hard. I expectantly await future posts!

Jim Buhler @ 2023-03-25T09:39 (+7)

Thanks Oscar!

predicting future (hopefully wiser and better-informed) values for moral antirealists

Any reason to believe moral realists would be less interested in this empirical work? You seem to assume the goal is to update our values based on those of future people. While this can be a motivation (it is among those of Danaher 2021), we might also worry -- independently of whether we are moral realists or antirealists -- that the expected future evolution of values doesn't point towards something wiser and better-informed (since that's not what evolution is "optimizing" for; relevant examples in this comment), and want to change this trajectory.

Anticipating what could happen seems instrumentally useful for anyone who has long-term goals, no matter their take on meta-ethics, right?

Oscar Delaney @ 2023-03-25T10:57 (+4)

Ah OK, yes, that seems right. I think the main context in which I have previously considered the values of future people is trying to frontrun moral progress and get closer to the truth (if it exists) sooner than others, so that is where my mind most naturally went. But yes, if, for instance, we were more in a Moloch-style world where value was slowly disappearing in favour of ruthless efficiency, then that is indeed good to know before it has happened, so we can try to stop it.