What do we want the world to look like in 10 years?
By Owen Cotton-Barratt @ 2022-04-20T14:34 (+126)
We have a lot of uncertainty over the world we'll see in a decade. There are many different dimensions it might vary on. Some of these are likely much more important than others for ensuring a good future. Which ones?
I'm keen to see more discussion of this in longtermist EA circles. I think we have a lot of analysis of very long-term outcomes we're aiming for ("ensure AI is aligned"; "embark on a long reflection"), and a lot of discussion of immediate plans we're considering, and relatively little of what's good on this intermediate timescale. But I think it's really important to understand for informing more immediate plans. It isn't enough just to identify a handful of top-priority variables (like "quality-adjusted amount of AI alignment research"): comparative advantage varies between people, and sometimes there are high-leverage opportunities available for achieving various other ends, and it's helpful to understand how good those are.
I've been getting mileage out of this prompt as an exercise for groups over the last year.[1] I think people don't need to have a full strategic understanding to fruitfully engage with the question, but that answers aren't simple even for people who've thought about it a lot. Discussion of detailed cases often seems productive (at least I find it productive to think about individual cases, and I've liked conversations I've observed others having).
Examples of desirable states[2]
- Differential technological development is a major principle used in the allocation of public research spending
- We have ways of monitoring and responding to early-stage potential pandemic pathogens that make civilization-ending pandemics almost impossible
- There is a variety of deeply inspiring art, and a sense of hope for the future among society broadly
- There is a textbook on the AI alignment problem which crisply sets out the problem in unambiguous technical terms, and is an easy on-ramp for strong technical researchers, while capturing the heart of what's important and difficult about it
- Society has found healthier models of relating to social media, in which it is less addictive and doesn't amplify crazy but clickbait-y views
- There are more robust fact-checking institutions, and better language for discussing the provenance of beliefs in use among cultural and intellectual elites
- We have better models for avoiding rent-seeking behaviour
- Tensions between great powers are low
These examples would be even better if they were more concrete/precise (such that it would be unambiguous, on waking up in 10 years, whether they had been achieved), but often the slightly fuzzy forms will be more achievable as a starting point.
This is a short list of examples; in practice when I've run this exercise for long enough people have had hundreds of ideas. (Of course some of the ideas are much better than others.)
Consider spending time on this question
I wanted to share the prompt as an invitation to others to spend time on it, either by themselves or as a group exercise. I've liked coming back to this multiple times, and I expect I'll continue doing that.
This question is close to cause prioritization. I think of it as a complement rather than a replacement. Reasons to include this in the mix of things to think about:
- The cause prioritization frame nudges towards identifying a single best thing and stopping, but I think it's often helpful in practice to have thoughts on a suite of different things
  - e.g. for noticing particularly high-leverage opportunities
- Immediate plans for making progress on large causes must factor through effects in the intermediate future; it can be helpful to look directly at what we're aiming for on those timescales
- It naturally encourages going concrete
  - One can do this in cause prioritization
    - e.g. replace the cause of "align AI" with the sub-cause of "ensure AI safety is taken seriously as an issue among researchers at major AI labs"
    - In practice I think causes are often left relatively broad and abstract rather than having lots of arguments about the relative priority of sub-causes
  - Concreteness of targets can help to generate ideas for how to achieve those targets
Meta: I suggest using comments on this post to discuss the meta-level question of whether this is a helpful question, how to run exercises on it, etc. Object-level discussion of which particular things make good goals on this timescale could go in separate posts: e.g. perhaps we might have posts that consider particular 10-year goals, then analyse how good they would be to achieve and what it would take to achieve them.
- ^
Originally in a short series of workshops I ran with Becca Kagan and Damon Binder. We had some more complex writeups that we may get around to publishing some day, but after noticing that I'd become stuck on how to finish polishing those, I thought I should share this simple central component.
- ^
These examples are excerpted from a longer list of answers to a brainstorm activity at a recent workshop I was involved in running.
Stefan_Schubert @ 2022-04-20T21:45 (+20)
> I think we have a lot of analysis of very long-term outcomes we're aiming for ("ensure AI is aligned"; "embark on a long reflection"), and a lot of discussion of immediate plans we're considering, and relatively little of what's good on this intermediate timescale. But I think it's really important to understand for informing more immediate plans.
This reminded me of a passage from Bostrom's The Future of Humanity:
> Predictability does not necessarily fall off with temporal distance. It may be highly unpredictable where a traveler will be one hour after the start of her journey, yet predictable that after five hours she will be at her destination. The very long-term future of humanity may be relatively easy to predict, being a matter amenable to study by the natural sciences, particularly cosmology (physical eschatology).
It's possible that immediate and very long-term outcomes are in some ways easier to predict, and that that's part of the reason we have more analysis of them than of intermediate scenarios.
However, I still agree that we should do more analysis of those intermediate scenarios.
rodeo_flagellum @ 2022-04-21T14:21 (+3)
Thank you for writing this post and for including those examples.
To address the first part of your "Meta" comment at the bottom of the post: were I to do this exercise with my peers, it would not cost much time or energy, but it could generate ideas for desirable states of humanity's future that temporarily redirect some of my or my peers' attention to a different cause. That reallocation might take the form of some additional querying on the Internet that would not otherwise have occurred, or of a full week's or month's work being redirected towards learning about some new topic, perhaps leading to some write-up of the findings. So the exercise you've described, or some similar version of it, seems valuable enough to be worth trying, and perhaps even experimenting with.
In terms of exercise formats, if I were to implement "sessions to generate desirable states of humanity for the 5yr, 10yr, 25yr, etc... future", I would probably get together with my peers each month, have everyone generate ~10 ideas, pool the ideas in a Google Doc, and then together prune duplicates, combine similar ideas, come up with ways to make the ideas more concrete, and resolve any conflicting ideas. If I couldn't get my peers together on a monthly basis, I would probably do something similar on my own, and then perhaps post the ideas in a shortform.
In my own work, I already do this to a degree: I keep a list of things to write or learn about, and attach a subjective (x% | y% | z%) rating to each project idea, where x is how motivated I am to do it, y is how valuable I think work on the topic is, and z is how difficult it would be for me to work on it. To supplement exercises in generating descriptions of desirable states for humanity in the coming years, it would probably be easy enough to add a quick subjective estimate of importance to each idea as it's generated. Another mechanism for generating desirable states could be to look at macroscopic issues for humanity (off the top of my head, not ordered by importance: aging, war between humans, aligning AI, human coordination failures, injury and disease, wellbeing, natural risks to Earth including anthropogenic climate change, resource distribution, energy and resource supplies, biorisks) and then come up with ideas for "what civilization would look like if this issue were addressed", or something similar.
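For concreteness, here's a minimal sketch (in Python) of one way such (x% | y% | z%) ratings could be tracked and aggregated. The class, the example entries, and the multiplicative scoring rule are all hypothetical illustrations, not something specified in the comment above:

```python
from dataclasses import dataclass

@dataclass
class ProjectIdea:
    """A candidate topic with the subjective (x% | y% | z%) ratings described above."""
    name: str
    motivation: float  # x: how motivated I am to work on it, in [0, 1]
    value: float       # y: how valuable I think work on the topic is, in [0, 1]
    difficulty: float  # z: how difficult it would be for me, in [0, 1]

    def priority(self) -> float:
        # One arbitrary aggregation: favour ideas that are motivating,
        # valuable, and easy. Any weighting here is a personal choice.
        return self.motivation * self.value * (1.0 - self.difficulty)

# Hypothetical example entries, not taken from the original comment.
ideas = [
    ProjectIdea("survey of fact-checking institutions", 0.6, 0.7, 0.3),
    ProjectIdea("overview of pandemic monitoring", 0.8, 0.9, 0.6),
]

# Rank ideas by the aggregate score, highest first.
for idea in sorted(ideas, key=ProjectIdea.priority, reverse=True):
    print(f"{idea.name}: priority {idea.priority():.2f}")
```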
Lizka @ 2022-06-08T11:16 (+2)
The finalists from the Future of Life Institute's Worldbuilding Contest have produced some interesting additions to this topic.