Foresight Institute launches two possible scenarios for AI futures

By elteerkers @ 2025-08-19T14:29 (+33)

With contributions from Vitalik Buterin, Anthony Aguirre, Allison Duettmann, Deger Turan, Emilia Javorsky, and more

What kinds of futures are possible if we steer AI in different directions, and what would it actually take to get there?

Today, Foresight Institute’s Existential Hope program is launching AI Pathways: two in-depth scenario reports, the result of months of work, designed to open up the meme space of what different AI futures could look like. Rather than prescribing a single preferred outcome, the reports explore two plausible futures, shaped by the choices we make and the systems we choose to build.

The scenarios

We developed two in-depth trajectories, each mapped out with timelines, enablers, tensions, and trade-offs. They were chosen to explore two futures that are often mentioned hypothetically, but rarely visualized in practical detail:

The Tool AI Pathway
A future shaped by powerful but controllable AI systems with limited agency. This scenario explores the idea that many benefits often associated with AGI could instead be achieved through advanced, tool-like systems. It asks: what if we focus on scaling such systems, designed to assist rather than act autonomously, in ways that are both safe and effective?

The d/acc Pathway
A future shaped by decentralized, democratic, and defensive acceleration, where coordination technologies drive progress across science, governance, and infrastructure. This scenario builds on growing interest in bottom-up and resilience-focused approaches to technological acceleration. While the concept is gaining traction, it has often remained abstract, especially given its intentionally plural nature. Here, we aim to make it concrete: what might d/acc look like in practice?

You can read both scenarios here: https://ai-pathways.existentialhope.com/ 

Why we’re doing this

Much of today’s discussion around AI futures tends to focus on a few high-profile trajectories, often involving AGI, short timelines, or centralized control. But those aren’t the only possibilities.

We chose these two scenarios because they represent directions that are often mentioned but rarely explored in detail.

Our hope is that by making these futures more concrete, we can help broaden the range of paths being considered, and support deeper reflection on which ones might be worth pursuing.

Metaculus integration

To invite deeper discussion around the scenarios, we’ve partnered with Metaculus to launch a set of forecasting questions based on key milestones in each future. Alongside this, we’re running a $5,000 Commenting Prize on Metaculus.

The prize will go to the top 8 contributors based on the quality of their comments on the AI Pathways questions.

How they were created

Each scenario is designed to be plausible given specific conditions. The goal is to make these futures more tangible and discussable, while leaving room for critique and iteration.

Both reports were written by Linda Petrini and Beatrice Erkers, and developed through expert interviews and multiple rounds of feedback on the drafts. The scenarios reflect a synthesis of many perspectives, and they shouldn’t be taken as endorsements or official positions of any individual listed below.

Contributors (interview and feedback participants)

d/acc Pathway:
Vitalik Buterin (Ethereum), Glen Weyl (Microsoft Research, RadicalXchange), Kevin Owocki (Gitcoin), Andrew Trask (OpenMined, DeepMind), Emilia Javorsky (Future of Life Institute), Deger Turan (Metaculus), Allison Duettmann (Foresight Institute), Soham Sankaran (PopVax), Christine Peterson (Foresight Institute), Marcin Jakubowski (Open Source Ecology), Naomi Brockwell (Ludlow Institute), Molly Mackinlay (Protocol Labs), Lou de Kerhuelvez (Nodes).

Tool AI Pathway:
Adam Marblestone (Convergent Research), Anton Korinek (University of Virginia), Anthony Aguirre (Metaculus, Future of Life Institute), Saffron Huang (Anthropic), Joel Leibo (DeepMind), Rif A. Saurous (Google), Cecilia Tilli (Cooperative AI Foundation), Ben Reinhardt (Speculative Technologies), Bradley Love (Los Alamos National Laboratory), Konrad Kording (University of Pennsylvania), Jeremy Barton (Nano Dynamics Institute), Owen Cotton-Barratt (Researcher), Kristian Rönn (Lucid Computing).

We’re deeply grateful to everyone who contributed their time and insights to this experiment.

How you can engage

We’re also publishing follow-up content over the coming weeks: podcast episodes, events, and more scenario materials. We’d love to collaborate or cross-post where useful.


Denkenberger🔸 @ 2025-08-20T07:54 (+4)

I think it makes a lot of sense to examine alternate scenarios. Commenting on tool AI:

Nearly every expert interviewed for this project preferred this kind of "Tool AI" future, at least for the near term

This is very interesting, because banning AI agents had little support in my LessWrong survey, and there was only one vote for it out of 39 in the EA Forum survey I ran. To be fair, those options implied banning agents forever, so a temporary ban might get more support.

Capital dividend funds: National and regional funds holding equity in AI infrastructure, robotics fleets, and automated production facilities, distributing dividends to citizens as universal basic capital.

I think this is very important because people often point out that humans will not have influence/income if they don't have a labor wage, but they could still have influence/income through ownership of capital.

You mention that poverty would still be a problem. However, I think that if AI starts to automate knowledge work, the increased demand for physical jobs should lift most people out of poverty (at least until robots fill nearly all of those jobs).
 

elteerkers @ 2025-08-20T20:29 (+1)

Yeah, I think my sense was definitely that people saw Tool AI as a great solution, but mostly an interim one. If we had phrased it as being “locked in forever,” the reactions might have looked very different. I've interpreted it more as people seeing it as preserving option value: we can still develop AGI later, but ideally after we’ve managed to integrate Tool AI into society and set up systems to handle AGI better than if it came now, when we're quite poorly prepared.

Really appreciate your points on capital dividend funds and the distributional side as well. If you’re up for it, we'd love it if you shared these thoughts on the Metaculus tournament too, where we're running a comment prize exactly to surface perspectives like this: https://www.metaculus.com/tournament/foresight-ai-pathways/ :)