[Linkpost] Should we make grand deals about post-AGI outcomes?
By Forethought, finm @ 2026-03-13T21:13 (+18)
This is a linkpost to https://www.forethought.org/research/should-we-lock-in-post-agi-agreements-under-uncertainty
A widely held view says we should avoid locking in consequential decisions before an intelligence explosion — we’ll understand more if we wait, and we’ll have time to reflect on our decisions.
But that view might be missing something: some mutually beneficial deals depend on uncertainty about the future. Once the uncertainty resolves, the window closes on potentially big ex ante gains. We make them early, or never.
The classic example is insurance: while your house hasn’t been struck by lightning, you and your insurer can improve each other’s prospects. But once your house gets struck by lightning, it’s too late to make a deal. You can think of this as a trade between possible outcomes, where the opportunity for trade depends on both outcomes being live possibilities.
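To make the ex ante gains concrete, here is a stylized calculation; the numbers and utility function are invented for this sketch, not taken from the post.

% Stylized insurance sketch (all numbers invented for illustration):
% homeowner has wealth 100, faces a loss of 50 with probability 0.01,
% and has risk-averse utility $u(x) = \sqrt{x}$; the insurer is risk-neutral.
\[
\mathbb{E}[u]_{\text{no deal}} = 0.99\,\sqrt{100} + 0.01\,\sqrt{50} \approx 9.971 .
\]
% The homeowner gains from any premium $p$ with $\sqrt{100 - p} \ge 9.971$,
% i.e. $p \le 0.585$; the risk-neutral insurer gains from any
% $p \ge 0.01 \times 50 = 0.5$. Every $p$ strictly between 0.5 and 0.585
% improves both parties' prospects ex ante; once the uncertainty resolves,
% this interval is empty and no such trade exists.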
I consider three kinds of agreement that fit this pattern, each hinging on a different kind of uncertainty about what comes after an intelligence explosion.
The first is uncertainty about the relative share of resources — who ends up on top without a deal. While major powers like the US and China remain uncertain about who might otherwise achieve a decisive strategic advantage, both should prefer to commit to sharing (some) future power or resources, over the straight gamble. Moreover, the expected surplus from a power-sharing deal shrinks over time, so in theory both sides should prefer to make a deal as soon as it’s possible.
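To see why the surplus shrinks, here is a toy bargaining model; the setup and utility function are mine, not the paper's.

% Toy model (invented for illustration): a prize normalized to 1; one power
% wins it outright with probability $q$, the other with $1 - q$; both have
% risk-averse utility $u(x) = \sqrt{x}$.
% The favourite accepts a guaranteed share $s$ iff $\sqrt{s} \ge q$, i.e.
% $s \ge q^2$; the underdog accepts $1 - s$ iff $1 - s \ge (1 - q)^2$.
\[
\text{width of mutually acceptable splits} = \bigl(1 - (1 - q)^2\bigr) - q^2 = 2q(1 - q),
\]
% which is largest at $q = 1/2$ (maximal uncertainty) and vanishes as
% $q \to 0$ or $q \to 1$: the room for a power-sharing deal closes exactly
% as it becomes clear who would win.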
The second is uncertainty about the overall ‘stakes’, like how resource-wealthy society becomes overall. Here, a less risk-averse party can effectively insure a more risk-averse one: taking on more variance in exchange for higher expected resources, and improving both their prospects. Or the stakes in question could be about something more specific, like how philanthropic actors today ‘mission hedge’ by holding positions in specific companies which pay off when their cause is most urgent.
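As a stylized illustration of this kind of risk transfer (numbers invented here, not from the post):

% Invented setup: total resources are 100 (boom) or 40 (bust), each with
% probability 1/2; by default each party receives half (50 or 20). Party A
% is risk-neutral, $u_A(x) = x$; party B is risk-averse, $u_B(x) = \sqrt{x}$.
\[
\text{CE}_B^{\text{default}} = \Bigl(\tfrac{1}{2}\sqrt{50} + \tfrac{1}{2}\sqrt{20}\Bigr)^{2} \approx 33.3,
\qquad
\mathbb{E}[\text{A's default share}] = 35 .
\]
% Deal: B takes a fixed claim $c$ in both worlds; A takes the remainder,
% with expected value $70 - c$. B gains iff $c \ge 33.3$; A gains iff
% $c \le 35$. Any $c \in [33.3, 35]$ improves both parties' prospects:
% B sheds variance, and A absorbs it in exchange for higher expected
% resources.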
The third kind of agreement involves theoretical and especially normative uncertainty. If one party cares much more about having resources in worlds where, say, a particular moral view turns out to be correct, they can trade for more influence in those worlds. Advanced AI could make such deals feasible by acting as a mutually trusted arbiter for questions that are otherwise hard to resolve.
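A minimal sketch of such a trade, with stakes and credences invented for illustration:

% Invented setup: two equiprobable worlds, M (a given moral view is correct)
% and not-M. Each party starts with 1 unit of resources in each world.
% Party A values a unit at 2 in M and at 1 in not-M; party B values a unit
% at 1 in both worlds.
% Trade: A gives B 1 unit in not-M in exchange for 0.9 units in M.
\[
\Delta U_A = \tfrac{1}{2}(2 \times 0.9) - \tfrac{1}{2}(1 \times 1) = 0.4 > 0,
\qquad
\Delta U_B = \tfrac{1}{2}(1 \times 1) - \tfrac{1}{2}(1 \times 0.9) = 0.05 > 0 .
\]
% Both gain ex ante; once it is settled whether M holds, one side would no
% longer agree, so the deal must be struck while both worlds are live.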
The basic case for enabling all these agreements is the same basic case for any voluntary commitment: all parties improve their prospects by their own lights, and nobody else is hurt. Moreover, agreements between major powers to share resources could make the future meaningfully more pluralistic and morally diverse, which seems better under moral uncertainty than a more unipolar future. And agreements between individuals could give more influence to those who staked their wealth today on future outcomes as a credible show of their beliefs or values, and were vindicated.
It looks like many of these deals won’t be possible by default. If future resources are distributed rather than auctioned, then most of our future wealth arrives as a windfall, but contracts over future income typically aren’t enforceable under common law. We might instead form agreements over future influence, but that too is legally murky. So some agreements would have to rely on private alternatives to legal contracting, through AI-enabled arbitration and enforcement. We might also consider encouraging commitments from private institutions to honour small-scale deals, or setting up infrastructure for trading on post-AGI outcomes. Zooming out to deals between major powers, we’ll need more developed diplomatic frameworks for resource-sharing treaties, likely involving AI-enabled monitoring and enforcement.
Again, each of these deals has to be made early, or never. And that also makes the downsides look fairly scary. Enabling early deals lets people commit to hugely consequential terms before they’re wise enough — especially in a world where you can’t recover wealth through labour income. So if we do proactively enable these agreements, I think we should add in some serious guardrails: requirements for demonstrated understanding, caps on the fraction of future resources that can be staked, and mechanisms for voiding deals that were clearly misconceived at the time.
The dawn of the intelligence explosion may be the last period of shared ignorance about some crucial and long-lasting outcomes. Deals struck under that ignorance tend to distribute resources in ways that reflect mutual benefit rather than bargaining power. Once the veil of ignorance lifts, that changes. The case for enabling at least some early deals — despite the received wisdom against “locking-in” the future where we can help it — is fairly compelling.
You can read the full paper here: Should We Lock in Post-AGI Agreements Under Uncertainty?
OscarD🔸 @ 2026-03-15T13:51 (+16)
I think the type of early deal that would be most valuable is where the US and China both agree to produce a joint 'consensus' ASI aligned to 'the good'. In more detail:
- The US and China, as you note, are unsure who will win, and would be better off making a deal that preserves some minimum amount of future influence for each. But I think I am more worried than you about the costs of continued multipolarity into space colonisation. You write “Even having two alternative systems might open up the possibility for comparison, healthy competition, and moral trade.” War, threats, and unhealthy competition (e.g., burning the cosmic commons) also seem like important possibilities here.
- Instead, I think a joint superintelligence that coordinates the use of our cosmic endowment would be better, with each of the US and China holding some amount of influence within the ASI's 'moral parliament'.
- Even just that would, I think, be preferable to dividing up the universe into two camps: it is easier to do moral trades within one agent acting under moral uncertainty than to coordinate between two agents.
- A better version, though, could involve the US and China agreeing on some core moral precepts, or just on a moral reflection process, and then jointly designing a moral curriculum for the proto-ASI including plenty of Western and Chinese texts, and letting the ASI do as it sees fit. Presumably both sides genuinely believe they are right, and that an appropriate moral training process for the AI will lead to liberalism / Socialism with Chinese characteristics. So this exploits the two sides having different credences (whereas, as you note, your proposed deals are possible even if both sides have the same credences), which creates a larger surplus for possible agreements; see the sketch after this list.
- Of course, agreeing to create a joint ASI could also have big nearer term benefits, e.g. avoiding racing and slowing down AI progress and investing more in safety.
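Here is that sketch, with credences and payoffs invented for illustration:

% Invented numbers: each side assigns probability 0.8 that a jointly
% designed curriculum yields (broadly) its own values, and assigns that
% outcome a payoff of 1 (and the other outcome 0).
\[
\underbrace{0.8}_{\text{US's expected payoff}} + \underbrace{0.8}_{\text{China's expected payoff}} = 1.6 > 1,
\]
% whereas any division of a single winner-take-all prize sums to exactly 1
% (e.g. a 50/50 race gives each side 0.5 in expectation). Both expectations
% cannot both be borne out, but ex ante each side prefers the joint
% curriculum (0.8) to the race (0.5); the extra 0.6 of subjective surplus
% exists only because the sides disagree.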
This proposal is clearly very far outside the Overton window currently, but I don't think it is much worse on feasibility than your proposed great-power resource-sharing deals. It also solves the enforcement challenge, which is convenient, since we might have needed to create such a consensus AI anyway to enforce a different sort of deal.
I am tentatively excited about this proposal, but I expect there isn't much to do to further it until the relevant parties are taking things more seriously.
Will Aldred @ 2026-03-15T16:26 (+4)
Nod. Plus, another advantage of your ‘consensus ASI’ approach—which is essentially a values handshake—over the deal types outlined by OP is that a combined US-China presents a unified front if and when third-party alien civilizations are encountered.
(A ‘unified front’ is an advantage if military power, and thus bargaining power, scales superlinearly in the deep future, which seems >50% likely to me.)
MichaelDickens @ 2026-03-16T15:47 (+2)
If the US and China are in a state where they're willing to cooperate on ASI, I would much prefer that they agree not to build ASI (until there's a broad consensus that we know how to make it safely).
If they agree to that, and we do eventually figure out how to build aligned ASI, then it would be good to have a global agreement on what that ASI should do. But if we're going to work today toward some sort of international cooperation on ASI, then the objective of that cooperation should be not building ASI.
Owen Cotton-Barratt @ 2026-03-13T21:48 (+10)
You discuss the idea of clauses that allow for later escape from poorly-conceived deals as a guardrail. This feels like a powerful possibility which might add a significant amount of robustness.
But I'm wondering if the idea might be more broadly applicable than that. If we have the kind of machinery that allows us to add that kind of clause, maybe we could use it for the whole essence of the deal? Rather than specify up front what you wish to exchange, just specify the general principles of exchange -- and trust the smarter and wiser actors of the future to interpret it in a fair and benevolent manner.
In general, reading this article, I find I have some sympathy for the central claim that there could be useful deals to strike early (that it isn't possible to strike later); however, I feel quite sceptical of the frameworks for thinking about different types of deals etc. -- I don't see why we should think that we have done more here than scrape the surface of the universe of possibilities, and my best guess is that actually-wise deals would look quite different from anything you're outlining. Curious what you make of this -- does this feel too radically sceptical or something?
damc4 @ 2026-03-15T23:01 (+1)
I actually proposed a similar idea in a few places months ago (links below). I plan to post about related topics; for example, my future posts might help explain how such deals could be enforced, so if you are interested, follow me.
https://forum.effectivealtruism.org/posts/3rDfScbBNhbsk93gF/how-to-stop-inequality-from-growing