AI as a Constitutional Moment

By atb @ 2025-05-28T15:40 (+17)

My aim here is to communicate a worldview, not to defend it. I find this worldview provides a helpful lens through which to view the impacts of AI, but I'll leave it to my reader to determine whether they too find it helpful.

1        Aliens and Constitutions

On an alternative Earth, the aliens land in 2025. Or rather, they crash. There are ten million of them, refugees fleeing a distant planet. They have no way to leave Earth now they have arrived.

The people of this alternative Earth face a challenge. The humans and aliens are somewhat alike; coexistence is possible. Yet the humans and aliens are also radically different; their needs differ; their values differ; their capabilities are not the same. Together, the humans and aliens must construct a flourishing civilisation in which they can live together.

The people of the alternative Earth face a constitutional moment: a moment in which the fundamental structures of society need to be radically renegotiated to navigate a changed world.

2        Normative Construction

Humans are constitutional creatures.[1] We're constantly engaged in reflection on how we can live well as a community, and there are always some who challenge the very foundations on which our communities rest. We engage in such reflections when it comes to communities of friends, of colleagues, and of citizens, to choose just a few examples. This is politics-as-normal. Against this background hum, from time to time, we find ourselves facing a constitutional moment, in which more radical renegotiation is needed.[2]

Sometimes, these moments are explicitly recognised as such. Think of the United States and its constitutional conventions. At other times, it might be that the moment goes unrecognised but occurs even so. Perhaps this was the case for the first sedentary settlements, when people faced new questions about communal life.

In these constitutional moments, we renegotiate the normative structures of society. These structures include the implicit and explicit norms that guide us both in acting and in evaluating actions. For example, norms guide how we ought to treat those we love, those we work with, and those we share a nation with. Normative structures also include the formal institutions, like legal and political institutions, that shape our society. And these structures include our very notions of individual and societal flourishing, which specify what it looks like to live well together. These too are up for reflection, challenge, and renegotiation.

A constitutional moment is not a mere epistemic challenge: it's not simply about learning the true nature of flourishing and learning what best promotes this in a changed world. Instead, a constitutional moment is a deliberative challenge: together, we need to engage in a process of reflection, discussion, and ultimately creation of new frameworks for society. Normative structures are constructed rather than merely discovered.

This raises an obvious question: isn't morality fundamentally what matters and isn't there a truth to be discovered about morality?

Six days of seven, I'm a moral anti-realist: I don't believe there's any interesting sense in which normative truths are written into the universe.[3] On these days, my primary normative framework is political rather than ethical. Instead of asking what normative theory is true, I ask the practical question of how we can build a society in which we live well together. "Live well" not in some objective sense that is granted to us but in a sense that we ourselves construct. For six days a week, I believe we must determine how to live well by the only lights we have (our own), and we must do so despite enduring disagreement about the very notion of flourishing (more on disagreement shortly).

What about the seventh day, when I feel the force of moral realism? Even on this day, construction seems to me important. For a start, morality might not lay out a full shape for society, and where it falls silent, we again face the practical question of which of the permitted ways of living together to adopt. Here, we face a challenge of coordination that can be addressed in deliberation. Further, and in any case, we must engage in a constructive process because this is what makes the values our own. We should not aspire to have moral values imposed upon humanity by external force, even if we might have reached these very values in a suitable process of deliberation (had we undertaken it). Such imposition would itself be a cause of suffering and of alienation from society. Instead of imposition, we should want humanity to construct values in accordance with morality, so that these values, and the norms that support them, live in the minds of those who enact and are subject to them.[4]

Seven days of seven, I believe a constitutional moment requires construction and not mere discovery.[5]

3        Clarifying Construction

There are two things to clarify about construction of normative structures.

Constraints

If realism is true then morality might constrain both how the process of construction should proceed and what endpoint should be reached. Yet even on an anti-realist view, there are constraints that limit how normative structures should be constructed.

In part, this is because our current structures can themselves constrain the process of renegotiation.[6] Indeed, not only can this be the case, but it almost inevitably will be: normative structures are likely to have a great deal to say about the appropriate shape of societal deliberation and debate. They're also likely to treat some parts of the structure as fixed, or something close to this.

Further, the process of normative construction is directed: the goal is to figure out how to live well together, in a way that allows both individual and societal flourishing. While the nature of flourishing is itself up for renegotiation along with everything else, it remains possible to miss the target we set ourselves in our normative construction. In this case, the process of construction has failed and further renegotiation will be needed.

So when the constructive process does succeed, the structures are, in an important sense, self-fulfilling: in the process of creating them, we also come to endorse them. However, this does not imply that success is automatic or easy. Even on an anti-realist view, we are constrained both in terms of how we can renegotiate normative structures and in terms of what endpoints can be permissibly reached.

This is my first clarification.

Difference

The challenge of normative construction—during politics-as-normal or in constitutional moments—is to figure out how to live well together despite disagreement and difference. So it should be no surprise that in constructing normative structures, we should not aspire to create homogeneity.[7] Instead, we should construct structures that allow for a society of mutual flourishing given the existence of difference.

In part, this is because difference is unavoidable in a world where humans and AI coexist, especially if we rule out dystopic measures to control the makeup of the human population. So we must plan for difference, and difference might need to be reflected in the structures (equal consideration doesn't imply identical treatment). Further, difference isn't simply inevitable. It's also wonderful. A society of complete uniformity is a society that has radically failed to flourish. If we negotiate our way into normative structures that ensure homogeneity, then we have constructed a path away from societal flourishing.[8]

But isn't the nature of flourishing itself up for renegotiation such that we could negotiate our way to homogeneity-as-flourishing? Even if this were so, we would need to actually engage in such renegotiation for it to change the landscape here; unless we do so, we continue to operate within structures that value difference. In any case, above I noted that our normative structures themselves constrain what renegotiations are permitted, and a respect for difference might well be something that we're required to hold fixed. Even if not, then for me at least, it would be desirable to negotiate our way towards such fixity.[9]

But what about abhorrent worldviews that value slavery or genocide or repression of others? Is it really good to respect all difference? No, it's not. For a start, even if difference is valuable, it's surely not the only valuable thing and we must trade off the promotion of difference against costs to autonomy and wellbeing and other things besides. At least sometimes, the value of difference won't win out.

But we needn't concede even this much. In negotiating normative structures, the aim isn't to develop a simple normative theory that we could scrawl on a napkin while sitting in a bar. Instead, normative structures are nuanced and rich. We can value some forms of difference without valuing all difference. We can place constraints on reasonable values and then celebrate variation within these constraints.

This is my second clarification. I now return to the main thread.

4        AIs and Constitutions

On an alternative Earth some alternative Earthlings face a constitutional moment. On our Earth too, we face such a moment. Aliens have not landed, but AI has. Even now, we are beginning to share the world with a variety of artificial cognitive processes that are unlike our own. Further changes are coming.[10]

What makes this a constitutional moment? AI presents such a moment because it could bring about changes that are both radical and rapid.

As to radicalness, we need merely note that AI will have impacts across a broad range of domains. It will impact employment, health care, education, and war, among other things. Such breadth of impact will mean a substantial shift in society, which will require a substantial change in the normative structures that underpin it.

Further, AIs will come to possess, at some time or another, robust, independent agency: they will act flexibly, over long time horizons, in messy real world environments, in pursuit of complex goals. The need for society to accommodate a new class of agents will require that we radically reimagine our normative structures.[11] How should we relate to these agents? What laws, if any, should we apply to them? How do AI agents change the relationship between humans?

More speculatively, AI might at some point develop consciousness, or otherwise possess the grounds for moral status. The need to expand our moral community to encompass an alien class of being would be a radical moment of renegotiation indeed. What voting rights should such beings have? May we create AIs with moral status who exist to serve our whims? What constitutes wellbeing for artificial agents?

So, one reason AI drives a constitutional moment: radicalness.

As to rapidity, there are reasons to think that AI could diffuse into the world unusually quickly by the standards of technological innovations. For a start, the infrastructure that underpins AI, primarily data centers, requires only localised changes to the physical world. It does not require, for example, that each company construct new factories before utilising AI (as was required in the move from steam to electricity). So we might expect diffusion to be faster than it would be were it necessary to wait for each company to engage in large-scale infrastructure projects.

Further, for AI to have impacts, it doesn't need to be adopted by lumbering bureaucracies that are constrained by onerous regulations. Instead, impacts can be felt when individuals independently choose to make use of AI. For example, AI is already having impacts in university education because students and faculty are making use of it, regardless of whether it has been integrated into broader university structures.

There's also the more dramatic possibility that as AI becomes capable of assisting with AI R&D, we could see a speedup in AI progress. At its most extreme, this could lead to an intelligence explosion, a period during which we see rapid improvement in AI capabilities as AI helps us to develop better AI, which helps us to develop even better AI, and so on. If this happens, AI's impacts could occur with a rapidity that is unprecedented in human history.

So, two reasons AI drives a constitutional moment: radicalness and rapidity.

In addition to these high-level considerations, there's also a concrete reason to think AI could undermine our current social contract: it could undermine distributed power structures. As things stand, the decisions that shape society are dispersed across many people, because millions of people are needed to interpret and implement decisions from the top. People are also empowered because their labour underpins a nation's economic and military might. Of course, this doesn't stop wild abuses of power and domination of individuals. Nevertheless, there is empowerment here and some constraint is placed on the actions of the powerful.

However, AI might reduce reliance on human labour for military and economic might, as many tasks now completed by humans become automated away and as militaries increasingly rely on autonomous or semi-autonomous systems.[12] And AI might reduce the need for a thousand tiny human decisions to be made to implement orders from the top. Overall, AI could undermine distributed power, concentrating power in the hands of those with control of AI itself.

If this impact were dramatic enough, humanity as a whole could experience gradual disempowerment and we could find ourselves facing an intelligence curse. Even if these more extreme outcomes were avoided, a meaningful degradation of historic grounds of distributed power would require renegotiation of normative structures in order to support widespread flourishing.

So, three reasons AI drives a constitutional moment: radicalness, rapidity, and degradation of distributed power.

Overall, I'm not suggesting that the frame of AI-as-constitutional-moment is the only frame that can be fruitfully used to reflect on AI. But I am suggesting it's one fruitful frame. When we think about AI, we should at least sometimes think about AI as a constitutional moment.[13]

Reading List

I wrote this post without reading much background material, both because I wanted to get my own views down before they were influenced by the thoughts of others and because I lacked the time to do a more thorough job. Nevertheless, much has been written on related topics, and people who have read drafts of this post have provided various reading recommendations. Here are the suggestions I've received thus far. I'd welcome other suggestions.

(There are also, of course, the various things that I linked in the text itself.)

Acknowledgments & Non-Acknowledgments

For feedback and discussion on this post or others in the sequence, thanks to Rose Hadshar, Andreas Mogensen, Teru Thomas, Lizka Vaintrob, and participants at a Forethought research seminar. As to prior influences, I'm afraid I don't remember how this frame got into my head, though I imagine many people deserve acknowledgement for whatever elements are plausible.

  1. ^

     See, for example, Rorty's Pragmatism as Anti-Authoritarianism. I prefer to think of humanity as constitutional, rather than political, beings, because I want to emphasise our eagerness to engage not just with everyday politics but also with more foundational questions about the communities we belong to.

  2. ^

     These constitutional moments are momentary only in a loose sense. They represent an uptick against a background trend, but they might still unfold over years, decades, or even centuries. These constitutional moments are often precipitated by external changes, like new technologies or encounters with a new civilisation, though this need not be the case.

I use the word "constitutional" to emphasise the fact that in these moments we're renegotiating deep foundations of society, rather than merely skimming the sociopolitical surface. Unfortunately, other connotations of this word are misleading, insofar as they push us to think about explicit documents, written by a political elite, which are intended to be resilient against change. In contrast, constitutional moments might be implicit rather than explicit, distributed rather than centralised, and at least some of the normative structures negotiated in these moments will later continue to evolve.

  3. ^

     This is not to say that morality is unimportant. I deeply value compassion, autonomy, honesty, human rights, and the flourishing of others. I condemn cruelty and mourn indifference. Morality’s meaning runs deep. I simply believe that we create this deep meaning ourselves, rather than it being written into the stars or the spaces between. Morality is, in my view, a human cultural construct, but one to which I'm deeply committed.

    Often, little rests on whether this is right. What I call successful moral construction another might call moral discovery, but labels aside, the processes might look much the same. However, my anti-realism has various impacts on my views. (1) It makes me more concerned about the possibility of society as a whole outsourcing moral deliberation (say, by setting AI the task of discovering the moral truth and then deferring to whatever is discovered), as this would bypass the crucial constructive process by which the values come into being. (2) It makes me more (though far from infinitely) pluralist about values, because there's no unitary truth that all should aspire to. (3) It makes me more concerned about the imposition of values that are alien to our own (as, say, might happen if we tried to align AI with some moral theory and then set it to the task of optimising the world by the lights of that theory). If moral realism were true then we might be very mistaken about morality and so might have to accept a world that looks abhorrent by the light of our values, but such a perspective is far less compelling according to anti-realism. (4) It leads me to focus more on the constructive process itself (relative to the bottom-line values) than I otherwise would.

    It's worth noting that one could accept these claims as a realist or reject them as an anti-realist. Still, given my own views about realism and anti-realism, my anti-realist views push me in the stated direction. I expect that adopting anti-realism might have similar impacts on at least some others.

  4. ^

     Exactly how much imposition matters depends on the nature of the moral truth. On my moral-realist days, I suspect that morality severely constrains the permissibility of imposing complete value systems on civilisation (but permits less extreme impositions, especially those that protect and promote core needs). However, things might look rather different on the moral theories that some readers find most compelling.

  5. ^

     Partly for this reason, when people talk about "AI for philosophy" I feel unexcited by an interpretation that involves AI simply going out and discovering the normative truth. I feel more excited by an interpretation that involves AI playing the role of the academic philosopher: doing research, yes, but also helping "students" to reflect and deliberate on their moral views both individually and as a group. From my perspective, the desirable form of "AI for philosophy" is continuous with "AI for deliberation" and "AI for cooperation".

That said, in negotiating normative structures, we often decide to defer to others (e.g. experts), both with respect to determining what's true in some domains and with respect to making some decisions that shape society. Along similar lines, we could negotiate normative structures that allow deferral to AI in some cases; then, on the constructive view, AI making discoveries or decisions needn't involve any sort of problematic imposition.

  6. ^

     One might, of course, challenge these aspects of our current structures. Often this challenge might itself be allowed by the structures, especially because one element of existing structures can often be used to challenge a different element. That said, structures might sometimes more deeply constrain what renegotiation is possible. In that case, no internal challenge to certain components of the structure will be possible.

  7. ^

     A certain amount of consensus might be required for it to make sense to talk of society having a single (though perhaps pluralistic) normative structure. If there's a sufficient breakdown in consensus then society might instead be said to have a set of interacting normative structures. If things break down still further, there might be said to be no normative structure at all (and plausibly, at that point, no society either).

  8. ^

     One could deny this in various ways. For example, one might think that as different beings deliberate carefully about matters, they will converge on a single set of values (so difference isn't inevitable). Or one might think that the moral truth sets a standard that we should aspire to homogenous uptake of (so difference isn't wonderful). I'm sceptical of the former line; this seems to me to be a strong, and implausible, empirical claim (though I think the possibility calls for deeper discussion than I can hope to engage in here). I'm sceptical of the latter line because, as already noted, I tend towards accepting anti-realism. In any case, for those more tempted by these lines, I think it's still plausible that we should value difference for some time to come, during the period before convergence and while the moral truth is still unknown.

  9. ^

     In practice, I think our normative discussions are often both assertions of beliefs about the nature of current structures and attempts to maintain or renegotiate these structures. In this domain, I don't think the epistemic and the deliberative can be neatly separated. So when I endorse the value of difference, I take myself to be both asserting something about current structures and contributing to deliberation about the evolution of those structures.

  10. ^

I'm more conservative in my timelines than you might expect, given the nature of this post. For example, I'd be extremely surprised if AI could, by 2030, carry out every intellectual task as competently as an expert human. That is, I'd be surprised if we had AGI by then.

    Still, I think this unlikely possibility is worth preparing for even so, given what would be at stake if it came about. Furthermore, if we have longer than this to prepare for AGI then I think the appropriate reaction is gratitude; it's 2025 now and five years to prepare is surely not enough. So I don't think we should squander a longer timeline by refusing to consider the matter until the crisis is nearly upon us. Finally, AI can present a constitutional moment in the coming years and decades even in the absence of AGI. The case for preparing for AI as a constitutional moment is overdetermined.

  11. ^

In Leviathan, Hobbes's account of the state is built on the foundation of human similarity in capabilities (see chapter 13). I think this reflects something important: to this point, humanity has developed normative structures in a context where most members of society have relatively similar capabilities. It follows that, as AI outstrips human capabilities in ever more domains, we will face a novel challenge that's unlikely to be well addressed by current structures.

  12. ^

It's important not to oversimplify here. For example, if AI automates some tasks in a way that complements humans, this might increase the economic value of human labour and hence increase the power of human workers. Still, it would be a mistake to assume that automation must occur in a way that consistently empowers humans, and the disempowering paths must be taken seriously.

  13. ^

     The framing of AI as a constitutional moment might seem to suggest that we need to decide, all at once, how to integrate AI with society. Contra this, I expect it will actually take years, decades, or longer to navigate this acute constitutional moment (and even then our structures will continue to develop with a frame of politics-as-normal). What we'll need to do quickly is act so as to enable this ongoing deliberation to proceed fruitfully.


SummaryBot @ 2025-05-28T21:29 (+1)

Executive summary: This exploratory post argues that AI’s transformative potential constitutes a “constitutional moment”—a period demanding radical renegotiation of the normative structures that shape society—emphasizing that such renegotiation should be a constructive, pluralist, and inclusive process rather than a mere discovery of moral truths.

Key points:

  1. AI as a constitutional moment: The author likens the advent of AI to the arrival of aliens—an event so radical and rapid in its societal implications that it requires rethinking foundational societal norms and structures.
  2. Normative structures are constructed, not discovered: Drawing from a largely anti-realist stance, the post asserts that moral and political norms emerge from deliberative processes rather than objective truths, and should reflect the values of those who live under them.
  3. Constraints and pluralism in construction: Even within a constructivist view, normative renegotiation is constrained by existing structures and the goal of promoting human and societal flourishing; successful renegotiation must accommodate enduring differences without endorsing all forms of variation.
  4. AI disrupts distributed power: As AI increasingly replaces human labor and decision-making, it could erode the historical distribution of power among people, potentially leading to widespread disempowerment if new structures aren't thoughtfully built.
  5. Urgency without finality: Although this constitutional moment may unfold over decades, early decisions are critical to enabling inclusive and thoughtful long-term deliberation.
  6. The author’s intent: Rather than defend the worldview presented, the author aims to offer it as a lens for understanding AI’s societal impact, hoping others find it generative or at least worthy of reflection.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.