Longtermism and Global AI Governance: Building Institutional Readiness in the Global South
By Adebayo Mubarak @ 2025-10-01T23:54 (+13)
This is an entry for the Essays on Longtermism Competition.
Introduction
When I first encountered longtermism through What We Owe the Future and later in discussions within the EA community in Nigeria, what struck me most was not the distant timelines or abstract population ethics debates. It was the simple but profound claim that our actions today can meaningfully shape the trajectory of generations we will never meet. This insight felt especially resonant coming from a context where institutions often struggle to plan even a decade ahead, let alone a century.
The recently published Essays on Longtermism: Present Action for the Distant Future captures this challenge in a rigorous academic way. One chapter in particular, Jacob Barrett and Andreas T. Schmidt's "Longtermist Political Philosophy: An Agenda for Future Research," asks us to consider what kinds of political and institutional structures are best suited to safeguard the long-term future. Their questions about legitimacy, representation, and feasibility are pressing, but they are also, in some ways, incomplete.
While much of the discussion of longtermist priorities is rooted in American and European institutions, there is a pressing need to expand this analysis to the Global South. Transformative technologies such as artificial intelligence (AI) will not only affect powerful states but will reshape societies everywhere. For longtermism to succeed in practice, its institutional grounding must be global, inclusive, and capable of addressing asymmetries of power in technology governance.
This essay responds to Barrett and Schmidt's chapter by extending their framework into the context of AI governance in the Global South. I argue that building institutional readiness for AI in low- and middle-income countries is an essential component of longtermist action, one that has been comparatively underexplored but holds significant implications for humanity's long-term trajectory.
Longtermism and the Challenge of AI Governance
Barrett and Schmidt note that longtermism demands serious engagement with institutional questions: what kinds of political structures are best suited to safeguard the long-term future? Their analysis highlights issues such as legitimacy, intergenerational representation, and feasibility. Applied to AI governance, these questions become even more urgent, as the rapid development of advanced AI could dramatically reshape global power structures and human welfare.
From a longtermist perspective, the stakes are unusually high. AI carries both extraordinary opportunities (scientific breakthroughs, economic growth, enhanced decision-making) and catastrophic risks (misalignment, misuse, disempowerment of human agency). The path we take now will reverberate through centuries, perhaps millennia. Thus, designing inclusive governance frameworks today is not merely desirable but morally required.
Why the Global South Matters
Much of the discourse on AI safety and governance takes place in advanced economies such as the U.S., the EU, and increasingly China. Yet the Global South, where the majority of humanity resides, faces unique vulnerabilities:
- Asymmetric impact: AI-driven global economic shifts may widen inequality, leaving many countries dependent on external actors. Even in developed economies, AI's reach is nontrivial: the IMF estimates that about 60% of jobs in advanced economies are "exposed" to AI, meaning that at least some portion of their task mix could be affected.
- Institutional fragility: Weaker governance structures may make societies more susceptible to harmful uses of AI, from disinformation to authoritarian surveillance. Under some scenarios, global automation could displace 400 to 800 million workers by 2030, a shock that fragile institutions are poorly positioned to absorb.
- Demographic significance: Africa and South Asia will be home to the majority of the world’s population in the future, meaning their long-term flourishing is central to the global longtermist project. By 2050, Africa’s population is projected to reach close to 2.5 billion, making up more than 25% of the world’s population.
Ignoring the Global South in AI governance frameworks risks entrenching a two-tier future: one where technological benefits accrue to a minority, while risks disproportionately affect the majority. Longtermism, properly understood, cannot allow this outcome.
Pathways to Institutional Readiness
How might longtermism inform practical action in building institutional readiness in the Global South? Here are three pathways I believe longtermists should take seriously:
Capacity Building for AI Policy
Investments in education, training, and fellowship programs focused on AI governance can empower policymakers in low- and middle-income countries to meaningfully engage in global debates. Just as longtermism has emphasized epistemic humility and forecasting, so too must AI governance incorporate diverse perspectives to improve decision-making quality.
Embedding Longtermist Principles in Regional Institutions
Regional organizations such as the African Union, ECOWAS, and ASEAN can serve as testing grounds for embedding intergenerational ethics into policy frameworks. For instance, chartering advisory councils dedicated to long-term technological risks could institutionalize foresight capacities in a way aligned with longtermist values. One might object: aren't these organizations already overburdened with urgent crises? Yes, but this is where longtermism's value becomes clear. Many near-term crises, such as climate adaptation, food security, and technological inequality, are exactly the ones that longtermist frameworks can help navigate, because they demand thinking beyond election cycles or donor funding rounds.
Inclusive Global Governance
Global initiatives on AI standards, safety, and coordination must not remain exclusive clubs of technologically dominant nations. Longtermists can advocate for inclusive decision-making structures, ensuring that states representing the majority of humanity have a voice in shaping the trajectory of transformative AI. It is not enough for powerful states to invite token representation. We need genuine participation, where perspectives from Lagos, Nairobi, or Dhaka shape the norms and standards that will govern transformative technologies. From a forecasting perspective, diverse viewpoints improve our models of the future; narrow epistemic circles miss crucial risks.
A Case for Longtermist Action Today
Some critics of longtermism argue that its focus on the distant future risks neglecting present injustices. A related objection holds that focusing on the Global South dilutes scarce longtermist resources: shouldn't we prioritize where the cutting-edge labs are, in Silicon Valley, London, or Shenzhen?
This objection has force, but I think it misunderstands the point. Longtermism is not only about mitigating the risks of frontier technologies; it is also about shaping the global distribution of resilience. If a misaligned AI emerges tomorrow, yes, the labs matter most. But if AI transforms society gradually, which many experts think is more likely, then the question of who benefits, who loses, and who governs becomes just as existential.
Building institutional readiness in the Global South is precisely this type of "dual-use" action: it addresses present inequities while hedging against long-term risks, which strengthens longtermism's practical appeal.
Conclusion
Reading Essays on Longtermism made me realize how much more work remains to connect the lofty philosophical arguments to the messy realities of global politics. Barrett and Schmidt ask: what political structures can represent the interests of future generations? My answer, building on their framework, is that those structures must not be confined to wealthy democracies.
The distant future will not be decided in Oxford seminar rooms or Palo Alto boardrooms alone. It will be decided in Abuja, Dhaka, São Paulo, and Nairobi, in the choices that billions of people make, and in the institutions that guide them. If longtermism is to succeed, it must expand its scope to include these voices, these contexts, and these futures.
For me, as someone working on community building in Nigeria and engaging with questions of AI safety, this is not an abstract claim. We already see glimpses of both promise and peril in how technologies are deployed here. The question is whether we will have the foresight, and the courage, to prepare our institutions now.
Longtermism asks us to take responsibility not just for the present, but for the arc of history that stretches beyond us. And if that responsibility is real, then the Global South must be part of the story, not as an afterthought, but as a cornerstone of humanity’s shared future.