Convergence 2024 Impact Review

By David_Kristoffersson, Gwyn Glasser, 2025-03-24

See also the Convergence 2024 Impact Review home page.

Impact overview

2024 marked the first full year of the new 9-person Convergence Analysis team. This year we published 20 articles on understanding and governing transformative AI (TAI). Our research influenced regulatory frameworks internationally: in the US, we provided consultation to the Bureau of Industry and Security that directly informed its proposed rule on reporting requirements for dual-use AI, and in the EU, we saw specific recommendations incorporated into the EU AI Act GPAI Code of Practice. We led expert field-building around AI's economic impacts through the Threshold 2030 conference, and around AI scenario modeling via the AI Scenarios Network. Our work reached mainstream media and universities, and drew over 184,000 views on social platforms.
We organized our activities into three programs: AI Clarity, AI Governance, and AI Awareness.

1. AI Clarity: performing AI Scenario Planning

2. AI Governance: producing concrete AI policy recommendations

3. AI Awareness: raising public awareness of AI risks

Convergence’s mission

Our mission is to design a safe and flourishing future for humanity in a world with transformative AI. We consider this a sociotechnical problem: in addition to addressing the technical considerations of AI, governing institutions and the public need to be involved in solving it. Our work, following our theory of change, cuts across three interrelated programs.

List of outputs

AI Clarity:

AI Governance:

AI Awareness:

Outcomes and impacts in more detail

AI Clarity

The AI Clarity program explores future scenarios and evaluates strategies for mitigating AI risks. In 2024, AI Clarity projects (1) addressed gaps in foundational knowledge around AI scenario modeling and gathered its practitioners together, (2) formalized theories of victory for AI safety work, (3) analyzed the state of expert consensus on timelines to AGI, and (4) hosted Threshold 2030, an AGI economics field-building conference, building on our prior work in AI scenarios. Beyond general field-building in AI safety and governance, we are seeding and coordinating specific high-value fields of inquiry, especially through our work on AI Scenario Planning and AGI Economics.

AI Scenario Planning

Our Scenario Planning work addressed neglected challenges that traditional AI forecasting methods struggle with, presenting a complementary approach to forecasting that supports decision-makers preparing for an uncertain future. Our field-building work established the AI Scenarios Network of 30+ researchers across organizations and produced several publications, listed below. This research also directly formed the basis for our Theory of Victory work and our broader research agenda, including the paper AI Emergency Preparedness, written with external collaborators, and the Threshold 2030 conference.

Key Outputs:

Theories of Victory

The lack of clearly defined success criteria for AI governance makes long-term strategic planning difficult. In 2024, we highlighted the absence of stated theories of victory in AI governance and examined practical preparedness for best- and worst-case scenarios globally. Our work on Theories of Victory and Emergency Preparedness was well-received in the research community, drawing strong positive feedback from peers and good engagement on the EA Forum and SSRN. This reception led to a follow-up post, Analysis of Global AI Governance Strategies, written in collaboration with Sammy Martin (Polaris Ventures). AI Emergency Preparedness was also presented at AAAI's 39th Conference.

Key Outputs:

AGI Economics

Together with Metaculus and FLI, we hosted the Threshold 2030 conference in Boston (October 2024) to study the economic impacts of near-term TAI. The conference developed practical methods for forecasting AI's economic impacts and mapped areas of expert consensus and disagreement. This work established new research priorities and cross-organizational collaborations that are now informing new projects at Convergence and partner organizations. A 200-page report on the conference findings was published in February 2025.

Key Outputs:


AI Timelines

We evaluated forecasts, models, and arguments for and against short TAI timelines, and made technical approaches more accessible to researchers and policymakers, ultimately providing further grounds for taking short TAI timelines seriously.

Key Outputs:

AI Governance

The AI Governance program develops policy recommendations on critical and neglected issues in AI governance and evaluates existing approaches. Our governance work in 2024 produced foundational research into AI governance frameworks as well as specific policy recommendations.

Technical Controls & Infrastructure

We developed foundational regulatory tools for frontier AI oversight using registration systems and technical attribution mechanisms. Our technical control frameworks directly influenced policy development in multiple jurisdictions:

The Training Data Attribution report was based on research originally commissioned by FLF, who gave highly positive feedback on the work and expressed strong interest in future partnerships.

Key Outputs:

National Policy Frameworks

Our 2024 research examined emerging approaches to national AI governance in the US, China, and the EU, and our national policy frameworks gained good traction in both academic and policy spheres. Soft Nationalization was used by researchers at the US AI Safety Institute, the Harvard AI Student Team, and LawAI, and led to us giving a presentation on the topic at Harvard. The State of the AI Regulatory Landscape report had the highest readership of our 2024 publications and was integrated into BlueDot Impact's AI governance curriculum. This analysis also identified model registration as a neglected area of research, directly informing our subsequent report on the topic.

Key Outputs:

Strategic Governance Research

Our publications in this area explored international coordination, public administration, and the power dynamics between public and private actors. We also led the publication of The Oxford Handbook of AI Governance, totaling 50 chapters by 75 leading contributors, including Anthony Aguirre, Anton Korinek, Allan Dafoe, Ben Garfinkel, and Jack Clark. Work on the handbook began in 2020, and it has shaped a number of early conversations about AI governance.

Key Outputs:

AI Awareness

The AI Awareness program works to increase public understanding of AI risks and how to address them, through books, teaching, and media engagement. In 2024, our work to raise public awareness of AI safety reached major platforms, with coverage across 10 leading media outlets, including Politico, Forbes, and CBS. Building a God received early feature coverage from Forbes Books, with additional major-outlet features confirmed. We produced 23 episodes of the podcast All Thinks Considered, featuring leading thinkers in AI and societal betterment, with content generating over 184,000 views on TikTok. We also led two courses at Toronto Metropolitan University, delivered multiple lectures on 'AI and the Future of Humanity,' and gained 200+ newsletter subscribers.

Key Outputs:

Operations

Convergence is an international AI x-risk strategy think tank, spanning the UK, US, Canada, and Portugal. In 2024, we expanded our team from 8 to 9 members, with one staff member leaving and two joining: Harry Day, our first COO, departed, and Michael Keough took up the mantle to lead Operations, while Gwyn Glasser joined as a new Researcher.

Convergence is funded by individual philanthropists and granting bodies concerned about x-risk, such as FLI and SFF.

2024 budget

2024 Budget: $950k.

Funds raised in 2024: $800k.

2025 budget

2025 budget projection: $875k.

Funds raised in Jan-Feb 2025: $200k.

2025: January and February outcomes

As this impact review is being published in March 2025, we will also outline here the major works we released in January and February 2025:

2025: Ongoing initiatives

Continuing into 2025, our largest current initiative is The AGI Social Contract; other ongoing initiatives include AI and International Security, AGI is Near, the AI Scenarios Network, and AI Awareness:

Funding gaps and opportunities

Convergence's 2025 budget is $875,000, with funding set to run out in June 2025. We need $440,000 to continue operations through year-end and another $440,000 to build a six-month reserve.
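(For context, each $440,000 figure corresponds to roughly six months of operations at our 2025 budget, assuming roughly even spend across the year: $875,000 / 12 ≈ $73,000 per month, and 6 × $73,000 ≈ $440,000.)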

Beyond these immediate needs, we see some strong opportunities for growing our impact with additional team members:

  1. Communications Director: Our team produces research efficiently, and a dedicated communications specialist would amplify the impact of that research.
  2. Fundraising Specialist: Recruiting a fundraising specialist would let our other staff focus on core research activities and would diversify our funding sources beyond traditional x-risk/EA funders. We expect such a hire to recover its own cost and more.
  3. Expanded operations team: Our current ops team is very small, and a single hire here would unlock more productive hours across the entire organization.
  4. Expanded research team: We are confident we can effectively double our team size, allowing us to cover more neglected research areas at greater depth.

Three funding scenarios

Below are three simplified funding scenarios: (1) Base, for sustaining operations; (2) Moderate Growth, for growing the team by 50%; and (3) Strategic Growth, for more than doubling the team size.

Base: $880,000

This baseline funding would enable Convergence to maintain our current team of 9 members for an additional 12 months beyond our current runway, into July 2026.

Moderate Growth: $1,850,000

With this increased funding, Convergence would add 5 team members and extend our runway into January 2027. This scenario represents a balanced near-term growth trajectory:

Strategic Growth: $4,150,000

This scenario would position Convergence to scale sustainably, adding 11 team members, more than doubling our current size, and extending our runway into January 2028.

The case for funding Convergence

In 2024, our small team achieved significant impact on AI safety through field-defining work, regulatory influence, and cross-sector engagement, on an annual budget of $950k. In 2025, we are improving project prioritization, outreach, and efficiency to further boost our impact, and as of March 2025, we are off to a strong start to the year with five major publications released. With additional resources to address our funding gaps and opportunities, we believe we can significantly scale up our impact. If you are interested in supporting our projects, please get in touch at funding@convergenceanalysis.org. We accept major credit cards, PayPal, Venmo, bank transfers, cryptocurrency donations, and stock transfers.

Conclusion

In 2024, we launched a new research institute, starting the year with a new team of 8. As a research institute for x-risk reduction and future flourishing, we set out to solve some hard challenges: Can we combine efficiency with deep intellectual research? Big-picture research with actionable research? Open academic inquiry with the focus of a startup? And, in the end, how do we have a positive impact on x-risk? With the outcomes of the past year, we think we've had some very promising successes.

In 2025, we are continuing our work on The AGI Social Contract and other initiatives, orienting ourselves for a world rapidly approaching transformative AI and building further on our proven research model to make a greater positive impact.

Thank you to all our collaborators and supporters; we wouldn't be where we are without your help!

We are fundraising! Please get in touch if you are interested in supporting our work or in partnering with us.

Learn more here: