Creating 'Making God': a Feature Documentary on risks from AGI

By ConnorA @ 2025-04-15T14:14 (+21)

Donate to our Manifund (as of 14.04.25 we have two more days of donation matching up to $10,000). Email me at connor.axiotes or DM me on Twitter for feedback and questions.

Project summary:

  1. To create a cinematic, accessible, feature-length documentary. 'Making God' is an investigation into the controversial race toward artificial general intelligence (AGI).
  2. Our audience is largely non-technical, so we will give them a thorough grounding in recent advancements in AI before exploring the race toward the most consequential piece of technology ever created.
  3. Following in the footsteps of influential social documentaries like Blackfish, Seaspiracy, The Social Dilemma, and An Inconvenient Truth, our film will shine a light on the risks associated with the development of AGI.
  4. We are aiming for film festival acceptances, nominations, and wins, and to be streamed on the world’s biggest streaming platforms.
  5. This will give the non-technical public a strong grounding in the risks from a race to AGI. If successful, hundreds of millions of streaming service subscribers will be more informed about the risks and more likely to take action when a moment presents itself.

Rough narrative outline:

Our basic model for why this is needed:

***

Update [14.04.25]

1) Prof. Rose Chan Loui is the Founding Executive Director of the Lowell Milken Center on Philanthropy and Nonprofits at UCLA.

2) Prof. Ellen Aprill is Senior Scholar in Residence and taught Political Activities of Nonprofit Organizations at UCLA in 2024.

3) Holly Elmore is the Executive Director of Pause AI US.

4) Eli Lifland is the Founding Researcher at the AI Futures Project, and a top forecaster.

5) Heather-Rose is the Government Affairs Lead in LA for the labor union SAG-AFTRA.

Civil Society

Upcoming Interviews

  1. Cristina Criddle, Financial Times tech correspondent covering AI (she recently broke the Financial Times story about OpenAI giving days, rather than months, of safety testing for new models).
  2. David Duvenaud, former Anthropic team lead.
  3. John Sherman, Dads Against AI and podcaster.

Potential Interviews

  1. Jack Clark (we are in touch with the Anthropic press team).
  2. Gary Marcus (he said to get back to him in a couple of weeks).

Interviews We’d Love

  1. Kelsey Piper, Vox.
  2. Daniel Kokotajlo, formerly OpenAI.
  3. AI Lab employees.
  4. Lab whistleblowers.
  5. Civil society leaders.

Points to Note:

Project Goals:

  1. We are aiming for film festival acceptances, nominations, and wins, and to be streamed on the world’s biggest streaming platforms, like Netflix, Amazon Prime, and Apple TV+.
  2. To give the non-technical public a strong grounding in the risks from a race to AGI.
  3. If successful, hundreds of millions of streaming service subscribers will be more informed about the risks and more likely to take action when a moment may present itself.
  4. As timelines shorten, technical alignment bets look less likely to pay off in time, and international governance mechanisms seem to be breaking down. Our goal is therefore to influence public opinion on the risks so that the public might take political or social action before the arrival of AGI. If we do this right, we could have a high chance of moving the needle.

Some rough numbers:

How will this funding be used?

In order to seriously have a chance at being on streaming services, the production quality and entertainment value have to be high. As such, we would need the following funding over the next 3 months to create a product like this.

Accommodation [Total: £30,000]

Travel [Total: £13,500]

Equipment [Total: £41,000]

Production Crew (30 Days of Day Rate) [Total: £87,000]

Director (3 Months): [Total: £15,000]

Executive Producer (3 months): [Total: £15,000]

Miscellaneous [Total: £25,000] (to cover unforeseen costs, legal advice, insurance, and other practical necessities).

TOTAL: £226,500 ($293,046)

Who is on your team? What's your track record on similar projects?

Mike Narouei [Director]:

Watch Your Identity Isn’t Yours, which Mike filmed, produced, and edited while he was at Control AI.

Connor Axiotes [Executive Producer]:

Donate to our Manifund (as of 14.04.25 we have two more days of donation matching up to $10,000). Email me at connor.axiotes or DM me on Twitter for feedback and questions.


OscarD🔸 @ 2025-04-16T18:39 (+7)

Have you applied to LTFF? Seems like the sort of thing they would/should fund. @Linch @calebp if you have actually already evaluated this project I would be interested in your thoughts as would others I imagine! (Of course, if you decided not to fund it, I'm not saying the rest of us should defer to you, but it would be interesting to know and take into account.)

calebp @ 2025-04-16T21:47 (+7)

Given that they've made a public Manifund application, it seems fine to share that there has been quite a lot of discussion about this project on the LTFF internally. I don't think we are in a great place to share our impressions right now, but if Connor would like me to, I'd be happy to share some of my takes in a personal capacity.

ConnorA @ 2025-04-17T01:14 (+3)

Hey! Thanks for the comments. I’d be super happy to hear your personal takes, Caleb!

calebp @ 2025-04-17T01:36 (+6)

Some quick takes in a personal capacity:

  • I agree that a good documentary about AI risk could be very valuable. I'm excited about broad AI risk outreach, and few others seem to be stepping up. The proposal seems ambitious and exciting.
  • I suspect that a misleading documentary would be mildly net-negative, and it's easy to be misleading. So far, a significant fraction of public communications from the AI safety community has been fairly misleading (definitely not all—there is some great work out there as well).
  • In particular, equivocating between harms like deepfakes and GCRs seems pretty bad. I think it's fine to mention non-catastrophic harms, but often, the benefits of AI systems seem likely to dwarf them. More cooperative (and, in my view, effective) discourse should try to mention the upsides and transparently point to the scale of different harms.
  • In the past, team members have worked on (or at least in the same organisation as) comms efforts that seemed low integrity and fairly net-negative to me (e.g., some of their work on deepfakes, and adversarial mobile billboards around the UK AI Safety Summit). Idk if these specific team members were involved in those efforts.
  • The team seems very agentic and more likely to succeed than most "field-building" AIS teams.
  • Their plan seems pretty good to me (though I am not an expert in the area). I'm pretty into people just trying things. Seems like there are too few similar efforts, and like we could regret not making more stuff like this happen, particularly if your timelines are short.


I'm a bit confused. Some donors should be very excited about this, and others should be much more on the fence or think it's somewhat net-negative. Overall, I think it's probably pretty promising.

OscarD🔸 @ 2025-04-17T17:48 (+5)

Thanks Caleb, very useful. @ConnorA I'm interested in your thoughts on how to balance comms on catastrophic/existential risks and things like deepfakes. (I don't know about the particular past efforts Caleb mentioned, and I think I am more open to comms on deepfakes being useful for developing a broader coalition, even though deepfakes are a tiny fraction of what I care about wrt AI.)

calebp @ 2025-04-17T18:46 (+6)

To be clear, I'm open to building broad coalitions and think that a good documentary could/would feature content on low-stakes risks; but, I believe people should be transparent about their motivations and avoid conflating non-GCR stuff with GCR stuff.

ConnorA @ 2025-04-17T23:30 (+5)

Thanks Caleb and Oscar! 

Will write up my full thoughts this weekend. But regarding your worry that our doc will end up conflating deepfakes and GCRs: we don't plan to do this and we are very clear they are different. 

Our model of the non-technical public is that they feel they are at higher risk of job loss than of the world ending. So our film intends to explain clearly the potential risks to their jobs, and also to show how the same AI that might automate their jobs could, for example, be used to create bioweapons for terrorists who may seek to deploy them on the world. We do not (and will not) conflate the two, but both will be included in the film.

To Oscar: thanks for the comment! Do get in touch if you'd like to help out/thinking of donating.

To Caleb: we really appreciate your comments here, and think they're fair. But although we worked on comms with our former employers, we have different views and ways of communicating from theirs. (I still think Control AI and Conjecture did and do good comms work on the whole, though.) I think if we grabbed a coffee or a Zoom call we'd probably see we're closer than you think.

Have a good day!

SummaryBot @ 2025-04-15T15:57 (+3)

Executive summary: This post introduces Making God, a planned feature-length documentary aimed at a non-technical audience to raise awareness of the risks associated with the race toward AGI; the filmmakers seek funding to complete high-quality production and hope to catalyze public engagement and political action through wide distribution on streaming platforms.

Key points:

  1. Making God is envisioned as a cinematic, accessible documentary in the style of The Social Dilemma or Seaspiracy, aiming to educate a broad audience about recent AI advancements and the existential risks posed by AGI.
  2. The project seeks to fill a gap in public discourse by creating a high-production-value film that doesn’t assume prior technical knowledge, targeting streaming platforms and major film festivals to reach tens of millions of viewers.
  3. The filmmakers argue that leading AI companies are prioritizing capabilities over safety, international governance is weakening, and technical alignment may not be achieved in time—thus increasing the urgency of public awareness and involvement.
  4. The team has already filmed five interviews with legal experts, civil society leaders, forecasters, and union representatives to serve as a “Proof of Concept,” and they are seeking further funding (~$293,000) to expand production and ensure festival/streaming viability.
  5. The documentary’s theory of impact is that by informing and emotionally engaging a mass audience, it could generate public pressure and policy support for responsible AI development during a critical window in the coming years.
  6. The core team—Director Mike Narouei and Executive Producer Connor Axiotes—bring strong credentials from viral media production, AI safety advocacy, and political communications, and are currently fundraising via Manifund (with matching donations active as of April 14, 2025).


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.