elteerkers's Quick takes
By elteerkers @ 2025-04-02T15:25 (+4)
We just launched a free, self-paced course: Worldbuilding Hopeful Futures with AI, created by Foresight Institute as part of our Existential Hope program.
The course is probably not breaking new conceptual ground for folks here who are already “red-pilled” on AI risks — but it might still be of interest for a few reasons:
- It's designed to broaden the base of people engaging with long-term AI trajectories, including governance, institutional design, and alignment concerns.
- It uses worldbuilding as an accessible gateway for newcomers — especially those who aren't in technical fields but still want to understand and shape AI's future.
We're also inviting contributions from more experienced thinkers, to help seed more diverse, plausible, and strategically relevant futures that can guide better public conversations.
Guest lectures include:
- Helen Toner (CSET, former OpenAI board) on frontier lab dynamics
- Anton Korinek (Brookings) on the economic impact of AI
- Anthony Aguirre (FLI) on existential risk
- Hannah Ritchie (Our World in Data) on grounded progress
- Glen Weyl (RadicalxChange) on plural governance
- Ada Palmer (historian & sci-fi author) on long-range thinking
If you’re involved in outreach, education, or mentoring, this might be a good resource to share. And if you're curious about how we’re trying to translate these issues to a wider audience — or want to help build out more compelling positive-world scenarios — we’d love your input.
👉 https://www.udemy.com/course/worldbuilding-hopeful-futures-with-ai/
We'd love feedback or questions — and we're happy to incorporate critiques into the next iteration.