AI Safety Overview: CERI Summer Research Fellowship
By Jamie B @ 2022-03-24T15:12
Introduction to the CERI Summer Research Fellowship
The Cambridge Existential Risks Initiative (CERI, pronounced /ˈkɛri/) has opened applications for an in-person, paid, 10-week Summer Research Fellowship (SRF) focused on existential risk mitigation, taking place from July to September 2022 in Cambridge, and aimed at all aspiring researchers, including undergraduates. This post focuses specifically on the track for AI-related projects.
To apply and find out more, please visit the CERI Fellowship website, where you will also find more information on the AI Safety programme.
If you’re interested in mentoring research projects for this programme, please submit your name, email and research area here, and we will get in touch with you in due course.
The deadline to apply is 23:59 UTC on 3 April 2022.
For more information, see the Summer Research Fellowship announcement forum post.
AI Safety Overview
The development of general, superintelligent machines would likely be a significant event in human history. It stands to overhaul the world’s economic systems and other existing power dynamics, up to and including possibly relegating humanity to the position of “second species”: no longer the primary decision-makers on the shape of the future.
Maintaining control of a future in which entities more intelligent than humans exist could require a large effort ahead of time, and is a pressing issue for us to map out today. We identify two broad efforts in the AI safety typology:
- AI alignment: goal misalignment may emerge from miscommunication of our intentions, or from the systems’ development of instrumental goals. We might also be interested in methods for control or intervention if misalignment emerges.
- Governance and policy: the economic and political conditions under which these machines are developed will likely affect our ability, and our appetite, to align AI. Once advanced AI is developed, we need to avoid malevolent or misguided use of it, and to consider its interactions with other dangerous technologies.
AI Safety CERI Summer Research Fellowship
To tackle these problems, we need researchers who think deeply about the potential threats that could result from the existence of advanced AI. We think there are not enough researchers who ask themselves what could go wrong with advanced AI, envision solutions, and break them down into tractable research questions and agendas.
The CERI SRF builds a community among the next generation of researchers while advancing their careers in existential risk mitigation. We expect around 10 fellows working on AI safety in our Cambridge office space and attending our other fellowship events, with whom you can share ideas and rubber-duck coding errors.
We also source mentorship from established researchers to guide fellows’ exploration of AI alignment or governance, help fellows contribute to the field and develop their portfolio of work, and spark further opportunities and collaborations.
People interested in researching alignment and governance should consider this programme as a way of testing personal fit for doing research mitigating existential risk from AI. Apply now to gain the network, space and funding to start a career in AI existential risk mitigation research!