Ryan Kidd
- Co-Director at the ML Alignment & Theory Scholars (MATS) Program (2022-present)
- Co-Founder & Board Member at the London Initiative for Safe AI (2023-present)
- Manifund Regrantor (2023-present)
- Advisor, Catalyze Impact (2023-present)
- Advisor, AI Safety ANZ (2024-present)
- Ph.D. in Physics from the University of Queensland (2017-2023)
- Group organizer at Effective Altruism UQ (2018-2021)
Give me feedback! :)
Posts
Talent Needs of Technical AI Safety Teams
by Ryan Kidd, yams, Carson Jones, McKenna_Fitzgerald @ 2024-05-24 | +51 | 0 comments
SERI ML Alignment Theory Scholars Program 2022
by Ryan Kidd, Victor Warlop, Oliver Z @ 2022-04-27 | +57 | 0 comments
How will the world respond to "AI x-risk warning shots" according to reference...
by Ryan Kidd @ 2022-04-18 | +18 | 0 comments