Rohin Shah
Hi, I'm Rohin Shah! I work as a Research Scientist on the technical AGI safety team at DeepMind. I completed my PhD at the Center for Human-Compatible AI at UC Berkeley, where I worked on building AI systems that can learn to assist a human user even when the system does not initially know what the user wants.
I'm particularly interested in big picture questions about artificial intelligence. What techniques will we use to build human-level AI systems? How will their deployment affect the world? What can we do to make this deployment go better? I write up summaries and thoughts about recent work tackling these questions in the Alignment Newsletter.
In the past, I ran the EA groups at UC Berkeley and the University of Washington.
Posts
DeepMind is hiring for the Scalable Alignment and Alignment Teams
by Rohin Shah, Geoffrey Irving @ 2022-05-13 | +102 | 0 comments
[AN #80]: Why AI risk might be solved without additional intervention from...
by Rohin Shah @ 2020-01-03 | +58 | 0 comments
Summary of Stuart Russell's new book, "Human Compatible"
by Rohin Shah @ 2019-10-19 | +33 | 0 comments