Rohin Shah

Hi, I'm Rohin Shah! I work as a Research Scientist on the technical AGI safety team at DeepMind. I completed my PhD at the Center for Human-Compatible AI at UC Berkeley, where I worked on building AI systems that can learn to assist a human user even when the system doesn't initially know what the user wants.

I'm particularly interested in big-picture questions about artificial intelligence. What techniques will we use to build human-level AI systems? How will their deployment affect the world? What can we do to make this deployment go better? I write up summaries and thoughts about recent work tackling these questions in the Alignment Newsletter.

In the past, I ran the EA groups at UC Berkeley and the University of Washington.

http://rohinshah.com

Posts

Person-affecting intuitions can often be money pumped
by Rohin Shah @ 2022-07-07 | +94 | 0 comments
DeepMind is hiring for the Scalable Alignment and Alignment Teams
by Rohin Shah, Geoffrey Irving @ 2022-05-13 | +102 | 0 comments
Rohin Shah's Quick takes
by Rohin Shah @ 2021-08-25 | +6 | 0 comments
[AN #80]: Why AI risk might be solved without additional intervention from...
by Rohin Shah @ 2020-01-03 | +58 | 0 comments
Summary of Stuart Russell's new book, "Human Compatible"
by Rohin Shah @ 2019-10-19 | +33 | 0 comments
Alignment Newsletter One Year Retrospective
by Rohin Shah @ 2019-04-10 | +62 | 0 comments
Thoughts on the "Meta Trap"
by Rohin Shah @ 2016-12-20 | +10 | 0 comments
EA Berkeley Spring 2016 Retrospective
by Rohin Shah @ 2016-09-11 | +6 | 0 comments
EAGxBerkeley 2016 Retrospective
by Rohin Shah @ 2016-09-11 | +18 | 0 comments