tlevin
(Posting in a personal capacity unless stated otherwise.) I help allocate Open Phil's resources to improve the governance of AI, with a focus on avoiding catastrophic outcomes. Formerly: co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond; co-president of Harvard EA; Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment; and occasional AI governance researcher. I'm also a proud GWWC pledger and vegan.
Posts
A case for donating to AI risk reduction (including if you work in AI)
by tlevin @ 2024-12-02 | +118 | 0 comments
How the AI safety technical landscape has changed in the last year, according to...
by tlevin @ 2024-07-26 | +83 | 0 comments
Notes on nukes, IR, and AI from "Arsenals of Folly" (and other books)
by tlevin @ 2023-09-04 | +21 | 0 comments
Common-sense cases where "hypothetical future people" matter
by tlevin @ 2022-08-12 | +107 | 0 comments
What work has been done on the post-AGI distribution of wealth?
by tlevin @ 2022-07-06 | +16 | 0 comments
(Even) More Early-Career EAs Should Try AI Safety Technical Research
by tlevin @ 2022-06-30 | +86 | 0 comments