Evan R. Murphy

I'm doing research and other work focused on AI safety/security, governance, and risk reduction. My current top projects are (last updated Feb 26, 2025):

My general areas of interest include AI safety strategy, comparative AI alignment research, prioritizing technical alignment work, analyzing the published alignment plans of major AI labs, interpretability, deconfusion research, and other AI safety topics.

Research that I’ve authored or co-authored:

Before getting into AI safety, I was a software engineer for 11 years at Google and at various startups. You can find details about my previous work on my LinkedIn.

While I'm not always great at responding, I'm happy to connect with other researchers or people interested in AI alignment and effective altruism. Feel free to send me a private message!

Posts

AI Risk: Can We Thread the Needle? [Recorded Talk from EA Summit Vancouver '25]
by Evan R. Murphy @ 2025-10-02 | +6 | 0 comments
Proposal: Funding Diversification for Top Cause Areas
by Evan R. Murphy @ 2022-11-20 | +29 | 0 comments
New US Senate Bill on X-Risk Mitigation [Linkpost]
by Evan R. Murphy @ 2022-07-04 | +22 | 0 comments
New series of posts answering one of Holden's "Important, actionable research...
by Evan R. Murphy @ 2022-05-12 | +9 | 0 comments
Action: Help expand funding for AI Safety by coordinating on NSF response
by Evan R. Murphy @ 2022-01-20 | +20 | 0 comments
People in bunkers, "sardines" and why biorisks may be overrated as a global...
by Evan R. Murphy @ 2021-10-23 | +22 | 0 comments
Evan R. Murphy's Quick takes
by Evan R. Murphy @ 2021-10-22 | +1 | 0 comments