Aspiring AI Safety Researchers: Consider “Atypical Jobs” in the Field Instead.

By Harrison 🔸 @ 2025-10-06T06:23 (+45)

_____

TLDR: Non-research roles in AI safety don't seem to get enough attention, despite the many professional benefits they offer (e.g. building neglected-yet-important skills like people management; potentially higher impact). People who might otherwise pursue AI safety research careers could consider more atypical roles (and many of these can still feel "research-adjacent"!)
_____

In 2024, I decided to move across the ocean and start a new job as co-director of the Cambridge AI Safety Hub (CAISH). Despite being initially very uncertain about the role (I almost turned it down), I think this was the best career move I've made in my life, and at least 10x as good as what I would have done otherwise. Just one year later, it has bootstrapped my career into managing the hiring, team, programs, and events of a multi-million-dollar AI safety research and governance organisation; it has also led me to receive many other job opportunities in the space.

Before I decided to move to CAISH, my background was ~only in doing direct technical research on AI (safety), machine learning, and (a little bit of) experimental physics. My mainline career plan to "make advanced AI go well" was pursuing technical AI safety research: I found research interesting, thought I was reasonably good at it, and thought it was fairly important to do... what more should I need? In retrospect, I think that pursuing only technical research would have been far worse for me professionally, and far worse for the world (from the point of view of reducing the total likelihood of an AI catastrophe during our lifetime). This is because trying a non-research role allowed me to test out and grow many skills that (I believe) are far less common in the AI safety community than "good research skills".

Most of the blanket advice people get about AI safety careers is to contribute to research (governance or technical). This is reasonable advice, and I'm confident that more research is quite valuable to the field! However, it seems to me that far too few people (on the margin, as of October 2025) are considering careers unrelated, or only adjacent, to research. If you are an aspiring AI safety researcher whose main goal is to reduce catastrophic AI risks, I think it could be wise (both personally and for impact-oriented reasons) to explore non-research alternatives.

What are some examples of atypical AI safety careers that might fall into this category? In no particular order:

Now, some (non-exhaustive) reasons you should try an atypical AI safety path/career:

There's more I could say about this, and much more I could add about my own experience learning from my roles at CAISH and ERA. However, in the spirit of posting quickly and regularly, I've chosen not to expand this post further for now.

One final caveat: there are also reasons why you might want to double down on research, and not switch to a different role in the field. Some of these might be:


ceselder @ 2025-10-06T12:24 (+4)

I wonder if the disproportionate number of people pursuing research can be partly explained by technical skills being more prevalent among EA-types than people skills.

Nadia Montazeri @ 2025-10-06T17:47 (+2)

I imagine the research-adjacent roles are just as competitive, if not more so (lots of people want to contribute to this field but rule out research because they don't come from a technical background). Got any numbers on how competitive those roles are?