Aspiring AI Safety Researchers: Consider “Atypical Jobs” in the Field Instead.

By Harrison 🔸 @ 2025-10-06T06:23 (+73)

_____

TLDR: Non-research roles in AI safety don't appear to get enough attention, despite the many professional benefits they offer (e.g. building neglected-yet-important skills like people management, and potentially higher impact). People who might otherwise be interested in AI safety research careers could consider more atypical roles (and many of these can still feel "research-adjacent"!).
_____

In 2024, I made the decision to move across the ocean and start a new job as the co-director of the Cambridge AI Safety Hub (CAISH). Despite being initially very uncertain about the role (I almost turned it down), I think this was the best career move of my life. It was at least 10x as good as what I would have done otherwise, and it has since bootstrapped my career to (just one year later) managing the hiring, team, programs, and events of a multi-million-dollar AI safety research and governance organisation; it's also led me to receive many other job opportunities in the space.

Before I decided to move to CAISH, my background was ~only in direct technical research on AI (safety), machine learning, and (a little bit of) experimental physics. My mainline career plan to “make advanced AI go well” was pursuing technical AI safety research; I found research interesting, thought I was reasonably good at it, and thought it was fairly important to do... what more should I need? In retrospect, I think that pursuing only technical research would have been far worse for me (professionally), and far worse for the world (from the point of view of reducing the “total likelihood of AI catastrophe during our lifetime”). This is because trying a non-research role let me test out and grow many different skills that (I believe) are far less common in the AI safety community than “good research skills”.

Most of the blanket advice people get about AI safety careers is to contribute to research (governance or technical). This is reasonable advice, and I’m confident that more research is quite valuable to the field! However, it seems to me that far too few people (on the margin, at the time of writing in October 2025) are thinking about careers unrelated or only adjacent to research. If you are an aspiring AI safety researcher whose main goal is to reduce catastrophic AI risks, I think it could be wise (personally, and for impact-oriented reasons) to explore non-research alternatives.

What are some examples of atypical AI safety careers that might fall into this category? In no particular order:

Now, some (non-exhaustive) reasons you should try an atypical AI safety path/career:

There’s more I could say about this, and much more I could add about my own personal experience learning from my roles at CAISH and ERA. However, in the spirit of posting quickly and regularly, I’ve chosen not to polish or expand this post further for now.

One final caveat: there are also reasons why you might want to double down on research, and not switch to a different role in the field. Some of these might be:


Nadia Montazeri @ 2025-10-06T17:47 (+6)

I imagine the research-adjacent roles are just as competitive, if not more so (lots of people want to contribute to this field but exclude research because they don't come from a technical background). Got any numbers on how competitive those roles are? 

Ahmed Amer @ 2025-10-20T11:29 (+1)

Seconding the ask for numbers - I can see a case for why non- or semi-technical roles might actually be just as competitive: a much wider pool of people want to get into AI who are not necessarily at the elite level of ML engineering and research skills.

ceselder @ 2025-10-06T12:24 (+5)

I wonder if the disproportionate number of people pursuing research can be partly explained by technical skills probably being more prevalent among EA-types than people skills.

Reed Wilson @ 2025-10-16T15:13 (+1)

Great read. Currently seeing if there is any way that I could meaningfully contribute to AI Safety given that I do not come from that world at all. This post nudged me a bit towards thinking I might be able to.

It seems like we just need as many people in this field as possible, as quickly as possible, given even the most conservative risk assessments.

It's also interesting that some of these non-technical roles could be even more impactful in some cases.