Aspiring AI Safety Researchers: Consider “Atypical Jobs” in the Field Instead.
By Harrison 🔸 @ 2025-10-06T06:23 (+45)
Housekeeping notes:
- Crossposted on my Substack
- This was hastily written; there might be some bad phrasing here and there – Forgive me!
- This advice may not apply to everyone. It was largely inspired by my personal experience moving away from research and into “other” roles in AI safety, starting around September 2024.
- I’ve previously written a little bit about why I’ve enjoyed fieldbuilding and why I’ve found it valuable at the bottom of this other post as well.
_____
TLDR: Non-research roles in AI safety don't appear to get enough attention, despite the many professional benefits they offer (e.g. building neglected-yet-important skills like people management; higher impact). People who might otherwise be interested in AI safety research careers could consider more atypical roles (and many of these can still feel "research adjacent"!).
_____
In 2024, I made a decision to move across the ocean and start a new job as the co-director of the Cambridge AI Safety Hub (CAISH). Despite being initially very uncertain about the role (I almost turned it down), I think this was the best career move I’ve made in my life. This was at least 10x as good as what I would have done otherwise, and has since bootstrapped my career to (just one year later) managing the hiring, team, programs, and events of a multi-million dollar AI safety research and governance organisation; it’s also led me to receive many other job opportunities in the space.
Before I decided to move to CAISH, my background was ~only in doing direct technical research on AI (safety), machine learning, and (a little bit of) experimental physics. My mainline career plan to “make advanced AI go well” was pursuing technical AI safety research; I found research interesting, thought I was reasonably good at it, and thought it was fairly important to do... what more should I need? In retrospect, I think that pursuing only technical research would’ve been far worse for me (professionally), and far worse for the world (from the point of view of reducing “total likelihood of AI catastrophe during our lifetime”). This is because trying a non-research role allowed me to test out and grow in many different skills that (I believe) are far less common in the AI safety community than “good research skills”.
Most of the blanket advice people get about AI safety careers is to contribute to research (governance or technical). This is reasonable advice, and I’m confident that more research is quite valuable to the field! However, it seems to me that far too few people (on the margin, at the time of writing in October 2025) are thinking about careers unrelated or only adjacent to research. If you are an aspiring AI safety researcher whose main goal is to reduce catastrophic AI risks, I think it could be wise (personally, and for impact-oriented reasons) to explore non-research alternatives.
What are some examples of atypical AI safety careers that might fall into this category? In no particular order:
- Fieldbuilding (e.g. running events, fellowships, workshops, etc for the field)
- Research Management
- Founding or leading an organization
- Program Management
- Grantmaking
- Political Advocacy
- Communications related to AI safety
- Other roles where the best job description feels like “generalist” or “wearer of many hats”
Now, some (non-exhaustive) reasons you should try an atypical AI safety path/career:
- There appears to be a huge supply-demand imbalance between aspiring researchers and research roles.
- Fellowships like GovAI, IAPS, ERA, and MATS get thousands of applicants per cycle, while each program has <30 places available. They are often more competitive than admissions to Ivy League universities.
- There is a large demand for non-research or “research-adjacent” roles at many organisations in the space.
- These organizations are often trying to source talent for positions like research managers and program managers, which far fewer people apply to! (This is especially shocking to me, since I believe that meta-level roles like program and research management are, in many cases, higher leverage and thus higher-impact than research roles, with exceptions.)
- Research skills (e.g. understanding of the research process; deep technical knowledge) can be extremely valuable in these atypical positions, yet are much harder to come by among the people who apply for them.
- Trying out alternative options to research doesn’t “lock you in”; it just lets you know if there are other (non-research) things you might be better suited for or enjoy more.
- When I started my role at CAISH, my default plan was to go back to pursuing research (e.g. starting a PhD) after 1 year in the role. I then realized that there are aspects of running organisations, projects, managing teams, and other pieces of the role which I actively enjoyed more than research on a day-to-day basis. I would not have found out how much I enjoyed (or how much I excelled at) “organization directing” if I had not taken a break from pursuing research to take this opportunity.
- Pursuing non-research AIS careers increases variance in the portfolio of people working on making advanced AI go well.
- I sometimes imagine the collection of people dedicating their career to AI safety as a big “AI safety careers index fund” where everyone is trying a bunch of different roles (different “stocks”), some of which are riskier and stranger (e.g. org building) and some of which are more robust (e.g. research, fieldbuilding). By working on a non-research career path, you are diversifying the collection of AI safety careers in the world, thus making the notional “AI safety careers index fund” have a higher expected ROI.
- You will learn valuable skills you otherwise might not.
- Given hyperscaling companies’ focus on developing the technical capabilities of their models and racing towards automating R&D, there’s reason to believe that many pieces of AI safety research and research engineering are more likely to be automated than things like people management and organisational strategy.
- There are so many skills that are valuable in the working world which are far from “conducting good research”; taking time to gain some of these skills (such as people management, project management, team leadership, budget management, grant writing, etc) is a great way to strengthen your CV.
- For many of the roles listed above, you get to think at a more abstract level about the field’s needs as a whole, rather than developing deep niche expertise in an area that may or may not be useful in the long run.
- AI development is fast and confusing. There are a lot of moving parts, and new developments every week that might change your priorities. Understanding and thinking clearly about “what matters most to work on” is not a luxury that many roles allow for. However, meta-level and atypical roles like research management, grantmaking, and fieldbuilding naturally make “understanding the field at a high level” part of the role.
- This was especially refreshing and helpful for me when I moved from object-level research on a niche problem in adversarial robustness of language models to thinking about how to run good research programs on a field-wide scale.
- Serendipity and location: AI safety roles are often in great locations where many top people in the field concentrate (e.g. London, Berkeley, DC, elite universities). Working in these spaces and cities with relevant people can provide a huge boost to your network and connections in the field, and ultimately open up new collaboration and job opportunities.
- This was especially the case with me joining CAISH, since I had never been based in or near “AI safety hotspots” in my past studies or work.
There’s more I could say about this, and much more I could add about my own personal experience learning from my roles at CAISH and ERA. However, in the spirit of posting quickly and regularly, I’ve chosen not to edit or add to this post further for now.
One final caveat: there are also reasons why you might want to double down on research, and not switch to a different role in the field. Some of these might be:
- You might have reason to believe that research is an especially good fit for you, e.g. because of your track record publishing stellar, well-received research, or because you’ve been a researcher for the past 10 years and gained a lot of tacit knowledge as a result.
- You might believe that there is a limited window of time where research in AI safety is most useful, and this window is ~all of the time before AI research can be automated. Thus, you might prioritise research over other things (like fieldbuilding) if you think that this window of opportunity is short and closing (i.e. if your AI timelines are very short).
- You may think that exploring other roles wouldn’t provide you with very much information, or wouldn’t boost your professional profile enough to make the switch away from research worth it.
- [Insert many other things I haven’t taken the time to think of]
ceselder @ 2025-10-06T12:24 (+4)
I wonder if the disproportionate number of people pursuing research can be partly explained by technical skills probably being more prevalent among EA-types than people skills.
Nadia Montazeri @ 2025-10-06T17:47 (+2)
I imagine the research-adjacent roles are just as competitive, if not more so (lots of people want to contribute to this field but exclude research because they don't come from a technical background). Got any numbers on how competitive those roles are?