Why experienced professionals fail to land high-impact roles (FBB #5)

By gergo @ 2025-04-10T12:44 (+94)

Crossposted on Substack and LessWrong.

Introduction

There are many reasons why people fail to land a high-impact role. They might lack the skills, not have a polished CV, fail to articulate their thoughts well in applications[1] or interviews, or not manage their time effectively during work tests. This post is not about these issues. It’s about what I see as the least obvious reason why one might get rejected relatively early in the hiring process, despite having the right skill set and ticking most of the other boxes mentioned above. The reason is what I call context, or rather, the lack thereof.


On professionals looking for jobs

It’s widely agreed that we need more experienced professionals in the community, but we are not doing a good job of accommodating them once they make the difficult and admirable decision to try transitioning to AI Safety.

Let me paint a basic picture of what many experienced professionals go through, or at least the dozens I have talked to at EAGx conferences:

  1. They do an AI Safety intro course
  2. They decide to pivot their career
  3. They start applying for highly selective jobs, including ones at Open Philanthropy
  4. They get rejected relatively early in the hiring process, including for roles more junior than their work experience would suggest
  5. They don’t get any feedback
  6. They are confused as to why and start questioning whether they can contribute to AI Safety

If you find yourself consistently making it to later rounds of the hiring process, I think you will land a job sooner or later. The competition is tight, so please be patient! To a lesser extent, this also applies to roles outside of AI Safety, especially those aimed at reducing global catastrophic risks.

But for those struggling to reach later rounds of the hiring process, I want to suggest a potential consideration. Assuming you already have the right skillset for a given role, it might be that you fail to communicate your cultural fit and contextual understanding of AI Safety to hiring managers, or that you still lack those for now.

This isn’t just on you, but on all of us as a community. In this article, I will outline some ways that job seekers can better navigate the job market, but focus less on how the community can avoid altruistic talent getting lost on the journey. That is worth its own forum post!

To make it clear again, this is not the only reason you might be rejected, but one that might be the least obvious to people failing to land roles. Now let's look at what you can do.

What I mean by context

A highly skilled professional who was new to the AI Safety scene at the time told me they applied to the Chief of Staff role at Open Philanthropy. They got rejected in the very first round, but they didn’t understand why. Shortly after, they went to an EAG conference and told me: “Oh, I get it now.”

Here is a list of resources that I would have sent their way before their job search, had I written it up at the time. Context is a fuzzy concept, but I will try my best to give you a sense of what I mean by it. Let’s break it down into different parts.

Understanding the landscape

Previous involvement in the movement

Understanding concepts

Whether you have come across the following concepts:

Many of them are based on different assumptions, and understanding them does not mean that you have to be on board with them. And don’t worry, the jargon is frustrating to other newcomers too. I linked to explanations of all the concepts above, so I hope we can still be friends.

Familiarity with thought leaders and their work

Having read books such as

Familiarity with the following people and how they influenced the movement. I’m probably forgetting some.

Having heard of, and knowing the motivations of, the people regarded as “AI Safety’s top opponents”:

Understanding culture

A caveat to the list above is that many of the items will be more important for AIS roles within the EA/rationalist space. As the AIS community grows, I expect this to change and some of the items on the list above will become less important.

Visualising your journey in AI Safety

Think of the y-axis as the skills needed for a given role. The x-axis refers to the context people have. Different roles require varying levels of skills and context. My claim is that the closer one is to the top right, the better positioned they are to land a job and make a big impact.[4]

Now let’s put you on the map, or rather, the different career profiles of people who want to contribute to AI Safety.

I’m sure I’m not doing justice to some of the orgs, but this is just meant to be an illustration.

Now let’s also map some of the different opportunities that someone might apply to. The squares refer to the target audiences for these opportunities.

If you are an experienced professional who is new to AI Safety, this is why you don’t get far in hiring rounds. You may have the skills, but not enough context - yet.

Understand hiring practices

The current state of the AI Safety job market is nowhere near ideal. My hope is that by shedding some light on how it works, you will get a better sense of how to navigate it.

If your strategy is to just apply to open hiring rounds, such as through job ads that are listed on the 80,000 Hours job boards, you are cutting your chances of landing a role by ~half. It’s hard to know the exact figure, but I wouldn’t be surprised if as many as 30-50% of paid roles in the movement aren’t being recruited through traditional open hiring rounds, but instead:

Why is that?

What you can do

The good news is that it’s possible to level up your context pretty fast. Based on what professionals have told me, the community is also really open and helpful, so you can have a lot of support if you know where and what to ask.

Networking

If the picture I painted above is true, you need to get out there and network, so you can be at the right place at the right time.

Improve your epistemics

You can start with the list of concepts and books I mentioned above. In the future, I plan on writing up a proper guide, similar to this post about skill levels in research engineering.

Signaling value-alignment

Of course, I’m not saying that you should fake being more or less worried about AI than you actually are. While it’s tempting to conform to the views of others, especially if you are hoping to land a role working with them, it’s not worth it: you wouldn’t excel at an organisation where you feel you can’t be 100% honest.

Team up with high-context young people:

Apart from taking part in programs such as those of Successif and HIP (that have limited slots), I would like to see experienced professionals new to AI Safety team up with young professionals who are more embedded in the community but lack the experience to fundraise for ambitious projects by themselves. The closest thing we have to this at the moment is Agile for Good, a program that connects younger EA/AIS people to experienced consultants.

Be patient and persistent:

Landing a job in AI Safety often takes way longer than in the “real world”. Manage your expectations and join smaller (volunteer) projects in the meantime to build context.

Continuously get feedback on your plans from high-context people. A good place for this is EAG(x) conferences, but you can also post in the AI Alignment Slack workspace, where people will be happy to give you feedback.

Which roles does the “context-thesis” apply to?

As I mentioned above, many of the items will be more important for AIS roles within the EA/rationalist space. Even within that space, context is going to be more relevant for some roles than for others.

Roles for which I think context is less important:

I expect it to be more important for roles in:

On seniority

Especially for senior roles that require a lot of context and value alignment, I would expect hiring managers to opt for someone less experienced but with high context and a high level of value alignment, rather than risk having to argue with an experienced professional (who is often going to be older than them) about which AI risks are the most important to mitigate.

Hiring managers will expect that, on average, it is harder to change the mind of someone older (which is probably true, even if it’s not true in your case!).

I also expect context to be less important for junior roles, as orgs have more leverage to guide a younger person in “the right direction”. At the same time, I don’t expect this to be an issue often, as there are a lot of high-context young people in the movement.

Conclusion

You have seen above just how and why the job market is so opaque. This is neither good nor intentional; it’s just how things are for now.

I don’t want to come across as saying that what we need is an army of like-minded soldiers; that’s not the case. All I intend to show is that there is value in being able to “speak the local language”. Think of context as a stepping stone that puts you in a position to then spread your own knowledge in the community. We need fresh ideas and diversity of thought. Thank you for deciding to pivot your career to AI Safety, as we really need you.  

Thank you to Miloš Borenović for providing valuable feedback on this article. Similarly, thanks to Oscar for doing the same, as well as providing support with editing and publishing.

  1. ^

     As an example, BlueDot often rejects otherwise promising applicants simply because they submit a weak application. Many of these people then get into the program on their third attempt. I’m not sure if it’s about them gaining more context, or just putting more effort into the application.

  2. ^

     Which is often not public or written up even internally in the AIS space. Eh. Here is one that’s really good though.

  3. ^

     I’m not sure how widely this is read, but it gives a good summary of the early days of the rationalist and therefore AI Safety movements.

     

  4. ^

     This is not meant to be a judgment about people’s intrinsic worth. It’s also not to say that you will always have more impact. It’s possible to have a huge influence with lower levels of context and skills if you are at the right place at the right time. Having said that, the aim of building the field of AI Safety, as well as your career journey, is to get further and further towards the top right, as this is what will help you to have more expected impact.

  5. ^

     A friend told me that an established org she was applying to flew out the top two candidates to the org’s office so they could co-work and meet the rest of the team for a week. Aside from further evaluating their skills, this also served as an opportunity to see how they would get along with other staff and fit the organisational culture.

  6. ^

     Someone wrote a great post about this, but I couldn’t find it. Please share if you do!

  7. ^

     There is a good post criticising the importance of value alignment in the broader movement, but I think most of the arguments apply less to value alignment within organisations.


Conor Barnes 🔶 @ 2025-04-11T09:21 (+15)

If your strategy is to just apply to open hiring rounds, such as through job ads that are listed on the 80,000 Hours job boards, you are cutting your chances of landing a role by ~half. It’s hard to know the exact figure, but I wouldn’t be surprised if as many as 30-50% of paid roles in the movement aren’t being recruited through traditional open hiring rounds ...


This is my impression as well, though heavily skewed by experience level. I'd estimate that more than 80% of senior "hires" in the movement occur without a public posting, and something like 20% of junior hires. 

As an aside, and as ever, I'd encourage people not to get attached to finding a role "in the movement" as a marker of impact. 

LeahC @ 2025-04-10T21:14 (+11)

Experienced professionals can contribute to high-impact work without fully embedding themselves in the EA community. For example, one of my favorite things is connecting experienced lobbyists (20-40+ years in the field) with high-impact organizations working on policy initiatives. They bring needed experience and connections, plus they often feel like they are doing something positive.

Anyone who has worked both inside and outside of the EA community will admit that EA organizations are weird. That is not necessarily a bad thing, but it can mean that people very established in their careers could find the transition uncomfortable. 

For EAs reading this, I highly recommend seeking out professionals in their fields of expertise for short-term or project-specific work. If they fit and you want to keep them, that’s great. If not, you get excellent service on a tough problem that may not be solved within the EA community. They get a fun story about an interesting client, and can move on with no hard feelings.

Geoffrey Miller @ 2025-04-11T00:40 (+9)

Good post. Thank you.

But, I fear that you're overlooking a couple of crucial issues:

First, ageism. Lots of young people are simply biased against older people -- assuming that we're closed-minded, incapable of learning, ornery, hard to collaborate with, etc. I've encountered this often in EA. 

Second, political bias. In my experience, 'signaling value-alignment' in EA organizations and AI safety groups isn't just a matter of showing familiarity with EA and AI concepts, people, strategies, etc. It's also a matter of signaling left-leaning political values, atheism, globalism, etc -- values which have no intrinsic or logical connection to EA or AI safety, but which are simply the water in which younger Millennials and Gen Z swim. 

Patrick Gruban 🔸 @ 2025-04-11T07:49 (+9)

First, ageism. Lots of young people are simply biased against older people -- assuming that we're closed-minded, incapable of learning, ornery, hard to collaborate with, etc. I've encountered this often in EA. 

I'm not sure what age group you're referring to, but as someone who just turned 50, I can't relate. I did have to upskill not only on subject matter expertise (as mentioned in the post) but also on ways that people of the age group and the community are communicating, but this didn't seem much different than switching fields. The field emphasizes open-minded truth-seeking, and my experience has shown that people are receptive to my ideas if I am open to theirs.

Second, political bias. In my experience, 'signaling value-alignment' in EA organizations and AI safety groups isn't just a matter of showing familiarity with EA and AI concepts, people, strategies, etc. It's also a matter of signaling left-leaning political values, atheism, globalism, etc -- values which have no intrinsic or logical connection to EA or AI safety, but which are simply the water in which younger Millennials and Gen Z swim.

The EA community as a whole is indeed more left-leaning, but I feel that this is less the case in AI safety nonprofits than in other nonprofit fields. It took me some time to realize that my discomfort about being the only person with different views in the room didn't mean that I was unwelcome. At least I was with people who were more engaged in EA or who were working in this field.

At the same time, organizations that are not aware of their own biases sometimes end up hiring people who are very similar to their founders or are unable to integrate more experienced professionals. This is something to be aware of.

Peter Drotos 🔸 @ 2025-04-18T05:26 (+8)

I think the usual path at the start is depicted accurately. Companies try to avoid investing in many people, so labour with a given skill/experience is often a scarce resource. In my industry, experienced people are approached with a new opportunity (many from well-known firms in the field) each week by headhunters without even asking for it. So when you get the message that work is needed in AI, the natural reaction is “just tell me where I should apply”, and the answer usually is the 80k job board or similar. There is a gap there.


So I really like the Visualising your journey figures, I think these help a lot to set appropriate expectations. (I personally spent 5-15h/w on my transition in the last two years and still waiting for the first offer which meets the bar I’ve set for myself.)


So far, I mostly felt the lack of context limiting in the early days, when I was actually trying to gain more context. The reason, I think, was similar (opportunities like 80k advising and EAGx also expected significant context). This makes sense, but I think there’s room for improvement by being more transparent and saying things like “we expect this opportunity to be most useful for (and hence prioritizing) people with basic knowledge about EA, e.g. after doing the intro course”. Note that I think my background (hardware) puts me in the niche bucket, so context not coming up as a limiting factor in job applications aligns with the text.
 

gergo @ 2025-04-18T09:58 (+2)

I mostly felt the lack of context limiting in the early days when I was actually trying to gain more context.

Strongly agree; once you "have your foot in the door" it's much easier to get additional context, as you know where to look for it. 

Thanks for sharing your experience!

David_Kristoffersson @ 2025-04-10T23:22 (+8)

Speaking as a hiring manager at a small group in AI safety/governance who made an effort to not just hire insiders (it's possible I'm in a minority -- don't take my take for gospel if you're looking for a job), it's not important to me that people know a lot about in-group language, people, or events around AI safety. It is very important to me that people agree with foundational ideas such as to actually be impact-focused and to take short-ish AI timelines and AI risk seriously and have thought about it seriously.

gergo @ 2025-04-14T11:16 (+2)

To follow up on this:

it's not important to me that people know a lot about in-group language, people, or events around AI safety

I can see that people and events are less important, but as far as concepts go, I presume it would be important for them to know at least some of the terms, such as x/s risk, moral patienthood, recursive self-improvement, take-off speed, etc.

As far as I know, really none of these are widely known outside of the AIS community, or do you mean something else by in-group language?

David_Kristoffersson @ 2025-04-15T12:28 (+2)

X-risk: yes. The idea of fast AI development: yes. Knowing the phrase "takeoff speed"? No. For sure, this also depends a bit on the type of role and seniority. "Moral patienthood" strikes me as one of those terms where if someone is interested in one of our jobs, they will likely get the idea, but they might not know the term "moral patienthood". So let's note here that I wrote "language", and you wrote "concepts", and these are not the same. One of the distinctions I care about is that people understand, or can easily come to understand the ideas/concepts. I care less what specific words they use.

Digressing slightly, note that using specific language is a marker for group belonging, and people seem to find pleasure in using in-group language as this signals group belonging, even if there exists standard terms for the concepts. Oxytocin creates internal group belonging and at the same time exclusion towards outsiders. Language can do some of the same.

So yes, it's important to me that people understand certain core concepts. But again, don't overindex on me. I should've maybe clarified the following better in my first comment: I've personally thought that EA/AI safety groups have done a bit too much in-group hiring, so I set out how to figure out how to hire people more widely, and retain the same mission focus regardless.

gergo @ 2025-04-15T12:57 (+3)

Thanks for expanding! I appreciate the distinction between "language" and "concepts" as well as your thoughts on using language for in-group signaling and too much in-group hiring.

gergo @ 2025-04-11T08:55 (+2)

Thanks for sharing this, David!

Patrick Gruban 🔸 @ 2025-04-11T07:28 (+7)

Apart from taking part in programs such as those of Successif and HIP (that have limited slots), I would like to see experienced professionals new to AI Safety team up with young professionals who are more embedded in the community but lack the experience to fundraise for ambitious projects by themselves.

Talking for Successif, we have ramped up our capacity in recent months and are currently admitting a high rate of applicants to our program. I am biased here, but I think our advisors can help individuals think more specifically about how much time to spend on learning which concepts, whether to volunteer or work on projects, and when to double down on applying. We're only focused on helping mid-career and senior professionals get into AI risk, and our advisors usually have multiple calls and email exchanges with advisees over several months, continually discussing the best next steps.

I broadly agree with the post, but I know from my own experience that it can be hard to decide when to prioritize upskilling, networking, projects, or applications. Some people in our program struggle with imposter syndrome, which can lead to spending too much time learning concepts when this is not their current bottleneck.

SofiaBalderson @ 2025-04-12T08:01 (+5)

This post is gold! I don’t work in AI safety - I’m in animal advocacy community building, but all the tips apply to our cause area as well - I will share with our community! Thank you for sharing and taking the time to write! 

tomrowlands @ 2025-04-15T01:45 (+3)

The good news is that it’s possible to level up your context pretty fast. Based on what professionals have told me, the community is also really open and helpful, so you can have a lot of support if you know where and what to ask.
 

A slight qualifier here is that getting to the level of context required for some jobs - especially senior ones that experienced professionals might be applying to - can take (sometimes much) longer, so it's important to have realistic expectations there. For instance, if you want to work in AI safety, and have a background (e.g. quantitative finance, venture capital) that could give you great skills to be a grantmaker, you'll likely still need to know more than just the high-level concepts and the landscape of organisations working on it; you might need to know the strengths and weaknesses of different theories of change, and have a sense of the wider funding landscape. 

That said, I want to commend this as a really helpful article, Gergő! The suggestions above would still be helpful in the scenario I outline. And FWIW, I'd love to see more experienced professionals in EA, and in AIS in particular.

Caveat: speaking personally here, rather than for my employer Open Phil.

Chris Leong @ 2025-04-10T13:30 (+3)

Great post. I suspect your list of who and what is useful to know about is a bit too large. To give one specific example, I wouldn't suggest that a jobseeker take the time to look up who Guillaume Verdon is. That's not really going to help you.

gergo @ 2025-04-11T08:56 (+2)

Yeah I agree about this case, I will actually take it out!

Eva Lu @ 2025-04-10T19:38 (+2)

Think of the x-axis as the skills needed for a given role. The y-axis refers to the context people have

Is this a typo? On the graphs it looks like the x axis is context and the y axis is skills.

gergo @ 2025-04-11T08:57 (+2)

Ah, right. x) Thanks so much for pointing this out!

Denis @ 2025-04-16T22:14 (+1)

Great post!

As a senior professional who went through the hiring process for EA groups, but also as a senior professional who has hired people (and hires people) both for traditional (profit-driven) organisations and for impact/mission-driven organisations, my only comment would be that this is great advice for any role. 

As hiring managers, we love people who are passionate and curious, and it just feels weird for someone to claim to be passionate about something but not have read up about it or followed what's happening in their field. 

In terms of the job-search within EA, the only detail I would add is that there are a huge number of really nice, friendly, supportive people who give great feedback if you ask. One of my first interviewers did a 1-hour interview, after which he (rightly) did not continue the process. He explained very clearly why and what skills I was missing. He also set up an additional call where he talked through how my skill-set might be most valuable within an impactful role, and some ideas. He gave me lots of connections to people he knew. And so on. And he offered to help if I needed help. 

Within EA, this is the norm. People really respect that someone more senior wants to help make the world a bit better, they want to help. 

 

gergo @ 2025-04-18T13:27 (+3)

Thanks for sharing!

One of my first interviewers did a 1-hour interview, after which he (rightly) did not continue the process. He explained very clearly why and what skills I was missing. He also set up an additional call where he talked through how my skill-set might be most valuable within an impactful role, and some ideas. He gave me lots of connections to people he knew. And so on. And he offered to help if I needed help. 

This is great practice; however, I believe it happens only in a minority of cases. Typically, people who are filtered out early receive an email stating they can't receive individual feedback. Given this, I recommend that if you make it far enough to be invited to an interview, you ask for feedback at the end of the meeting, before it concludes. It's better than sending an email after a decision has been made. 

Alternatively, if you encounter the hiring manager at a conference, consider reaching out. However, if the interview was some time ago, don't expect them to remember, as they’ve likely conducted hundreds since then.