Some talent needs in AI governance

By Sam Clarke @ 2023-06-13T13:53 (+132)

I carried out a short project to better understand talent needs in AI governance. This post reports on my findings.

How this post could be helpful:

Key takeaways

I talked with a small number of people hiring in AI governance—in research organisations, policy think tanks and AI labs—about the kinds of people they’re looking for. Those hiring needs can be summarised as follows:

Method

Findings

Talent needs

I report on the kinds of work that people I interviewed are looking to hire for, and outline some useful skills for doing this work.

Note: when I say things like “organisation X is interested in hiring people to do such-and-such,” this doesn’t imply that they are definitely soon going to be hiring for exactly these roles. It should instead be read as a claim about the broad kind of talent they are likely to be looking for when they next open a hiring round.

AI governance research organisations

GovAI is particularly interested in hiring researchers who can develop and execute on a valuable research agenda, and who can operate with a high degree of autonomy.

Rethink Priorities (AI Governance & Strategy team) is also interested in hiring researchers who can develop and execute on a valuable research agenda, including but not limited to policy development work.

Some useful skills for research

Skills that these organisations are looking for in their researcher hiring include:[2]

Some useful skills for policy development research

Along with the skills in the preceding subsection, the following skills are useful for policy development research, specifically.

Policy think tanks

Some relevant policy think tanks are interested in hiring policy development researchers to figure out what policy actions key governments should take to make AI go well; to translate that into a concrete[3] plan for implementing those policy actions; and to kick off the implementation of that plan.

Some useful skills for government-facing AI policy development work

Along with the skills for policy development research mentioned above, the following skills are useful for doing more government-facing AI policy development work.

AI labs

Some governance teams at relevant AI labs are interested in hiring two kinds of profiles:

1) Policy development researchers to figure out what policy actions the lab should take, and translate that into a concrete plan for implementing those policy actions.

(Useful skills for this kind of work are covered above.)

2) People to do stakeholder management, consensus building and internal education within the lab, to help with the implementation of policy actions.

Some useful skills for stakeholder management work

Technical work for AI governance

I also talked with two people with relevant expertise about technical work in AI governance. Some potentially useful information from those conversations:

Some areas of improvement for junior researchers

Some people hiring in AI governance mentioned areas where junior researchers tend to be less skilled. I summarise these findings. They should be treated as anecdotal evidence, and will only apply to some people.

Thanks to the people I interviewed as part of this project; to Kuhan Jeyapragasan for feedback; to Ben Garfinkel for feedback and research guidance; and to Stephanie Hall for support.

  1. ^

    This kind of tactical implementation analysis requires detailed understanding of how policymaking works within the relevant institution.

  2. ^

    NB this list of skills, and the ones which follow in subsequent sections, aren’t necessarily endorsed by people hiring at the organisations in question. (Though the lists were informed by the interviews I conducted.)

  3. ^

    To give a sense of the level of concreteness that’s desired here, it would be something like: “[this office] should use [this authority] to put in place [this regulation] which will have [these technical details]. [These things] could go wrong, and [these strategies should adequately mitigate those downsides].”

  4. ^

    “Decentralised AI training” refers to AI training runs that are distributed over many smaller compute clusters, rather than a single large compute cluster.


weeatquince @ 2023-06-30T09:10 (+6)

Curious if you have a sense of the geographic scope of these needs / talent gaps?

(Policy development work can be extremely country-dependent. The same person could be highly qualified to do this work in the UK and highly underqualified to do it for the US, or India, or China, or Finland, etc.)

Thanks :-)

Angélina @ 2023-07-02T19:33 (+5)

Great post, thanks!
I'm not sure to what extent you intended to differentiate AI governance from AI policy when writing this post. It seems to me that the AI safety community tends to underestimate the importance of directly engaging with more official institutions to do policy work (e.g. the OECD, governments). These may have small teams working on AI policy, but their capacity for action is considerable, given how new the field of GPAI governance is. This contrasts with conducting research within the organisations mentioned above (in other words, 100% “EA-aligned” organisations). It appears to me that doing "AI policy implementation" can eventually have a larger direct impact, particularly under short timelines, compared to AI governance research roles.

JP Addison @ 2023-06-16T23:17 (+4)

This seems excellent to me. Very timely, very well done.

JP Addison @ 2023-06-29T14:19 (+3)

Revisiting on the occasion of curation: AI governance seems like a high growth area right now, where a bunch of people should consider getting involved. For those who are considering it, this seems like gold dust in evaluating their fit.

It's well-written, and I find its points compelling. For example, your bullet on "Appropriately weighing evidence" is a really well-said description of a point I've only vaguely gestured at before, and in all the descriptions of epistemics I've read, have not seen so well put.

Sam Clarke @ 2023-06-19T14:20 (+2)

Thanks JP!

Matt Boulton @ 2023-07-01T08:38 (+2)

Great post, and extremely timely.

I'm currently earning to give in the technology sector, working in content strategy and corporate communications, but am looking to shift into an industry role (like AI governance) where I can have more direct impact. So, it's encouraging to see these organisations emphasise the need for writing, abstraction, and stakeholder management.

Though I'm not a researcher, I'm confident that these types of organisations will require more dedicated corporate communicators and content-centric types – especially those who can distill complex problems, topics, and ideas for wider public consumption. Did anybody touch on this during your interviews?

I work remotely, so similar to weeatquince's question, I'm also curious where these organisations generally hire, and if they adopt remote-work arrangements. (Even when it's logistically possible, some places are resistant to it, hence the question.)

Thanks for spending the time putting this together, Sam!

AI Law @ 2024-02-02T16:43 (+1)

This was a super interesting read.

One of the major failures I often see, working in policy, is also a lack of actual real-world experience. There are a huge number of upsides from the Undergrad to PhD to Academic pipeline, but one downside is that many people who have never actually worked in-industry or in any non-academic role have very little idea of just how much the 'coalface' differs from what is written on paper, or just how cumbersome even minor policy shifts can be.

I judged an AI regulation/policy contest last year, and my number one piece of feedback was that entrants hadn't considered the human element of the 'end-users' of policy. For example: can the people/orgs a new regulation or governance measure impacts actually understand what the regulations want from them, demonstrate compliance - and even comply at all? Not all orgs are impacted equally.

I agree then that your pointers towards stakeholder management and social skills are very important, as is seemingly irrelevant experience working outside of research. One of the best policy researchers I know used to work in a warehouse, and that knowledge of complex socio-logistic environments within large organisations helps him tremendously, even though on paper that was an irrelevant role.

Jamie Bernardi @ 2023-09-05T15:25 (+1)

I revisit this post from time to time, and had a new thought!

Did you consider at the time talent needs in the civil service & US Congress? If so, would you consider these differently now?

This might just be the same as "doing policy implementation", and would therefore be quite similar to Angelina's comment. My question is inspired by the rapid growth in interest in AI regulation in the UK & US governments since this post, which led me to consider potential talent needs on those teams.

monadica @ 2023-07-01T13:49 (+1)

I agree - it seems like compute governance specifically needs interdisciplinary knowledge spanning a spectrum of fields. One area of improvement might be co-design labs, designing software and hardware alongside policy. I'm wondering about the high-level aptitudes of someone who works on compute governance.

SebastianSchmidt @ 2023-06-29T14:35 (+1)

Thanks for this! How many people did you interview for this?