Aptitudes for AI governance work

By Sam Clarke @ 2023-06-13T13:54

I outline 8 “aptitudes” for AI governance work. For each, I give examples of existing work that draws on the aptitude, and a more detailed breakdown of the skills I think are useful for excelling at it. 

How this might be helpful:

Epistemic status:

Some AI governance-relevant aptitudes

Macrostrategy

What this is: investigating foundational topics that bear on more applied or concrete AI governance questions.[1] Some key characteristics of this kind of work include:

Examples:

Useful skills:

Interlude: skills that are useful across many aptitudes

Under each aptitude, this post lays out skills that seem useful for excelling at it. But there are some skills that are useful across many aptitudes. To avoid repetition, I'm going to list those here.

Useful skills for all the aptitudes

Useful skills for research aptitudes

Policy development

What this is: taking "intermediate goals"[2] (e.g. "improve coordination between frontier AI labs") and developing concrete[3] proposals for achieving them. Some key characteristics of this kind of work include:

Examples:

Useful skills:

Well-scoped research

What this is: answering well-scoped questions[4] that are useful for AI governance.

Examples:

Useful skills:

Distillation

What this is: clarifying ideas, working out how best to present them, and writing them up (rather than coming up with new ideas).

Examples:

Useful skills:

Public comms

What this is: communicating about AI issues (to e.g. ML researchers, AGI labs, policymakers, the public) to foster an epistemic environment that favours good outcomes (e.g. more people believe AI could be really dangerous).

Examples:

Useful skills:

Political and bureaucratic aptitudes

What this is: advancing into some high-leverage role within (or adjacent to) a key government, AI lab or other institution, from which you can help it to make decisions that lead to good outcomes from advanced AI.

Examples:

Useful skills:

Management and mentorship

What this is: directing and coordinating people to do useful work, and enabling them to become excellent.[5]

Examples:

Useful skills:

Caveats

Thanks to Charlotte Siegmann and Rose Hadshar for comments and conversations, and to Ben Garfinkel for guidance and feedback.

  1. ^

    Note that it’s normally important to be able to move back and forth between different levels of abstraction. Otherwise, a pitfall of this kind of work is either getting lost in Abstraction Land, or not being sufficiently attentive to relevant empirical facts.

  2. ^

    By 'intermediate goal', I mean a goal for improving the lasting impacts of AI that’s more specific and directly actionable than a high-level goal like ‘reduce risk of power-seeking AI’ but is less specific and directly actionable than a particular intervention. E.g. something like ‘improve coordination between frontier AI labs’.

  3. ^

    Eventually, proposals need to be very concrete, e.g. “[this office] should use [this authority] to put in place [this regulation] which will have [these technical details]. And they’re not going to want to do it for [these reasons]. [This set of facts] will be adequate to convince them to do it anyway.” Normally there will be intermediate work that isn’t as concrete as this.

  4. ^

    By ‘well-scoped questions’, I mean ones that don’t require further clarification and that have a fairly clear methodology for answering them.

  5. ^

    Management and mentorship seem like somewhat different skillsets to me—in particular, it seems possible to be excellent at mentorship but not at other aspects of management—but they blur into each other enough that I've grouped them.