Long-term AI policy strategy research and implementation

By Benjamin_Todd @ 2021-11-09T00:00 (+1)

This is a linkpost to https://80000hours.org/career-reviews/ai-policy-and-strategy/

In a nutshell: Advancing AI technology could have both huge upsides and huge downsides, including potentially catastrophic risks. To manage these risks, we need people making sure the deployment of AI goes well.

Recommended

If you are well suited to this career, it may be the best way for you to have a social impact.

Review status

Based on a medium-depth investigation 
 

Why might working to improve AI policy be high impact?

As we’ve argued, in the next few decades, we might see the development of powerful machine learning algorithms with the potential to transform society. This could have major upsides and downsides, including the possibility of catastrophic risks.

To manage these risks, we need technical research into the design of safe AI systems (including the ‘alignment problem’), which we cover in a separate career review.

But in addition to solving the technical problems, there are many other important questions to address. These can be roughly categorised into three key challenges of transformative AI strategy:

We need a community of experts who understand the intersection of modern AI systems and policy, and work together to mitigate long-term risks and ensure humanity reaps the benefits of advanced AI.

What does this path involve?

Experts in AI policy strategy would broadly carry out two overlapping activities:

  1. Research — to develop strategy and policy proposals.
  2. Implementation — working together to put policy into practice.

We see these activities as no less important than the technical ones, but they are currently more neglected. Many of the top academic centres and AI companies have started to hire researchers working on technical AI safety, and there's perhaps a community of 20–50 full-time researchers focused on the issue. However, there are only a handful of researchers focused on strategic issues or working in AI policy with a long-term perspective.

Note that there is already a significant amount of work being done on nearer-term issues in AI policy, such as the regulation of self-driving cars. What’s neglected is work on issues that are likely to arise as AI systems become substantially more powerful than those in existence today — so-called ‘transformative AI’ — such as the three non-technical challenges outlined above.

Some examples of top AI policy jobs to work towards include the following, which fit a variety of skill types:

Examples of people pursuing this path

Helen Toner

Helen worked in consulting before getting a research job at GiveWell and then Open Philanthropy. From there, she explored a couple of different cause areas, and eventually moved to Beijing to learn about the intersection of China and AI. When the Center for Security and Emerging Technology (CSET) was founded, she was recruited to help build the organisation. CSET has since become a leading think tank in Washington on the intersection of emerging technology and national security.

Ben Garfinkel

Ben graduated from Yale in 2016, where he majored in physics, math, and philosophy. After graduating, Ben became a researcher at the Centre for Effective Altruism, and then moved to the Centre for the Governance of AI (GovAI), which was then housed at the University of Oxford’s Future of Humanity Institute and is now part of the Centre for Effective Altruism. He’s now GovAI’s acting director. As of December 2021, GovAI is hiring.

How to assess your fit

If you can succeed in this area, then you have the opportunity to make a significant contribution to what might well be the most important issue of the next century.

To be impactful in this path, a key question is whether you have a reasonable chance of getting some of the top jobs listed earlier.

The government and political positions require people with a well-rounded skillset, the ability to meet lots of people and maintain relationships, and the patience to work with a slow-moving bureaucracy. It’s also ideal if you’re a US citizen (which may be necessary to get security clearance), and don’t have an unconventional past that could create problems if you choose to work in politically sensitive roles.

The more research-focused positions would typically require the ability to get into a top 10 graduate school in a relevant area, and deep interest in the issues. For instance, when you read about the issues, do you get ideas for new approaches to them? Read more about predicting fit in research.

In addition, you should only enter this path if you’re convinced of the importance of long-term AI safety. The work also requires making controversial decisions under huge uncertainty, so it’s important to have excellent judgement, caution, and a willingness to work with others — otherwise it would be easy to have an unintended negative impact. These traits are hard to judge in advance, but you can get some information early on by seeing how well you work with others in the field.

How to enter this field

In the first few years of this path, you’d focus on learning about the issues and how government works, meeting key people in the field, and doing research, rather than pushing for a specific proposal. AI policy and strategy is a deeply complicated area, and it’s easy to make things worse by accident (e.g. see the Unilateralist’s Curse).

Some common early career steps include:

This field is at a very early stage of development, which creates a number of challenges. For one, the key questions have not yet been formalised, so there is a need for ‘disentanglement research’ to enable other researchers to get traction. For another, there is a lack of mentors and open positions, which can make it hard for people to break into the area.

Until recently, it was very hard to enter this path as a researcher unless you could quickly become one of (approximately) the top 30 people in the field. While mentors and open positions are still scarce, some top organisations have recently recruited junior and mid-career staff to serve as research assistants, analysts, and fellows. Our guess is that obtaining a research position will remain very competitive, but positions will continue to gradually open up. On the other hand, the field is still small enough that top researchers can make an especially big contribution by doing field-founding research.

If you’re not able to land a research position now, then you can either (i) continue to build up expertise and contribute to research when the field is more developed, or (ii) focus more on the policy positions, which could absorb hundreds of people.

Most of the first steps on this path also offer widely useful career capital. For instance, depending on the sub-area you start in, you could often switch into other areas of policy, the application of AI to other social problems, operations, or earning to give. So the risks of starting down this path are not too high, even if you decide to switch later.

Recommended organisations