The US AI policy landscape: Where to work to have the biggest impact

By 80000_Hours @ 2026-02-10T00:36 (+26)

The US government may be the single most important actor for shaping how AI develops. If you want to improve the trajectory of AI and reduce catastrophic risks, you could have an outsized impact by working on US policy.

But the US policy ecosystem is huge and confusing. And AI policy is made by specific people in specific places — so where you work matters enormously.

This guide aims to help you think about where specifically to work in US AI policy so you can actually make a large impact.

In Part 1, we cover five heuristics for finding the most impactful places to work. In Part 2, we cover five policy institutions that we’d guess are the most impactful for AI and name specific places in each that are especially promising.

If you want to work in US policy, we also recommend the expert-vetted guides at Emerging Tech Policy Careers for practical advice on pathways into government and detailed profiles of key institutions.


Part 1: How to find the most impactful places to work on AI policy

It’s hard to predict precisely when and where key AI policy decisions will happen, but you can position yourself for greater impact. The following five heuristics can help you judge where you could have the best shot at positively shaping the trajectory of advanced AI.

Prioritise building career capital

Early in your policy career, avoid tunnel vision on AI policy roles. Many entry-level positions worth considering won’t focus on AI. What usually matters more is building career capital — knowledge, context, networks, and credibility that let you navigate the policy world.[1]

For example, you won’t get to specialise in AI risks as an intern in Congress. (You’ll probably spend much of your time answering phones.) But you’ll gain tacit knowledge, networks, and credentials that may accelerate your career more than an AI-focused opportunity lacking these benefits.

Here are some questions to help figure out how much career capital a role might give you:[2]

It can be hard to answer these questions by research alone — when possible, talk to people who’ve worked in or near the places you’re considering.

In short: Get your foot in the door, build relationships, and learn how policy works. Then cash out that career capital to move into more targeted, directly impactful roles.

Work backwards from the most important issues

AI policy is a huge and complex field — here are some ways to break it down:

Inputs to AI development:
  • Data
  • Compute
  • Talent
  • Investment
  • Algorithms

AI applications:
  • Military
  • Science and innovation (e.g. biology, robotics)
  • Cyber operations
  • Supply chains
  • Labor automation
  • … and many more

Policy levers:
  • Testing and evaluation
  • Industrial policy
  • Research and development (R&D) funding
  • Regulations
  • Export controls
  • … and many more

You can mix and match across these columns and end up working on very different things. For example, you might work on R&D funding (investment) for military applications, or on export controls for AI chips. Your impact will depend on which issues you choose to work on and which levers you use.

You might start by asking: Which issues seem most important?[4] Then, work backwards to the policy tools that you think might address them most effectively. For example, if you’re most concerned about:

In practice, issues often overlap, and many policy roles let you pull on several levers at once — or one lever that mitigates several risks. Still, prioritising which AI issues matter most can help you zero in on the levers, and then the roles, best placed to address them.

Find levers of influence

Some policy institutions are far better equipped than others to work on your AI policy priorities, depending on the formal and informal powers they hold.

Formal powers are legal authorities,[5] like deciding what research and development priorities to fund, regulating individuals or companies, or setting interest rates.

For example:

Informal powers — like coordination, research and argumentation, and agenda-setting — aren’t enforceable, but they can matter as much as formal powers, or more. Many policy organisations have little budget or regulatory authority but can sway others that do.

For instance, White House offices can’t create new laws, but they can steer agencies toward their priorities and broker compromises among them.[7] Likewise, most think tanks and advocacy organisations don’t have formal powers, but they can influence Congress or the White House if they’re trusted advisors.

Both kinds of power matter. The most impactful institutions usually have one, or both, in abundance.

When you’re looking for impactful places, ask: Does this place control money or relevant rules directly? Or can it reliably influence those that do?[8]

Prepare for ‘policy windows’

Timing matters a lot in policy. Sometimes a single crisis, scientific discovery, or article can catapult an issue onto the policy agenda overnight. Other times, it takes years of building evidence that something is needed before action finally breaks through.

These breakthrough moments are called ‘policy windows.’ It’s hard to predict when they’ll open. A few examples:

Your potential for policy impact can spike when a certain window opens:

For your career, this means:

In AI, there’s also the consideration of how quickly the technology is developing. Many think the most effective time to act is before AI systems get very powerful, which may be quite soon. If you think the most important AI policy decisions will be made in the next 3–5 years, you probably want to prioritise paths that focus on AI earlier while still developing career capital. That might mean:[13]

Consider personal fit 

Policy impact depends heavily on your personal fit. If you’re especially well-suited to a particular policy role, you can often achieve vastly greater impact, while poor fit often leads to burnout.

Some traits matter for almost all policy work, like professionalism, humility, initiative, and being able to work with people who hold different views and values. But beyond that, different roles reward very different strengths. For example:

The stereotype of a suit-wearing, cocktail-reception-attending staffer captures only a slice of the policy world. Your most impactful role could be in any of the places we discuss below (or beyond), depending on your skills, interests, and preferred work style.

Part 2: Our best guess at the most impactful places (right now)

Below, we cover five policy institutions and give our best guesses for the most impactful places to work in each.[15]

1. Executive Office of the President

The Executive Office of the President (EOP; aka the White House) is small but mighty. Its ~2,000 staff help implement the president’s agenda and oversee the ~three-million-person executive branch. Spread across over 20 offices, EOP influences everything from the federal budget to national security to science and technology priorities. The leaders of these offices are often the president’s closest advisors, and their guidance — shaped by their staff — can sway decisions at the highest level.

The White House matters for AI policy by:

The White House also has some key institutional constraints. Being so far upstream, it doesn’t get much ‘ground-level’ visibility into how policies are developed and carried out. Compared to the whole executive branch, White House offices have very small staffs and budgets, and most rely largely on soft power to achieve their policy goals.

Here are some key career considerations for working in the White House:

In short, White House roles can be exceptionally impactful — you’re close to the president, shaping government-wide agendas, and often in the room for time-sensitive, pivotal decisions. But they’re also typically short-lived and intense, tied to political cycles, and sometimes only as effective as your ability to rally the much larger machinery of government behind you.

Based on how much they have historically influenced technology policy, their overall levels of soft and hard power, and their potential for building career capital, we’d guess that the following offices would be especially impactful choices for AI policy:

2. Federal departments and agencies

Federal departments and agencies implement policy: they administer social programs, guard nuclear stockpiles, break up monopolies, approve new drug trials, launch satellites, and train the military, among thousands of other things. Most people and money in the US government sit in these departments.

Departments are massive and specialised, with tens or hundreds of thousands of employees spread across dozens of sub-agencies. Fifteen secretaries (the Cabinet) lead the 15 Departments.[24]

Federal departments and agencies can matter for AI policy by:

With their huge scope come important limitations. Agencies answer to both Congress and the president: Congress sets their missions and budgets through laws, and the White House directs their day-to-day operations and high-level priorities. And as enormous, specialised bureaucracies, departments tend to develop entrenched procedures and risk-averse cultures that can make change slow.

Here are some key career considerations for working in federal agencies:

Our best guess at the five most impactful federal departments for AI policy:

3. Congress

Congress formally holds some of the most important levers in government: setting the federal budget and making laws. This means most big, lasting policy changes need buy-in from Congress.

Congress matters for AI policy by:

Rep. Don Beyer discusses AI risks, via source.

On the flip side, Congress isn't exactly known for its efficiency.

Comic about Congress dysfunctionality, via source.

There’s good reason for this scepticism:

But Congress is easy to underestimate. The more polarised and theatrical something is, the more coverage it tends to get, which means bipartisan policymaking is often underrepresented in the news. (You probably never heard that Congress funded $175 billion to upgrade public water systems in 2020 or raised the tobacco purchasing age from 18 to 21 in 2021.)

You’ll need to consider three major structural dynamics when finding roles in Congress:

Impact rules of thumb: All else equal, Senate offices usually matter more than House offices, committees more than personal offices, and the majority more than the minority.[32]

But an office’s culture and your specific role in it matter greatly for your work experience and impact. For instance, some offices are highly hierarchical and top-down; others give junior staff more autonomy in writing legislation, leading meetings, or managing issue portfolios. These dynamics are hard to research, so prioritise talking with current or former staff who can give you a fuller picture.

Many people who thrive elsewhere in government find the Hill uniquely chaotic and political. This means you should think carefully about your fit — but also means that if you are a good fit, you may have an unusual comparative advantage.

Here are some career considerations for working in Congress:

Our best guess at the five most impactful Senate and House committees for AI policy:[35]

Senate committees / House committees

4. State governments

State legislatures and executive agencies don’t command headlines as much as Congress, but they often move much faster. Many state legislatures are dominated by a single party, which means fewer veto points and less gridlock. They’re also closer to the communities and industries they govern. And because state staffs are usually smaller and thinner on technical expertise, one capable hire can have outsized influence.

This agility and leverage make states important players in AI policy.[36] For example, California Governor Newsom signed SB 53 into law in September 2025, which introduces frontier AI lab whistleblower protections and safety incident reporting and requires large developers to publish their plans for mitigating catastrophic risks. In January 2026, New York enacted the RAISE Act, which also introduces transparency-focused rules for frontier labs.[37]

States shape AI outcomes on two fronts: locally, within their borders, and nationally, by influencing federal policy and industry behavior.

As with federal policy, those working at the state level will have to choose between policy institutions, such as state legislatures, government agencies or executive offices (like the governor’s office), or state-focused think tanks or advocacy organisations. The tradeoffs between these options often mirror those at the federal level, but each state also has its own quirks that can change the calculus. For instance, it matters whether a state legislature has unified or divided party control, meets year-round or only part of the year, and whether members have their own staff or share them with leadership.

State AI policy also faces a major vulnerability: Congress can often override it. When federal and state laws clash, federal law typically wins, and Congress can sometimes go further by barring states from regulating in an area altogether.[40]

This risk is ever-present: In June 2025, Congress considered a 10-year ban on certain state AI laws.[41] And, as of December 2025, the threat to state legislation is back in the form of an executive order aimed at weakening state-level AI regulations through litigation, conditional federal funding, and a federal framework to preempt state laws.

State AI legislation tracker, updated October 6, 2025 from source.

Here are some key career considerations for working in state-level AI policy:

All else equal, federal policy usually has a higher ceiling for impact. But state roles are often easier to land and — particularly in influential states — can bypass gridlock at the federal level to shape AI trajectories nationally.

Our best guess for the five most impactful states for AI policy:

Within states, we think the highest-impact roles are usually in the legislature, the Governor’s office, or in agencies that implement relevant AI policies.[43]

5. Think tanks and advocacy organizations

Policymakers have little time to think deeply about the range of issues they have to cover. Think tanks can do it for them: they conceive, analyse, and push for ideas, serving as ‘idea factories.’ Advocacy organisations play a similar role, but usually with a sharper ideological edge or a specific mission. The lines between think tanks and advocacy organisations can be blurry in practice, and some policy-focused nonprofits don’t clearly fall in either category.[44]

Think tanks influence policy through several routes:

Advocacy organisations may also use these channels, but generally focus more on lobbying — for instance, meeting with policymakers to push their agenda or mobilising constituents to call their senators about an issue.

The biggest drawback of think tank and advocacy work is distance from actual decision makers. This makes their impact especially ‘lumpy’ — sometimes very high, but generally sporadic and hard to predict. In one think tank staffer’s words:

If we judge [think tanks] by whether they are successful in getting policy implemented, most would probably fail most of the time.

— Andrew Selee, former executive vice president of the Woodrow Wilson Center

If you’re just starting out, you might have some direct policy impact in a think tank, but the bigger payoff is usually career capital. Think tanks let you test whether you enjoy policy work, build skills valued in the policy world, and grow your network. Many junior congressional staffers come from think tanks, where they’ve built early credibility and relationships. And sometimes, junior researchers ‘ride the coattails’ of a senior staffer into a new administration, landing entry-level political roles when their boss gets appointed.

Some think tanks and advocacy organisations that we think could be impactful for AI policy are:[45]

Think tanks / Advocacy orgs

Conclusion

The tl;dr on how to have a big impact in US AI policy: build career capital, work backward from the most important issues, prioritise institutions with meaningful power, stay ready for policy windows with AI ‘timelines’ in mind, and choose roles that fit your strengths.

We think especially promising options include the White House, federal departments like Commerce and Defense, Congress (especially on relevant committees), major state governments like California (assuming state legislation remains feasible), and well-connected think tanks or advocacy organisations. But your personal fit really matters: the ‘best’ place to work in the abstract may not be the best place for you.

Learn more about how and why to pursue a career in US AI policy

Top recommendations

Further reading

Resources from 80,000 Hours

Resources from others

  1. ^

    The ‘explore–exploit’ dilemma is a decision-making concept about balancing two strategies: exploration (trying new options that might lead to better outcomes) and exploitation (sticking with the best option you know so far). Early career stages are usually best spent exploring — collecting tools and experience first, then doubling down where you think you could have the most impact.

  2. ^

    None of these factors should be definitive for your decision, but they’ll all contribute to how much policy-relevant career capital you’ll gain — which is particularly valuable early in your career. You’ll likely need to consider tradeoffs between them: for example, you might develop stronger skills at a less well-known organisation with better mentorship, or you might build a stronger network at a place with less opportunity for skill development.

  3. ^

    Another highly valuable credential is a security clearance, which makes you more competitive for other cleared roles. Note that the strength of a given credential may vary across policy communities — for instance, some AI policy credentials may be well-recognised within tech policy circles but carry less weight in broader Washington.

  4. ^

    These examples are meant to illustrate how you might reason backwards from the AI issues you find most concerning — not to suggest that these are necessarily the most important issues or policy tools overall. We don’t dive into which threats or interventions should be prioritised in this article, but if you want to explore different risk scenarios and policy approaches in more depth, see these articles.

  5. ^

    Legal authorities mainly come from laws passed by Congress or executive orders (directives from the president).

  6. ^

    Renamed the Department of War (DOW) by executive order in September 2025.

  7. ^

    For instance, the White House Office of Science and Technology Policy (OSTP) has convened agencies on AI risk management frameworks and evaluation methods. It didn’t control the funding itself, but by setting the agenda and coordinating across departments, it helped shape how resources were deployed.

  8. ^

    It’s generally straightforward to look up the formal authorities of an office — e.g. what laws, budgets, or regulations it oversees. What’s harder is understanding its soft powers — the influence it has through persuasion, networks, or credibility. Policy roles that give you broad exposure (like congressional staff positions or agency detail assignments) can help you see how the two interact in practice. Even before entering policy, talking with people in the field and following high-quality policy commentary can give you a sense of where informal influence really lies.

  9. ^

    In Washington, people sometimes say an issue is ‘in the water’ — meaning it’s widely circulating in policy conversations, even if it hasn’t yet made it onto the formal agenda.

  10. ^

    Partisan signals can range from big, obvious choices — like interning in a congressional office or donating to a campaign — to smaller ones, such as registering with a party or joining a partisan student group. These signals can open doors within that party but may limit opportunities with the other side, especially in highly vetted roles. The right approach depends on your own beliefs, comfort with partisan affiliation, and long-term goals.

  11. ^

    Civil service roles are formally nonpartisan: you’re hired to serve any administration, regardless of your politics. In practice, agencies have distinct cultures — for example, the Department of State and the Environmental Protection Agency are often seen as more left-leaning, while the Department of Homeland Security and the FBI are seen as more conservative. This contrasts with political appointments, where partisanship is explicit and often vetted for — appointees are chosen specifically to advance an administration’s priorities.

  12. ^

    Political appointees are temporary positions that aren’t typically a first career move unless you’re unusually connected.

  13. ^

    We think the most pivotal policy moments will likely be when systems powerful enough to lock in certain futures are first deployed. Unless you have very short timelines (e.g. you think AGI is fairly likely to be here by 2027 or earlier) and high confidence, it’s often still worth investing in (some) career capital for policy work. The field is still small, and demand for capable people is growing rapidly. Many areas remain neglected, new institutions keep appearing, and there aren’t enough people with both technical and policy expertise. In practice, people more often overestimate how long it takes to become useful in AI policy than the reverse.

  14. ^

    We generally think PhDs aren’t worth the opportunity cost for people interested in policy careers. They can take 6+ years, and the payoff is usually limited outside academia or certain technical niches. That said, a PhD might make sense if you’ve already started, if you do one in the UK (where they take less time), if you’re targeting specific executive branch jobs where PhDs carry weight, or if you’re aiming at research-heavy roles outside US policy.

  15. ^

    These are our best guesses at the time of writing, based on assumptions about which areas of AI policy are likely to matter most. Shifts in politics, the public, or in AI development trajectories could change which institutions are most impactful. Depending on your skills, the highest-impact option might be outside this list entirely — for example, working as a journalist or public writer, at an international organisation, running for office, or working in another policy-relevant role where your background is unusually valuable.

  16. ^

    Even when they can’t directly set policy, presidents bargain constantly — with Congress, agencies, foreign leaders, and the public — to get buy-in for their agendas. As political scientist Richard Neustadt argued: “Presidential power is the power to persuade.”

  17. ^

    Sometimes the president’s proposal is adopted almost wholesale. The tax cuts and spending priorities Trump proposed in spring 2025 became the central framework for Congress’ budget (the One Big Beautiful Bill).

  18. ^

    (The US currently has 48 national emergencies in effect.)

  19. ^

    You must be a US citizen to receive a security clearance. See this guide for a breakdown of US policy roles available to foreign nationals.

  20. ^

    The share of political appointees in the White House is far higher than in federal agencies. Nominees for these positions — about 225 within the EOP and roughly 4,000 across the executive branch — typically have strong networks connecting them to the administration and undergo political vetting. Appointees in the most senior of these roles must also be confirmed by the Senate.

  21. ^

    Career civil service roles are legally required to be nonpartisan, with hiring and advancement based on merit rather than political affiliation. Some analysts argue that recent Trump administration policies introduce elements of political vetting into the hiring processes for these roles.

  22. ^

    The importance of elite credentials varies significantly by administration. Under President Trump’s first term, for example, only 21% of mid- or senior-level staffers had Ivy League degrees.

  23. ^

    Hear firsthand accounts of White House staffers here, here, and here.

  24. ^

    They are: Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, Interior, Justice, Labor, State, Transportation, Treasury, and Veterans Affairs. Outside of these departments, there are also many independent agencies — like the Environmental Protection Agency (EPA), NASA, and the Federal Reserve.

  25. ^

    See budget documents on the National AI Research Institutes program, Department of Energy FY 2024 Budget in Brief, and Department of Defense FY 2024 budget request for uncrewed and autonomous systems. Federal agencies also shape markets through their spending. For some key industries, the government is the largest or only customer. Here, agencies’ decisions on which systems or products to buy (like fighter jets or intelligence software) can make ‘winners and losers’ and shape industry behavior. While this influence may hold less for AI than other industries, it still could matter considerably in areas like defense AI, cybersecurity, and government cloud services where federal contracts can determine market leaders.

  26. ^

    For cleared roles, aim to apply at least nine months before your target start date.

  27. ^

    We’re cheating a bit here — the intelligence community isn’t technically a department but rather a network of 18 agencies and military services across the executive branch. They’re responsible for collecting, analysing, and delivering intelligence to senior US leaders to inform national security decisions.

  28. ^

    In practice, presidents often test the boundaries of their powers. For example, while only Congress can declare war, presidents have repeatedly launched major military actions without formal declarations, citing their role as Commander-in-Chief or relying on broad congressional Authorizations for Use of Military Force (AUMFs). They’ve also used executive orders to create new offices or shift responsibilities within agencies — moves that reshape how the federal government operates even though they can’t legally create or abolish agencies without Congress. This gradual expansion of presidential authority has led some scholars to describe the modern presidency as ‘imperial.’

  29. ^

    This is especially true in the House, where members are up for reelection every two years (and may spend huge portions of their time fundraising or campaigning). Some factors — like being in an electorally ‘unsafe’ district or having constituents who care a lot about a particular issue — can intensify these electoral pressures.

  30. ^

    Some describe this as the divide between ‘show horses,’ who focus on visibility and messaging, and ‘workhorses,’ who quietly draft and pass substantive legislation. One congressional staffer noted the shift toward the former: “In Congress today, we have a sea of show horses, all cultivating their public personas, polishing off their Twitter chops, doing things to capture the conservative or progressive zeitgeist of the moment, the more outrageous the better … The bread-and-butter of the legislative process, constructing complicated deals among competing special interests, crafting agreements among industries and setting the rules of the road for economic progress, has been derailed by intense political partisanship.”

  31. ^

    Journalist Robert Kaiser observed that committee staffers can have more influence on the substance of legislation than committee members themselves.

  32. ^

    ‘All else equal’ does a lot of work here. An influential member can outweigh an entire committee, and a highly effective House member could make a much bigger impact than some senators. Where you’ll have the most impact depends heavily on your own fit, timing, and the specific opportunities on offer.

  33. ^

    Congress can be surprisingly insular. Members are hard to pin down without strong relationships, and staffers are often overextended and hard to reach. Unless you’re directly on the Hill, have a well-cultivated network, or work in an advocacy group whose job is engaging Congress, it can be difficult to exert influence here in a focused way.

  34. ^

    This assumes that Congress is paying your salary: some congressional fellowships like Horizon, AAAS, and TechCongress pay fellows’ salaries during their placement.

  35. ^

    We list committees rather than individual members because members can turn over after their term, and committee work is often more impactful. That said, working for the offices of certain individual members can be more impactful than working for the committees themselves. One way to choose which member to work for is to look at which committees they serve on.

  36. ^

    See which states proposed which AI policies in 2025 here.

  37. ^

    See more in-depth discussions of the differences between SB 53 and RAISE Act here and here.

  38. ^

    California’s Transparency in Frontier Artificial Intelligence Act (SB 53) is the nation’s first major state-level AI safety law. It combines transparency rules, a new public research cluster, whistleblower protections, and recommended annual updates — aiming to set a model for federal legislation in a space where Congress has yet to act.

  39. ^

    The ‘California Effect’ is the best-known example: California’s tougher vehicle emissions standards in the 1960s, and more recently its AI safety legislation, have prompted companies to follow those rules nationwide and inspired action in other states and in Congress.

  40. ^

    This is called preemption. The Constitution’s Supremacy Clause makes federal law the “supreme law of the land,” but Congress can only preempt in areas where it has constitutional authority — usually things that cross state lines, like immigration, foreign affairs, interstate commerce, or national defense.

  41. ^

    In May 2025, the House passed a budget bill that included a 10-year ban (a “moratorium”) on state AI laws. When the bill reached the Senate, Senators proposed a narrower version that tied about $500 million in federal broadband funds to states agreeing not to pass new AI regulations. The Senate ultimately rejected that approach in a 99–1 vote, removing the moratorium entirely.

  42. ^

    States vary widely in how their legislatures operate. ‘Professional’ legislatures like California’s or New York’s meet year-round and employ permanent staff, while ‘citizen’ or part-time legislatures (such as Texas or Montana) often convene only part of the year and rely heavily on temporary or ‘session-only’ staff.

  43. ^

    For example, California’s SB 53 requires AI labs to report major incidents to the California Office of Emergency Services.

  44. ^

    For example, the Heritage Foundation (right-leaning) and the Center for American Progress (left-leaning) ride this line: they publish policy analysis but also actively campaign for preferred outcomes. Legally, the distinction often comes down to tax status. Most think tanks are 501(c)(3) nonprofits — educational and charitable organisations that can receive tax-deductible donations but have strict limits on lobbying and election-related work. 501(c)(4) ‘social welfare’ groups, by contrast, can spend more time directly lobbying or supporting specific policies and politicians, and their donations aren’t tax-deductible. Many groups operate both arms — a (c)(3) for research and education and a (c)(4) for lobbying.

  45. ^

    We don’t go into detail about these think tanks here, but if you are weighing think tanks to pursue and want more depth, apply to speak with us – we may be able to put you in touch with experts who will have up to date views on the field. Note that in many ways, think tanks are the hardest of the five policy institutions to predict impact for: they lack formal authority, and their influence can shift dramatically with political conditions, funding, and relationships. Also, some policy-adjacent organisations don’t fit neatly into either ‘think tank’ or ‘advocacy’ categories — for example, Model Evaluation and Threat Research (METR) and Epoch AI focus on policy-relevant technical evaluations and forecasting. Similarly, organisations like the Centre for the Governance of AI (GovAI), the Institute for AI Policy and Strategy (IAPS), and the Institute for Law & AI (LawAI) aren’t traditional DC think tanks, but they produce policy-relevant research and cultivate talent.


SummaryBot @ 2026-02-11T17:30 (+2)

Executive summary: This guide argues that the US government is a pivotal actor in shaping advanced AI and outlines heuristics and specific institutions — especially the White House, key federal agencies, Congress, major states, and influential think tanks — where working could plausibly yield outsized impact on reducing catastrophic AI risks, depending heavily on timing and personal fit.

Key points:

  1. The authors propose five heuristics for impact: build career capital early, work backward from the most important AI issues, prioritize institutions with meaningful formal or informal power, prepare for unpredictable “policy windows,” and choose roles that fit your strengths.
  2. They argue that early-career professionals should avoid narrow AI specialization if it sacrifices networks, tacit knowledge, credentials, and broadly valued policy skills.
  3. The guide suggests reasoning from specific AI risk concerns (e.g., catastrophic misuse, geopolitical conflict, AI takeover) to particular policy levers such as liability rules, export controls, safety evaluations, and R&D funding.
  4. The Executive Office of the President is presented as especially impactful because of its agenda-setting power, budget proposals, foreign policy authority, and ability to act quickly in crises, despite institutional constraints and political turnover.
  5. Federal agencies, Congress (especially key committees and majority-party roles), and major states like California are described as powerful because they control budgets, implement and interpret laws, regulate industry, and can set de facto national standards.
  6. Think tanks and advocacy organizations are portrayed as influential through research, narrative-shaping, lobbying, and talent pipelines into government, though their policy impact is characterized as “lumpy” and less predictable.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Tony Rost @ 2026-02-10T23:26 (+1)

This is a great guide and I appreciate the work.  Thank you!

Please consider adding the bans on AI consciousness/sentience/self-awareness.  There are several laws in flight: https://harderproblem.fund/legislation/

The issue here is that these laws go beyond liability topics with personhood, and instead could choke future discussions about potential welfare issues.  They simply go too far, and are too sticky.