Managing risk in the EA policy space

By weeatquince @ 2019-12-09T13:32 (+89)


Summary

There are risks to pushing out policy ideas: bad policy that harms society, reputational risks, and information hazards.

All of these risks are situation dependent. For example, if talking to a policy maker who is already aware of an information hazard, the appropriate risk mitigation actions are very different. None of these risks necessarily implies that policy suggestions should be small scale.

Developing good policy is difficult and complicated. But unless you have a platform or are starting a campaign, the risks of making policy suggestions are low.

I suggest that:

  • Researchers and academics make policy suggestions, talking to policy experts along the way.
  • Donors and funders fund policy interventions, drawing expertise from across the EA community.
  • Campaigners and notables take care to minimise the risks.



1. Introduction

This piece is a brief introduction to thinking about risk and policy influencing for anyone in the EA community.

I am writing this as I have seen some poor thinking related to policy: poorly considered policy ideas and plans, and the value of expertise and existing resources being ignored.

I have also seen the exact opposite: people so worried about getting policy wrong that they avoid making policy suggestions at all.

I certainly do not have all the answers in this space. This post is more of a braindump of opinions (and anecdotes) and should be taken as such. I do however believe it is worth thinking about this topic in more fine-grained detail than I have seen happen to date. This post is meant to open discussions, not close them down, and I welcome disagreement.

Please note:



2. The situation

Background to promoting policies

The following four key points are useful to consider when thinking about promoting policy:

  1. No one is listening unless you have a platform, money, political support, contacts, or a reason for them to listen. Getting senior figures to take policy suggestions seriously is unlikely to happen by accident.
  2. Everyone is shouting. It is easy to forget the vast number of other people asking for policy change. There are probably about a dozen groups in the UK asking for more long-term policy making, maybe a dozen more shouting about AI policy, and countless other corporate lobbyists and individual actors trying to talk to decision makers on these topics.
  3. Some of the shouty people are crazy, and the crazy arguments are on all sides and come from all sources, including serious academics. [Photo: a non-sequitur sign outside Parliament.]

Yet at the same time it is important to remember:

  4. Developing good policy is difficult and complicated. Good policy ideas are rarely developed independently of existing best practice or policy experience. Personally, I found my views on a topic changed drastically over the course of a year or two working in a policy area. Even if you can come up with a great-sounding policy solution without talking to anyone in that policy area, chances are the idea is terrible. (Similarly, if you come up with a great-sounding counterargument against following best practice, chances are your counter is terrible.)

What do people in the EA community get wrong?

WHAT I HAVE HEARD:

EA folk suggesting poorly considered policy ideas or plans. For example planning to lobby for more bednets (see Annex A for reasons why this might not be good) or focusing on voting as a way of improving decision making (see comment thread here against this). To a lesser degree I have also seen EAs ignoring the value of expertise and existing resources. For example trying to research how lobbying works from first principles so as to make useful tools for EAs.

WHAT MAY BE GOING WRONG:

People may be taking a very naive view of policy-making, assuming it is easy. Even if policy suggestions developed in this way are not implemented, people could well waste time, funds and resources and could damage reputations.

I RECOMMEND:

Assume that developing good policy is difficult. Do not assume conclusions reached about how to do philanthropy well also apply to policy. Talk to experts: policy makers or campaigners (ideally with topic- or country-relevant experience).



3. What are the risks?

There are a number of risks to suggesting, campaigning for or engaging with policy.

1. Risks of bad policy. This could cause harm in a number of ways.

2. Reputational risks.

3. Information hazards. This mostly applies to policy that impacts on global security. (More on this here).

There is also some personal risk: pushing for a policy change may lead to actors with vested interests taking action to discredit or undermine you and your projects.

On the other hand, there are risks of inaction too. Bad policy can also arise when useful expert individuals fail to advise and influence.

What do people in the EA community get wrong?

WHAT I HAVE HEARD:

People dismissing a policy idea because it has an obvious yet non-crucial downside. Eg (hypothetical) not supporting a land value tax because it will inevitably burden pensioners.

People pushing for policies that rest on an uncertain crucial consideration. Eg pushing for slower growing breeds of broiler chickens may significantly increase or decrease chicken welfare. Eg pushing for more immigration for the tech sector may help spread good values or dangerous technologies.

WHAT MAY BE GOING WRONG:

People are overly concerned about non-crucial considerations. It is important to remember that nearly all policy, even well-designed policy, will lead to some people losing out, some unintended consequences, and some risk of failure. This is expected. For other policies the devil is in the details, and the expected impact depends on how the policy is implemented. There is a difference between reasonable negative consequences and the genuinely concerning situation where the sign of a policy's impact depends on a highly uncertain crucial consideration. It is also possible (I am unsure) that people are not concerned enough about crucial considerations.

I RECOMMEND:

Focus more on whether a policy is well designed and evidenced (based on existing precedent, best practice, the collated views of experts, etc) than on possible non-crucial downsides.



4. What factors make a policy low risk?

Risks are mostly dependent on context and action. Most of the risks listed are not inherent to a policy suggestion but depend on who you are talking to about it and the surrounding context. The exact same policy suggestion might be high risk in one context and low risk in another. Key contextual factors are:

  1. Framing. How is a policy being suggested? A strong push for a specific policy is riskier than including that policy in a shared braindump of potentially sensible ideas.
  2. Expertise of the system. What is the country’s ability to deliver a policy of this type well? In the UK we are good at getting regulation policy right but poor at getting tax policy right – so suggesting a regulation policy is less risky.
  3. Expertise / knowledge of the individual. Who is the policy being suggested to, and will this individual respond sensibly to the suggestion?
  4. Political messaging. A right-wing idea suggested by a left-wing politician has less risk of being politicised than if it were suggested by a right-wing politician. An idea suggested at election time has more risk of being politicised.
  5. Other supporting actors. Adding a voice to a campaign with many actors is less risky than fronting a campaign.
  6. Vested interests. Who would be against a policy change, what is the relationship like with these groups, how might they respond and how powerful are they?
  7. Ongoing relationship. Will you work with the policy-maker(s) going forward?
  8. Urgency. If a policy is needed urgently then it might be high risk to not suggest it!

Inherent risks. That said, there are some risks that are inherent to the details of the policy being suggested:

  1. Amount of evidence. A policy coupled with a lot of evidence, a credible expert, or support of other stakeholders may be lower risk to push for.
  2. Newness. Innovation is risky. Following existing best practice or a policy already implemented in another country is less risky. A Future Generations Commissioner for the UK is not risky, as the existing precedent in Wales is going well.

What do people in the EA community get wrong?

WHAT I HAVE HEARD:

People repeatedly asking for “low risk policies” in a manner that implies it is clear what that means.

WHAT MAY BE GOING WRONG:

I think people are equating low risk with small scale or low social impact and not considering the extent to which risks are context dependent. At the extremes small policies may be lower risk (eg scrapping capitalism is high risk), but on the whole this is not a useful way of thinking. For example, if you have evidence that longer periods between elections lead to better policy making, that would be a large structural shift but a low risk change to ask for. In advocacy policies need to be of an appropriate scale for the actor you are talking to, so only aiming for low-scale policies hampers campaigners’ ability to push for change.

I RECOMMEND:

Rather than seeking “low risk” policy asks, EA folk should think about how policies line up on the factors set out above and should recognise that risk mostly depends on how a policy is pushed for.



5. Suggestions for folk in the EA community

If you are actively campaigning for a policy or have politicians trying to speak to you, then you should probably be considering the risks in depth (set out in Section 6 below). But otherwise…

Researchers & academics: make policy suggestions

  1. Make policy suggestions. It is useful for EAs in policy to hear what you think. It is useful for the world if academic work relates to real-world decisions. Your ideas are quite possibly terrible, but they are still worth airing; if you are particularly worried, then couch your suggestions in uncertain terms. It is also helpful to be explicit about the assumptions you make.
  2. Talk to policy experts. If you want your policy suggestions to be good, at some point in the process try to talk to somebody in that area of policy or a relevantly similar one. (In the UK you can directly email the relevant government department or get in touch with the London EA policy community via policy@ealondon.com.) Understanding policy could also help you take informed action when it would be useful to others.
  3. Don’t stress about the risks. Unless you have a platform or reason to think your research will be picked up on, or plan to go into politics, then most likely no one is listening. Be willing to consider supporting cautious policy actions by signing letters or attending meetings. (And organisations should consider if they would take action as an organisation). Information hazards might still be risks but maybe less than you expect (I do not have strong views on this).

Donors & funders: Fund policy interventions, drawing expertise from across the EA community

  1. Do not ignore policy, and accept some risks. If the EA community wants to create change then it needs to engage with policy. Accept that risks cannot always be managed down to zero for good policy campaigns, and be aware that in some cases there is a risk of inaction and missed opportunities. Do not worry too much about reputational risks to yourself, especially when funding small EA policy projects where such risks will be minimal (funding political actors / parties directly poses a greater reputational risk).
  2. Draw on the expertise of others. There is a lack of diverse in-house policy expertise among EA funding organisations. This can be hard to address, as expertise is often very country or topic dependent. (As far as I can tell) some funders have refused to fund policy because they do not understand it, and others have funded policy interventions without any clear cause prioritisation, based on whatever expertise happens to be available in house. Yet lots of people in the EA community have some policy experience and could provide feedback on a funding decision, and engaging others helps mitigate the unilateralist’s curse. (If you do not know who to talk to, contact me at policy@ealondon.com.)
  3. Allow time for policy change and do not expect rapid feedback. Feedback loops in policy are slow. There are ways of tracking gradual changes (see work on Useful Theory of Change) but policy projects may need funding for a few years to demonstrate impact.

Funders may also want to note that, as experienced EA campaigners are in short supply, projects may be competing more for staff than for funding. There are of course excellent non-EA groups focused on EA-esque policy topics such as global poverty (eg Results), animal welfare (eg CiWF, A-law), improving institutions (eg IfG), arms control (eg BASIC), etc.

What do people in the EA community get wrong?

WHAT I HAVE NOT HEARD:

Any good answers for what to do to make the world better if you have a platform or some amount of political power.

WHAT MAY BE GOING WRONG:

I would hesitantly suggest that there is not yet enough rigorous thinking about which policy actions are most valuable to support within the policy space. For example, EA cause prioritisation was originally done from the perspective of a medium-sized donor looking to make a marginal impact through existing charities, and as such is not clearly applicable to policy (eg it does not say much on areas such as financial stability). See some more thoughts on this here.


ALSO

I chatted to a researcher at an EA organisation who said they do not want to make policy suggestions as they will just get it wrong. I would hesitantly suggest that in some cases researchers are being more risk averse than is warranted.



6. Campaigners & notables: minimise the risks

Campaigners

If you are actively campaigning for a policy or have politicians trying to speak to you, then you should be considering the risks. There is more that could be said on this topic than what I put here. But a few things to consider are:

  1. Risk mitigation should be proportionate. Obviously meeting junior civil servants needs less concern about risk than setting up a conversation with the Prime Minister.
  2. Avoid the unilateralist’s curse. This is when you act in a way that others are avoiding because you underestimate a risk everyone else has considered. Implement group decision-making procedures and deliberate with others before taking action. [Edit: This can be complicated. Always following the plan of the most risk-averse group member is not a good tactic. Having the most risk-seeking group member lead on actions is also not a good tactic. I may write more on this.]
  3. Connect with and learn from expertise. Ideally, understand how to work with policy and political actors and what approaches and policies they might baulk at. If you do not have experience yourself, try to connect with people who do.
  4. Understand the local context, such as the factors listed in the previous section.
  5. Be aware of the risks listed above and take action to address them. Specific actions can be taken to mitigate each of the specific risks.

Policy researchers

If you are developing policy that someone is likely to campaign on, things to consider are:

  1. Ensure experts are involved, ideally from government. Do the background research into the topic area. Talk to people with expertise in the area of policy you are developing (and ideally in the country you are focused on). If possible co-create policies with the relevant civil servants. (In the UK you can directly email the relevant government department or get in touch with the London EA policy community via policy@ealondon.com).
  2. Understand the local context and why the existing policy is the way it is (see: Chesterton’s Fence).
  3. Design sound, robust, boring policy. Consider potential failure modes, look for crucial considerations that could change the sign of the expected impact, and suggest policy that is robust to failure. Avoid innovation: sometimes new ideas are necessary, but on the whole you can go a long way towards suggesting great policy by basing suggestions on evidence of existing best practice in a relevantly similar policy area. Also be explicit about the assumptions made.


7. Conclusion

I hope this collection of thoughts proves useful to people working in this space. Feedback always welcome.



Annex A: Reasons to be cautious of a bednets policy

It was suggested to me that the EA community should lobby governments to spend more development money on bednets. This is not necessarily a terrible idea but I had some thoughts on why this reasoning could be wrong:

1. THERE ARE KNOWN MORE EFFECTIVE THINGS TO FUND

Bednets are just the most effective intervention with a big funding gap. Eg TB treatments are more effective than bednets, but they do not have a funding gap, because governments and Gates are funding them, and so they have not been recommended by GiveWell. Shifting government funds to bednets could therefore shift funds away from these known even more effective uses of funds.

2. SWITCHING COSTS

The larger the scale of your donation, the greater the switching costs. A small donor moving funds has very little effect. But at the scale of a government, a fund move has significant costs and can do considerable damage if not well managed. It is not super obvious that bednets are sufficiently more effective to be worth the switching costs (eg see: https://www.results.org.uk/cases/mind-gap).

3. THERE ARE MORE OPTIONS OPEN TO GOVERNMENTS

Governments can do more than just offer funding. They can start new projects, exert political influence, encourage trade, or run new RCTs. This considerably widens the scope of interventions beyond those open to a donor. Maybe many of these are more effective than bednets. Eg maybe governments should be leading the way in trying new things. And moving funds to bednets could damage these other projects.

4. IT MAY BE THE WRONG APPROACH TO FOCUS ON THE BEST INTERVENTIONS

With donors you have a large pool of money that is, on the whole, being spent very poorly. So you make an impact by researching how it can be spent well and telling donors, and then some fraction is spent well. With governments (at least in the UK) you have a large pool of money that is being spent well. Maybe the best way to have an impact on that is to research where it is being spent worst and make the case against that use of funds: flipping the charity approach on its head.

5. CHESTERTON'S FENCE / EPISTEMIC HUMILITY

Don't change it unless you know why it is going wrong and why you know better.

ETC ETC


With thanks to comments from Ollie B, Jared B, Alex F and EdoArad.


Khorton @ 2019-12-09T15:44 (+19)

Thanks for writing this. I agree wholeheartedly.

It was especially refreshing when you wrote, "In advocacy policies need to be of an appropriate scale for the actor you are talking to, so only aiming for low-scale policies hampers campaigners’ ability to push for change."

I regularly have people ask me to do things that cost a few thousand pounds. In my job, I normally don't have the time or opportunity to design a policy that costs less than £10 million. Campaigners think I'm more likely to act on their ideas if they ask for a small policy, but actually the opposite is often true.

SiebeRozendal @ 2019-12-12T02:18 (+3)

Sam, this is a good post on an important topic! I believe EA's policy-thinking is very underdeveloped and I'm glad you're pulling the cart here! I look forward to seeing more posts and discussions on effective policy.

Is there an active network/meeting point for people to learn more about policy from an EA perspective?

atlasunshrugged @ 2019-12-13T11:48 (+1)

There's an EA and Policy FB group https://www.facebook.com/groups/450247668487258/

Aaron Gertler @ 2020-05-22T03:56 (+2)

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

As more EA energy goes into policy change, the community will benefit from having good heuristics about making change happen. I appreciate the author’s focus on this important goal, as well as:

  • Their use of bold text to call attention to important points.
  • Their realistic approach to risk management, and their acknowledgement that risk can’t be entirely removed from political advocacy. (I sometimes see ideas being pushed against because they are “risky”, without much consideration for how those risks might be reduced, or how they actually impact the idea’s expected value.)
  • Their willingness to call out specific ideas as being risky, and to explain the risks (rather than inventing a sample idea that doesn’t necessarily have actual supporters in the community, which is the approach I might have taken — and which I think wouldn’t have worked as well).

The winning comments

I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize.

Denkenberger @ 2019-12-13T04:24 (+2)

Thanks - very helpful. I'm curious if you think the U.S. allowing lots of immigration, in order to stay more powerful than China and possibly reduce the chance of great power war, is a terrible idea.

weeatquince @ 2019-12-16T18:42 (+3)

Hi, ditto what Khorton said. I don’t have a background that has led me to be able to opine wisely on this.

My initial intuition is: I am unconvinced by this. From a policy perspective you make a reasonable case that more immigration to the US could be very good, but unless you had more certainty about this (more research, evidence, case studies, etc), I would worry about the cost of actively pushing out a US vs China message.

But I have no expertise in US politics so I would not put much faith in my judgment.

Khorton @ 2019-12-13T18:59 (+3)

Can't answer for Sam, but I think it's really hard to know if something's a workable policy idea unless you're close to the people trying to implement it. If you're not familiar with the policy area, the best you can do is say "Yep, that's definitely terrible" or "maybe that's an okay idea but it's probably still terrible".