Effective policy? Requiring liability insurance for dual-use research

By Owen Cotton-Barratt @ 2014-10-01T18:36 (+10)

Hi all,

I thought people might be interested in some of the policy work the Global Priorities Project has been looking into. Below I'm cross-posting some notes on one policy idea. I've talked to several people with expertise in biosafety and had positive feedback, and am currently looking into how best to push further on this (it will involve talking to people in the insurance industry).

In general, quite a bit of policy is designed by technocrats and is already quite effective. Other areas are governed by public opinion, which makes it very hard to gain any traction. When we've looked into policy, we've been interested in finding areas which navigate between these extremes -- and which don't sound too outlandish, so that they have a reasonable chance of broad support.

I'd be interested in hearing feedback on this from EAs. Criticisms and suggestions also very much welcome!

---

Requiring liability insurance for dual-use research with potentially catastrophic consequences

These are notes on a policy proposal aimed at reducing catastrophic risk. They cover some of the advantages and disadvantages of the idea at a general level; they do not yet constitute a proposal for a specific version of the policy.

Research produces large benefits. In some cases it may also pose novel risks, for instance work on potential pandemic pathogens. There is widespread agreement that such 'dual use research of concern' poses challenges for regulation.

There is a convincing case that we should avoid research with large risks if we can obtain the benefits just as effectively with safer approaches. However, there do not currently exist natural mechanisms to enforce such decisions. Government analysis of the risk of different branches of research is a possible mechanism, but it must be performed anew for each risk area, and may be open to political distortion and accusations of bias.

We propose that all laboratories performing dual-use research with potentially catastrophic consequences should be required by law to hold insurance against damaging consequences of their research.

This market-based approach would force research institutions to internalise some of the externalities of their research.

Current safety records do not always reflect an appropriate level of risk tolerance. For example, the economic damage caused by the escape of the foot and mouth virus from a BSL-3 or BSL-4 lab in Britain in 2007 was high (mostly through trade barriers) and could have been much higher (the previous outbreak in 2001 caused £8 billion of damage). If the lab had known they were liable for some of these costs, they might have taken even more stringent safety precautions. In the case of potential pandemic pathogen research, insurers might require it to take place in BSL-4 or to implement other technical safety improvements such as “molecular biocontainment”.

Possible criticisms and responses

 


undefined @ 2014-10-02T03:00 (+7)

Excellent idea! The rest of this comment is going to be negative, but my balance of opinion is not reflected in the balance of words.

One potential downside is that dangerous research would move to other countries. However, this effect would be reduced by the dominance of the anglosphere in many areas of research. Additionally, some research with the potential to cause local but not global disasters represents a national but not international externality, in which case other countries are also appropriately incentivised to adopt sensible precautions. So on net this does not seem to be a very big concern.

Another is the lack of any appreciation for public choice in this argument. Yes, I agree this policy would be good if implemented exactly as described. But policies that are actually implemented rarely bear much resemblance to the policies originally advocated by economists. Witness the huge gulf between the sort of financial reform anyone actually advocates and Dodd-Frank, which as far as I'm aware satisfied basically no-one who was familiar with it. The relevant literature here is public choice. So here are some ways this could be misimplemented:

Why require insurance rather than just impose liability? Shouldn’t this be a decision for the individuals?

Some work may be sufficiently risky that the actors cannot afford to self-insure. In such circumstances it makes sense to require insurance (just as we require car insurance for drivers).

Drivers are generally individuals, whereas research is generally done by institutions. It seems plausible to me that creditworthy institutions/individuals should not have to take out car insurance. If Oxford faced a potential liability in the billions, I'm sure it would insure. I guess the main threat comes from small, limited liability institutions whose only purpose is to do this one kind of research, and are thus unconcerned with the downside. Or large institutions with poor internal governance.

undefined @ 2014-10-02T09:58 (+3)

It seems plausible to me that creditworthy institutions/individuals should not have to take out car insurance. If Oxford faced a potential liability in the billions, I'm sure it would insure. I guess the main threat comes from small, limited liability institutions whose only purpose is to do this one kind of research, and are thus unconcerned with the downside. Or large institutions with poor internal governance.

I agree that in general it's fine for creditworthy institutions to self-insure. The issue is that the scale of possible liability is large enough (billions of dollars, perhaps hundreds of billions of dollars) that even institutions which routinely self-insure against other risks may not be creditworthy against the worst outcomes. In some cases they are explicitly or implicitly state-backed, but if nobody in the chain has considered the possible liability you don't get the proper incentive effects. If there were a market so that the risk of the research were priced, I'd expect better governance even at institutions which self-insured.

undefined @ 2014-10-02T09:54 (+2)

I agree that there are some issues regarding the version of the policy that would actually be implemented. This is a large part of the motivation for requiring insurance rather than direct state regulation, and I think it offers a robustness which goes some way towards defusing your concerns.

For example:

Politically unpopular research is crushed by being deemed dangerous. Obvious targets include research into nuclear power, racial differences, or GMOs.

If there's just an insurance requirement, it's hard for extra costs to swell much above the true expected externalities (if it's safe, they should be able to find someone willing to insure it cheaply).

undefined @ 2014-10-04T00:40 (+2)

Yup, I agree again. Though there is still the risk that the political system might manufacture externalities to accuse the researchers of.

undefined @ 2014-10-02T03:47 (+2)

If Oxford faced a potential liability in the billions, I'm sure it would insure.

The managers of Harvard's endowment circa 2008 would beg to differ, I think. (It lost about $10 billion, nearly a third of its value.)

It seems like for some of these institutions, how long of a view they take is substantially determined by contingent factors like who's the university president at the time.

undefined @ 2014-10-02T21:50 (+3)

I worry about the "that will never happen" effect. Mandating that researchers take out the insurance prevents it being dismissed on that front, but how do we make the insurance agencies take it seriously?

It seems all too plausible that the insurer will just say "this will never happen, and if it does it will be unusual enough that we can probably hold the whole thing up in court - just give them a random number". For a big enough risk, if it happens then the insurer might expect to cease to exist in the upheaval, which also doesn't give them much incentive to give a good estimate.

In general, I'm not sure whether insurers are quite robust enough institutions to be likely to have rational decision procedures over risks that are this big and unlikely.

undefined @ 2014-10-04T00:43 (+1)

This is why the reinsurance market exists.

undefined @ 2014-10-03T11:09 (+1)

I agree that the process isn't going to be perfect. But the relevant question is whether it's sufficiently better than the status quo.

For what it's worth, I think the insurers may be more likely to over-hedge and only offer insurance at unreasonably high prices. That might be less of a problem (or it might make this whole thing politically infeasible).

undefined @ 2014-10-02T08:55 (+2)

Really interesting idea.

Two questions:

undefined @ 2014-10-02T10:06 (+3)

It seems to me that this kind of policy would risk decreasing the amount of research done on natural pandemics. If anything, this seems to be the kind of research there should be more rather than less of.

Interesting. I'm not sure whether we should expect it to decrease or increase the safe research done on natural pandemics. I would guess increase it slightly. There is quite a lot of research in this area with essentially no risk. This paper does a good job of explaining alternatives.

undefined @ 2014-10-02T10:03 (+1)

Not knowing anything about the insurance industry, I'm wondering whether the market for this type of insurance would be big enough for insurers to be willing to offer it.

Yes, there's a possible issue here. Insurers already have models for the effects of natural pandemics; pricing insurance on the research would need additional models for the chance of accidental release. It might be possible to subsidise this modelling as a public good, if that were required to enable a market.
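As a toy illustration of the pricing problem described above (all figures here are hypothetical, chosen only to show the shape of the calculation), an insurer's premium for this kind of liability cover would be roughly the expected annual loss plus a loading for the insurer's costs and risk aversion:

```python
# Minimal sketch of actuarial premium pricing for lab liability cover.
# All inputs are illustrative assumptions, not real estimates.

def annual_premium(p_release, expected_damage, loading=0.5):
    """Expected annual loss times (1 + loading).

    p_release       -- assumed annual probability of an accidental release
    expected_damage -- assumed expected economic damage given a release
    loading         -- insurer's margin over the expected loss
    """
    return p_release * expected_damage * (1 + loading)

# e.g. a hypothetical lab with a 1-in-10,000 annual release chance and
# £8bn expected damage given release (the 2001 foot-and-mouth figure):
premium = annual_premium(1e-4, 8e9)  # roughly £1.2m per year
```

The hard part, as noted, is not this arithmetic but estimating `p_release` and `expected_damage`, which is where subsidised public modelling could help.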

undefined @ 2014-10-04T21:12 (+1)

Companies like Berkshire Hathaway are generally happy to write one-off policies for strange and unusual risks, so it seems there wouldn't be much trouble getting insurance companies interested in serving this market.

undefined @ 2014-10-02T12:50 (+1)

I think this is an excellent idea but one thing I didn't understand: you said "catastrophic" risks and then mentioned foot and mouth disease which doesn't seem very catastrophic to me.

Are you proposing this for what the EA community would call "existential" risks (e.g. unfriendly AI)? Or just things on the order of a few billion dollars of damage?

undefined @ 2014-10-02T14:26 (+3)

This is really aimed at things which could cause damages in perhaps the $100 million - $1 trillion range. I think this would have a broadly positive effect on larger risks through two routes:

First, some larger risks come with associated smaller-scale risks, and you'd do similar things to reduce each of them. I think this is the case with the potential pandemic pathogen research. Requiring liability insurance won't get people to fully internalise the externalities associated with the tail risk, but it should make them take substantial steps in the right direction.

Second, a society which takes seriously a wider class of unprecedented, low-probability, high-stakes risks will probably be better at responding to existential risks as well.