EU's importance for AI governance is conditional on AI trajectories - a case study

By MathiasKBπŸ”Έ @ 2022-01-13T14:58 (+31)

The goal of this post is to show that AI trajectories matter a great deal when evaluating how an institution can be expected to influence AI governance. To show this, I argue that the importance of the Brussels Effect, one of the EU's levers of influence, is highly conditional on which AI trajectory we assume.

For those interested in learning more about the Brussels Effect specifically, I know someone else is working on a paper that provides a much better analysis than I do here.

What is the Brussels Effect?

The Brussels Effect describes the phenomenon whereby the European Union's regulation spreads to the rest of the world: other jurisdictions adopt similar laws, and companies follow EU regulation even outside of Europe.

When larger economies such as the US or China don't regulate an area, regulation defaults to the EU's. This is the case for privacy, where, for example, Microsoft decided to make all its services GDPR-compliant worldwide rather than just for its EU users. Even though the EU's regulation applies only to EU citizens, the rest of the world often becomes subject to it regardless. Other governments are also often inspired by EU regulation when writing their own.

When does the Brussels Effect take place?

Five requirements must be met for the Brussels Effect to take place:[1]

Ideology & interest

Is the European Union interested in regulating AI?

Sufficient market size of the EU

Is the EU market large enough that providers of AI services will spend the resources necessary to comply with EU regulation?

Regulatory capacity

Do the EU institutions have the capacity and mandate to regulate AI?

Inelastic targets

Are those affected by the EU's regulation able to simply move elsewhere to avoid it?

For a government to leave the Union in order to use unregulated AI, the benefits of unregulated AI must outweigh the benefits of membership. The same applies to European citizens and companies, for whom unregulated AI must outweigh the cost of moving to a non-EU country.

Non-divisibility

Are companies able to cheaply divide their services into one version that is compliant with EU regulation and another non-compliant version for the rest of the world?

The extent to which the Brussels Effect will affect AI governance is conditional on how AI development progresses. To illustrate why, imagine two scenarios: one with a slow, continuous AI take-off, the other with a fast, discontinuous take-off.

Slow take-off

In this scenario, AI capability progresses with a slow take-off speed, and AI development is primarily driven by private enterprise. Progress comes in the form of incremental improvements, each system better than the last. In such a world there is good reason to believe the Brussels Effect will promulgate European AI regulation to the rest of the world, as all five requirements are met.

Fast take-off

In this scenario there is a fast take-off, and AI development is primarily driven by governments and a few enterprises racing for a discontinuous payoff, where the winner largely takes all. In this world, AI development looks more like the Manhattan Project than like companies pursuing ever-improving iterations of GPT.

The EU has failed at many of its external agendas: it has been unable to abolish torture, solve migration crises, or achieve nuclear disarmament. These are issues where European policy failed to meet the five requirements needed for the Brussels Effect to take place. In a fast take-off world, I similarly expect Europe's influence to be reduced, as EU AI regulation in this scenario fails to meet three of the five requirements.


The two scenarios can be compared in the following table:

| Requirement | Slow take-off | Fast take-off |
| --- | --- | --- |
| Ideology & interest | βœ“ The EU is trying to regulate AI with the AI Act, and we can expect it to continue doing so | βœ“ No difference |
| Sufficient market size of the EU | βœ“ The EU market is large enough that companies are unlikely to forego it to avoid regulation | βœ— Whichever government or company first develops transformative AI stands to gain so much profit and power that foregoing the European market is worthwhile if it means winning the race |
| Regulatory capacity | βœ“ The EU can expect to have the regulatory capacity, as one of its core competencies is maintaining the European single market | βœ— The Council will block regulation attempts that conflict with the national interests of EU member states, and in a race to AGI between non-EU nations, the EU institutions have no diplomatic tools powerful enough to significantly alter the conflict |
| Inelastic targets | βœ“ For governments, the benefits of using non-compliant AI are unlikely to outweigh the benefits of EU membership; the same goes for European citizens and companies | βœ— Companies whose AGI development is slowed by EU regulation will move elsewhere, or be beaten to the punch by those that do. There is no strong profit motive for complying with European regulation |
| Non-divisibility | βœ“ Using GDPR as a historical precedent, we can expect companies with major AI products to prefer developing a single EU-compliant version rather than splitting their development efforts into multiple versions or foregoing the EU market | βœ“ No difference |

Your beliefs about the importance of the Brussels Effect should update according to your expectations of how AI will be developed.

I believe that better outlining the axes along which AI trajectories can differ, and how those axes affect the levers of influence, is an important step toward evaluating the EU's importance for global AI governance. Hopefully this post has given an idea of why I think so.

  1. ^

    These criteria are identified by Anu Bradford in her seminal book on the topic, The Brussels Effect: How the European Union Rules the World. She divides the Brussels Effect into a de facto and a de jure effect. For the purposes of this post, the Brussels Effect refers only to the de facto effect.


MaxRa @ 2022-01-25T15:10 (+8)

Thanks for the post, I think it's really useful to get a better picture of interactions like these.

I wonder whether I really expect companies to end up being that averse to AI regulation: