AI-Risk in the State of the European Union Address

By Sam Bogerd @ 2023-09-13T13:27 (+25)

This is a linkpost to

Every year in September, the President of the European Commission (the EU's executive branch) addresses the European Parliament and sets out the Commission's plan for the coming year. This year there was a surprising focus on the risks caused by AI, including a quotation of the CAIS statement. The section on AI & Digital contained the following text, as released by the European Commission (highlights are mine):

Honourable Members, 

When it comes to making business and life easier, we have seen how important digital technology is. 

It is telling that we have far overshot the 20% investment target in digital projects of NextGenerationEU. 

Member States have used that investment to digitise their healthcare, justice system or transport network.

At the same time, Europe has led on managing the risks of the digital world. 

The internet was born as an instrument for sharing knowledge, opening minds and connecting people. 

But it has also given rise to serious challenges.

Disinformation, spread of harmful content, risks to the privacy of our data. 

All of this led to a lack of trust and a breach of fundamental rights of people. 

In response, Europe has become the global pioneer of citizens' rights in the digital world. 

The DSA and DMA are creating a safer digital space where fundamental rights are protected. 

And they are ensuring fairness with clear responsibilities for big tech. 

This is a historic achievement – and we should be proud of it. 

The same should be true for artificial intelligence.  

It will improve healthcare, boost productivity, address climate change.

But we also should not underestimate the very real threats. 

Hundreds of leading AI developers, academics and experts warned us recently with the following words:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

AI is a general technology that is accessible, powerful and adaptable for a vast range of uses - both civilian and military. 

And it is moving faster than even its developers anticipated. 

So we have a narrowing window of opportunity to guide this technology responsibly.

I believe Europe, together with partners, should lead the way on a new global framework for AI, built on three pillars:  guardrails, governance and guiding innovation. 

Firstly, guardrails. 

Our number one priority is to ensure AI develops in a human-centric, transparent and responsible way.

This is why in my Political Guidelines, I committed to setting out a legislative approach in the first 100 days.

We put forward the AI Act – the world's first comprehensive pro-innovation AI law.

And I want to thank this House and the Council for the tireless work on this groundbreaking law.  

Our AI Act is already a blueprint for the whole world.

We must now focus on adopting the rules as soon as possible and turn to implementation.


The second pillar is governance. 

We are now laying the foundations for a single governance system in Europe.

But we should also join forces with our partners to ensure a global approach to understanding the impact of AI in our societies.

Think about the invaluable contribution of the IPCC for climate, a global panel that provides the latest science to policymakers.

I believe we need a similar body for AI – on the risks and its benefits for humanity.

With scientists, tech companies and independent experts all around the table. 

This will allow us to develop a fast and globally coordinated response – building on the work done by the Hiroshima Process and others.

The third pillar is guiding innovation in a responsible way.

Thanks to our investment in the last years, Europe has now become a leader in supercomputing – with 3 of the 5 most powerful supercomputers in the world.

We need to capitalise on this. 

This is why I can announce today a new initiative to open up our high-performance computers to AI start-ups to train their models.

But this will only be part of our work to guide innovation.

We need an open dialogue with those that develop and deploy AI.

It happens in the United States, where seven major tech companies have already agreed to voluntary rules around safety, security and trust. 

It happens here, where we will work with AI companies, so that they voluntarily commit to the principles of the AI Act before it comes into force.

Now we should bring all of this work together towards minimum global standards for safe and ethical use of AI.