EU AI Act now has a section on general purpose AI systems

By MathiasKB🔸 @ 2021-12-09T12:40 (+64)

The EU AI Act is currently undergoing the ordinary legislative procedure, in which the European Parliament and Council can propose changes to the act.

A brief summary of the act: systems defined as high-risk will be required to undergo conformity assessments, which among other things require the system to be monitored and to have a working off-switch (longer summary and analysis here).

The Council's amendments have recently been circulated. Most importantly for longtermists, they include a new section on general purpose AI systems. For the first time ever, regulating general AI is on the table, and for an important government as well!

The article reads:

Article 52a - General purpose AI systems

  1. The placing on the market, putting into service or use of general purpose AI systems shall not, by themselves only, make those systems subject to the provisions of this Regulation.
  2. Any person who places on the market or puts into service under its own name or trademark or uses a general purpose AI system made available on the market or put into service for an intended purpose that makes it subject to the provisions of this Regulation shall be considered the provider of the AI system subject to the provisions of this Regulation.
  3. Paragraph 2 shall apply, mutatis mutandis, to any person who integrates a general purpose AI system made available on the market, with or without modifying it, into an AI system whose intended purpose makes it subject to the provisions of this Regulation.
  4. The provisions of this Article shall apply irrespective of whether the general purpose AI system is open source software or not.


Or in plain English: general purpose AI systems will not be considered high-risk unless they are explicitly intended to be used for a high-risk purpose.

What are your reactions to this development?


weeatquince @ 2021-12-09T16:30 (+22)

Thank you for the update – super helpful to see.


What are your reactions to this development?

My overall views are fairly neutral. I lean in favour of this addition, but honestly it could go either way in the long run.


The addition means developers of general AI will basically be unregulated. On the one hand, being totally unregulated is bad, as it removes the possible advantages of oversight and so on. On the other hand, regulating general AI in a way similar to how this act regulates high-risk AI would be the wrong way to regulate general AI.

In my view, no regulation seems better than inappropriate regulation, and it still leaves the door open to good regulatory practice. Someone else could argue that restrictive, inappropriate regulation would slow down EU progress on general AI research, and that this would be good. I can understand the case for that, but I think the evidence for the value of slowing EU general AI research is weak, and my general preference for not building inappropriate or broken systems is stronger.


(Also the addition removes the ambiguity that was in the act as to whether it applied to general AI products, which is good as legal clarity is good.)

rohinmshah @ 2021-12-09T16:44 (+15)

How do they define general purpose AI systems?

Samuel Curtis @ 2021-12-11T00:16 (+11)

From (70a): "In the light of the nature and complexity of the value chain for AI systems, it is essential to clarify the role of persons who may contribute to the development of AI systems covered by this Regulation, without being providers and thus being obliged to comply with the obligations and requirements established herein. In particular, it is necessary to clarify that general purpose AI systems - understood as AI system [sic] that are able to perform generally applicable functions such as image/speech recognition, audio/video generation, pattern detection, question answering, translation etc. - should not be considered as having an intended purpose within the meaning of this Regulation."

Guy Raveh @ 2021-12-11T10:24 (+3)

This is somewhat strange to me: even within the limited scope of short-term worries about AI, I could imagine many problems in deployed systems stemming from their general-purpose components, such as bias in image recognition models.

Davidmanheim @ 2021-12-12T07:15 (+2)

So... GPT-3. That's what they mean by AGI.

Charles He @ 2021-12-12T08:24 (+3)

GPT-3 does question answering, translation.

It seems like the exclusions could cover all commercially relevant "AI systems" or machine learning.

SamClarke @ 2021-12-13T19:13 (+14)

I think equally important for longtermists is the new requirement for the Commission to consider updating the definition of AI, and the list of high-risk systems, every 24 months. If you buy that adaptive/flexible/future-proof governance will be important for regulating AGI, then this looks good.

(The basic argument for this instance of adaptive governance is something like: AI progress is fast and will only get faster, so having relevant sections of regulation come up for mandatory review every so often is a good idea, especially since policymakers are busy so this doesn't tend to happen by default.)

Relevant part of the doc:

  1. As regards the modalities for updates of Annexes I and III, the changes in Article 84 introduce a new reporting obligation for the Commission whereby it will be obliged to assess the need for amendment of the lists in these two annexes every 24 months following the entry into force of the AIA.

MathiasKB @ 2021-12-09T12:50 (+13)

My own opinion is that it is a double-edged sword.

The Council's change on its own weakens the act, and will allow companies to avoid conformity assessments for exactly the AI systems that need them the most.

But the new article also makes it possible to impose requirements that solely affect general purpose systems, without burdening the development of all other low-risk AI with unnecessary requirements.

MarkusAnderljung @ 2021-12-20T13:45 (+12)

Overall, I think it's not that surprising that this change is being proposed, and I think it's fairly reasonable. However, I do think it should be complemented with duties to avoid e.g. AI systems being put to high-risk uses without going through a conformity assessment, and it should be made clear that certain parts of the conformity assessment will require changes on the part of the producer of a general system if that system is used to produce a system for a high-risk use.

In more detail, my view is that the following changes should be made:

Goal 1: Avoid general systems being used without the appropriate regulatory burdens kicking in. There are two kinds of cases one might worry about: (i) general systems might make it easier to produce a system that should be covered either by the transparency requirements (e.g. if your system is a chatbot, you need to tell the user that) or by the high-risk requirements, leading to more such systems being put on the market without being registered.

Proposed solution: Make it the case that providers of general systems must do certain checks on how their model is being used and whether it is being used for high-risk uses without that AI system having been registered or having gone through the conformity assessment. Perhaps this would be done by giving the market surveillance authorities (MSAs) the right to ask providers of general models for certain information about how the model is being used. In practice, it could look as follows: the provider of the general system could have various ways to try to detect whether someone is using their system for something high-risk (companies like OpenAI are already developing tools and systems to do this). If they detect such a use, they are required to check it against the database of high-risk AI systems deployed on the EU market. If there's a discrepancy, they must report it to the MSA and share some of the relevant information as evidence.
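As a rough illustration of the monitoring workflow sketched above, here is a minimal code sketch. Everything in it is hypothetical: the record fields, the registry lookup, and the report format are assumptions made for the example, not anything specified in the Act or implemented by any provider.

```python
from dataclasses import dataclass

@dataclass
class DetectedUse:
    """A suspected high-risk use of a general purpose system (hypothetical record)."""
    customer_id: str
    description: str      # e.g. "CV screening for hiring decisions"
    risk_category: str    # e.g. "employment" (an Annex III area)

def is_registered(use: DetectedUse, registered_systems: set) -> bool:
    """Check the detected use against a (hypothetical) database of registered high-risk systems."""
    return use.customer_id in registered_systems

def review_detected_uses(detected: list, registered_systems: set) -> list:
    """Return reports for uses that look high-risk but have no matching registration."""
    reports = []
    for use in detected:
        if not is_registered(use, registered_systems):
            # Discrepancy: apparent high-risk use with no registered system -> report to the MSA.
            reports.append({
                "customer_id": use.customer_id,
                "risk_category": use.risk_category,
                "evidence": use.description,
            })
    return reports

# Example: one registered deployer, one unregistered one.
detected = [
    DetectedUse("acme-hr", "CV screening for hiring decisions", "employment"),
    DetectedUse("medco", "triage advice in a medical chatbot", "medical"),
]
reports = review_detected_uses(detected, registered_systems={"acme-hr"})
print(reports)  # only the unregistered "medco" use would be reported
```

The hard part in practice would of course be the detection step itself, which the sketch takes as given.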

(ii) There’s a chance that individuals using general systems for high-risk uses, without placing anything on the market, will not be covered by the regulation. That is, as the regulation is currently designed, if a company were to use public CCTV footage to assess the number of women vs. men walking down a street, I believe that would be a high-risk use. But if an individual does it, it might not count as a high-risk use because nothing is placed on the market. This could end up being an issue, especially if word about these kinds of use cases spreads. Perhaps a more compelling example would be people starting to use large language models as personal chatbots. The proposed regulation wouldn’t require the provider of the LLM to add any warnings about how this is simply a chatbot, even if the user starts e.g. using it as a therapist or for medical advice.

Proposed solution: My guess is that the provision suggested above should be expanded to also look for individuals using the systems for high-risk or limited-risk uses, and that providers should be required to stop such use.

Goal 2: (perhaps most important) Try to make it the case that crucial and appropriate parts of the conformity assessment will require changes on the part of the producer of the general system.

This could be done by e.g. making it the case that the technical documentation requires information that only the producer of the general model would have. That would plausibly already be the case with regard to the data requirements, and plausibly also with regard to robustness. It seems worth making sure of those things. I don't know whether that's a matter of changing the text of the legislation itself or of how the legislation will end up being interpreted.

One way to make sure that this is the case is to require that deployers only use general models that have gone through a certification process or that have also passed the conformity assessment (or perhaps a lighter version). I’m currently excited about the latter.

Why am I not excited about something more onerous on the part of the provider of the general system?

rohinmshah @ 2021-12-11T14:59 (+10)

For the first time ever, regulating general AI is on the table, and for an important government as well!

Given the definition of general AI that they use, I do not expect this regulation to have any more to do with AGI alignment than the existing regulation of "narrow" systems.

(This isn't to say it's irrelevant, just that I wouldn't pay specific attention to this part of the regulation over the rest of it.)