AI Model Registries: A Regulatory Review
By Deric Cheng, Elliot Mckernon, Convergence Analysis @ 2024-03-22T16:01 (+6)
This article is the third in a series of ~10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance (e.g. incident reporting, safety evals, model registries, etc.). We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis.
This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll publish individual posts on our website and release a comprehensive report at the end of this series.
What are model registries? Why do they matter?
Note: The phrase “model registry” may also often be used to refer to a (typically) private database of trained ML models, often used as a version control system for developers to compare different training runs. This is a separate topic from model registries for AI governance.
Model registries, in the context of AI regulation, are centralized governmental databases of AI models, intended to track and monitor AI systems, typically those in real-world use. These registries usually mandate the submission of a new algorithm or AI model to a governmental body prior to public release.
Such registries will usually require basic information about each model, such as their purpose or primary functions, their computational size, and features of their underlying algorithms. In certain cases, they may request more detailed information, such as the model’s performance under particular benchmarks, a description of potential risks or hazards that could be caused by the model, or safety assessments designed to prove that the model will not cause harm.
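To make the reporting fields above concrete, a registry entry could be modeled as a simple record. This is a minimal, hypothetical sketch: the field names and example values are illustrative assumptions, not drawn from any actual registry's schema.

```python
from dataclasses import dataclass, field

# Hypothetical schema for a model-registry entry. Field names are
# illustrative assumptions, not taken from any actual regulation.
@dataclass
class RegistryEntry:
    provider: str                  # organization releasing the model
    purpose: str                   # primary function / domain of application
    parameter_count: int           # computational size of the model
    architecture: str              # features of the underlying algorithm
    # Optional fields a registry might request in more detailed cases:
    benchmark_results: dict = field(default_factory=dict)
    risk_assessment: str = ""      # description of potential risks or hazards

# Example (invented) entry:
entry = RegistryEntry(
    provider="ExampleAI",
    purpose="general-purpose text generation",
    parameter_count=70_000_000_000,
    architecture="decoder-only transformer",
)
print(entry.provider, entry.parameter_count)
```

The split between required and optional fields mirrors the article's distinction between basic information (always required) and more detailed safety data (requested in certain cases).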
Model registries allow governmental bodies to keep track of the AI industry, providing an overview of key models currently available to the public. Such registries also function as a foundational tool for AI governance - enabling future legislation targeted at specific AI models.
These registries adhere to the governance model of “algorithms as an entry point”, allowing governments to focus their regulations on individual algorithms or AI models rather than regulating the entire corporation, access to compute resources, or creating targeted regulations for specific algorithmic use cases.
As these model registries are an emerging form of AI governance with no direct precedents, the requirements, methods of reporting, and thresholds vary wildly between implementations. Some registries may be publicly accessible, providing greater accountability and transparency, whereas others may be limited to regulatory use only. Some may mandate reporting for certain classes of AI algorithms (such as China's), whereas others may only require registration of leading AI models with high compute requirements (such as the US's).
What are some precedents for mandatory government registries?
While algorithm and AI model registries are a new domain, many precedent policies exist for tracking the development and public release of novel public products. For example, reporting requirements for pharmaceuticals are a well-established and regulated process, as monitored by the Food and Drug Administration (FDA) in the US and the European Medicines Agency (EMA) in the EU. Such registries typically require:
- Basic information, such as active ingredients, method of administration, recommended dosage, adverse effects, and contraindications.
- Mandatory clinical testing demonstrating drug safety and efficacy before public release.
- Postmarket surveillance, including requirements around incident reporting, potential investigations, and methods for drug recalls or relabeling.
Many of these structural requirements may transfer directly to model reporting, including a focus on transparent reporting, pre-deployment safety testing by unbiased third parties, and postmarket surveillance.
What are current regulatory policies around model registries?
China
The People’s Republic of China (PRC) announced the earliest and still the most comprehensive algorithm registry requirements in 2021, as part of its Algorithmic Recommendation Provisions. It has gone on to extend the scope of this registry, as its subsequent regulations covering deep synthesis and generative AI also require developers to register their AI models.
- Algorithmic Recommendation Provisions: The PRC requires that providers of algorithms with “public opinion properties or having social mobilization capabilities” report basic data such as the provider’s name, domain of application, and a self-assessment report to an algorithm registry within 10 days of publication. This requirement was primarily aimed at recommendation algorithms such as those used in TikTok or Instagram, but was later expanded to cover many different definitions of “algorithms”, including modern AI models.
- Deep Synthesis Provisions, Article 19: The PRC additionally requires that algorithms that synthetically generate novel content such as voice, text, image, or video content must be similarly filed to the new algorithm registry.
- Interim Generative AI Measures, Article 17: The PRC additionally requires that generative AI algorithms such as LLMs must be similarly filed to the new algorithm registry.
- Of note, most of the algorithms regulated here were already covered by the 2022 deep synthesis provisions, but the new Generative AI Measures more specifically target LLMs and allows for the regulation of services that operate offline.
The EU
Via the EU AI Act, the EU has opted to categorize AI systems into tiers of risk by their use cases, notably splitting permitted AI systems into “high-risk” and “limited-risk” categorizations. In particular, it requires that “high-risk” AI systems must be entered into an EU database for tracking.
- As specified in Article 60 & Annex VIII, this database is intended to be maintained by the European Commission and should contain primarily basic information, such as contact details for representatives of each AI system’s provider. It constitutes a fairly lightweight layer of tracking, and appears intended to be used primarily as a contact directory alongside other, much more extensive regulatory requirements for “high-risk” AI systems.
The US
The US has chosen to actively pursue “compute governance as an entry point” - that is, it focuses on categorizing and regulating AI models by the compute power necessary to train them, rather than by the use-case of the AI model.
- In particular, it has concentrated its binding AI regulations around restricting the export of high-end AI chips to China in preparation for a geopolitical AI arms race.
- As of Biden’s Executive Order on AI, there is now a set of preliminary rules requiring the registration of models meeting certain criteria of compute power. However, this threshold has currently been set beyond the compute power of any existing models, and as such is likely only to take effect with the next generation of LLMs.
- Section 4.2.b specifies that the reporting requirements are enforced for models trained with greater than 10^26 floating-point operations, or computing clusters with a theoretical maximum computing capacity of 10^20 floating-point operations per second.
- For comparison, GPT-4, one of today’s most advanced models, was likely trained with approximately 10^25 floating-point operations.
- Reporting requirements seem intentionally broad and extensive, specifying that qualifying companies must report on an ongoing basis:
- Section 4.2.i.a: Any ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats.
- Section 4.2.i.b: The ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights.
- Section 4.2.i.c: The results of any developed dual-use foundation model’s performance in relevant AI red-team testing.
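The Executive Order's 10^26 FLOP threshold can be sanity-checked with a rough back-of-the-envelope sketch, using the common ≈6 × parameters × training-tokens approximation for training compute. The model scales below are illustrative assumptions, not official figures for any real model.

```python
EO_REPORTING_THRESHOLD = 1e26  # FLOPs, per Section 4.2.b of the Executive Order

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute via the common ~6 * N * D rule of thumb."""
    return 6 * params * tokens

def must_report(flops: float) -> bool:
    """Would this training run cross the EO's reporting threshold?"""
    return flops > EO_REPORTING_THRESHOLD

# Illustrative (assumed) model scales, not official figures:
current_gen = training_flops(params=2e11, tokens=1e13)  # ~1.2e25 FLOP
next_gen = training_flops(params=2e12, tokens=1e14)     # ~1.2e27 FLOP

print(must_report(current_gen))  # False: below the 1e26 threshold
print(must_report(next_gen))     # True: would trigger reporting
```

This illustrates the article's point: a GPT-4-scale run (roughly 10^25 FLOP) sits about an order of magnitude below the threshold, so only substantially larger next-generation runs would trigger the reporting requirements.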
How will model registries be used in the near-term future?
Model registries appear to be a critical tool for governments to proactively enforce long-term control over AI development.
- All leading governmental bodies have now incorporated some form of a model registry as a supplement to their existing regulatory portfolio.
- In particular, the types of models that each governmental body requires to be registered are a clear indicator of its longer-term priorities when it comes to AI regulation.
- We should expect that leading governmental bodies will require additional safety assessments and recurring monitoring reports for registered models as AI capabilities accelerate.
The US, EU, and China are pursuing substantially differing goals in their approaches to model registries as an entry point to regulation.
- In China, the model registry appears to be first and foremost a tool for aligning algorithms with the political and social agendas of the Chinese Communist Party. It’s focused largely on tracking algorithmic use cases that involve recommending and generating novel content to Chinese users, particularly those with “public opinion properties” or “social mobilization capabilities”.
- In the EU, AI legislation is preoccupied primarily with protecting the rights and freedoms of its citizens. As a result, the “high-risk” AI systems for which it requires registration are confined primarily to use cases deemed dangerous in terms of reducing equity, justice, or access to basic resources such as healthcare or education.
- The US government appears to have two primary goals: to control the potential risks and distribution of frontier AI models, and to avoid limiting the current rate of AI development.
- In particular, it has decided to require registration for cutting-edge LLMs solely based on their raw performance metrics, rather than considering any specific use case, in contrast to both China and the EU.
- Additionally, it appears to be placing a priority on protecting these models from external cybersecurity threats, requiring that organizations report the measures they have taken to protect these models from being accessed or stolen. Given its current position on the export of high-end AI chips and its long history with military IP theft, it’s clear that the US views the protection of cutting-edge AI models as a matter of national security.
- Finally, none of these model registry requirements will come into effect until the next generation of frontier AI models is released sometime in 2024 or 2025. To this point, the Biden administration has cautiously avoided creating any binding regulations that might impede the rate of AI capabilities development among leading American AI labs.
Model registries will serve as a foundational tool for governments to enact additional regulations around AI development.
- Much in the same way drug registries are used as a foundational tool for the FDA to control the development and public usage of pharmaceuticals, model registries will be a critical component for governments to control public AI model usage.
- Model registries will enable the creation and improved enforcement of regulations such as:
- Mandating specific sets of pre-deployment safety assessments, or certification by certain organizations before public deployment
- Transparency requirements for AI models such as disclosures
- Incident reporting involving specific models and civil liabilities for damages caused by specific AI models
- Postmarket surveillance such as post-deployment evaluations, regulatory investigations, and the potential disabling of non-compliant or risky models
SummaryBot @ 2024-03-25T13:36 (+1)
Executive summary: Model registries are an emerging form of AI governance that require developers to submit information about AI models to a centralized database, enabling governments to track and regulate AI development.
Key points:
- Model registries require submitting basic information about AI models (purpose, size, algorithms) and sometimes more detailed data (benchmarks, risks, safety assessments).
- Registries allow governments to monitor the AI industry, target regulations at specific models, and enforce an "algorithms as entry point" governance approach.
- Precedents exist in other domains like pharmaceutical registries, which require safety testing, incident reporting, and postmarket surveillance.
- China has the most comprehensive registry requirements, the EU requires registration of "high-risk" systems, and the US focuses on compute power thresholds.
- Model registries indicate differing regulatory priorities: content control (China), citizen rights (EU), and national security (US).
- Registries will enable future regulations like mandatory safety assessments, transparency, incident reporting, and postmarket evaluations.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Heramb Podar @ 2024-03-23T16:47 (+1)
Also, here is a link if anyone wants to read more on the China AI registry which seems to be based on the model cards paper
Heramb Podar @ 2024-03-23T16:45 (+1)
Nice summarization! I generally see model registries as a good tool to ensure deployment safety by logging versions of algorithms and tracking spikes in capabilities. I think a feasible way to push this into the current discourse is by setting it in the current algorithmic transparency agenda.
Potential risks here include who decides what is a new version of a given model. If the nomenclature is left in the hands of companies, it is prone to be misused. Also, the EU AI Act seems to take a risk-based approach, with the different kinds of risks being more or less lines in the sand.
Another important point is what we do with the information we gather from these sources - I think there are "softer"(safety assessments, incident reporting) and "harder"(bans, disabling) ways to go about this. It seems likely to me that governments are going to want to lean into the softer bucket to enable innovation and have some due process kick in. This is probably more true with the US which has always favoured sector-specific regulation.