AI Incident Reporting: A Regulatory Review

By Deric Cheng, Elliot Mckernon, Convergence Analysis @ 2024-03-11T21:02 (+10)

This article is the first in a series of ~10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance (e.g. incident reporting, safety evals, model registries, etc.). We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis.

This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll publish individual posts on our website and release a comprehensive report at the end of this series.

Let us know in the comments if this format is useful, if there are any topics you'd like us to cover, or if you spy any key errors / omissions!

Context

AI incident reporting refers to an emerging set of voluntary practices and regulatory requirements for AI labs to document any unexpected events, malfunctions, or adverse effects that arise from the deployment of AI systems. Such mechanisms are designed to capture a wide range of potential issues, from privacy breaches and security vulnerabilities to biases in decision-making.

The rationale behind incident reporting is to create a feedback loop where regulators, developers, and the public can learn from past AI deployments, leading to continuous improvement in safety standards and compliance with legal frameworks. By systematically documenting incidents, stakeholders can identify patterns, understand the root causes of failures, and implement corrective measures to prevent recurrence.

Historically, incident reporting has been a highly effective risk-mitigation tool, used for decades across industries such as aviation and workplace safety to manage still-developing technologies.

Incident reporting in AI is still in its nascent stages, with a variety of approaches being explored globally. The specific requirements for incident reporting, such as the types of incidents that must be reported, the timeframe for reporting, and the level of detail required, can vary significantly between jurisdictions and sectors.
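To make those three dimensions concrete, below is a minimal sketch of what a single incident record might contain. Every name and field here is a hypothetical illustration, not any jurisdiction's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class IncidentType(Enum):
    """Illustrative incident categories, echoing the examples above."""
    PRIVACY_BREACH = "privacy_breach"
    SECURITY_VULNERABILITY = "security_vulnerability"
    BIASED_DECISION = "biased_decision"
    OTHER = "other"


@dataclass
class IncidentReport:
    """A minimal, hypothetical record of a reportable AI incident.

    Real schemas differ by jurisdiction and sector; these fields simply
    mirror the three dimensions above: incident type, reporting
    timeframe, and level of detail.
    """
    system_name: str                 # the deployed AI system involved
    incident_type: IncidentType      # the kind of failure that occurred
    occurred_at: datetime            # when the incident happened
    reported_at: datetime            # when it reached the authority
    description: str                 # free-text detail for investigators
    corrective_actions: list[str] = field(default_factory=list)

    def reporting_delay_hours(self) -> float:
        """Hours between occurrence and report, i.e. the 'timeframe' a regulator would audit."""
        return (self.reported_at - self.occurred_at).total_seconds() / 3600
```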

The most prominent public example of an AI incident reporting tool today is the AI Incident Database, launched by the Responsible AI Collaborative. This database crowdsources incident reports involving AI technologies as documented in public sources and news articles. It’s used as a tool to surface broad trends and individual case studies regarding AI safety incidents. As a voluntary public database, it doesn’t adhere to any regulatory standards, nor does it require input or resolution from the developers of the AI tools involved.
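As a rough illustration of how such a database can surface broad trends, the sketch below tallies incidents by category from a hypothetical CSV export. The file name and column names are assumptions for illustration, not the AI Incident Database's actual export format:

```python
import csv
from collections import Counter


def incident_counts_by_category(path: str) -> Counter:
    """Tally incidents per category from a CSV export.

    Assumes one row per incident with a 'category' column; the real
    AI Incident Database export format may differ.
    """
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["category"]] += 1
    return counts


# Example usage with a hypothetical export file:
# for category, n in incident_counts_by_category("incidents.csv").most_common(5):
#     print(f"{category}: {n}")
```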

Current Regulatory Policies

China

The PRC appears set on developing a governmental incident reporting database: on December 20th, 2023, it announced a new set of Draft Measures on the Reporting of Cybersecurity Incidents. The draft measures classify cybersecurity incidents into four severity tiers (“Extremely Severe”, “Severe”, “Relatively Severe”, and “General”) and require that incidents in the top three tiers (collectively, “Critical Incidents”) be reported to governmental authorities within one hour of occurrence. The draft measures enumerate specific criteria that qualify an incident as “Critical”.
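As a toy sketch of this tiered structure (the tier names come from the draft measures; the mapping logic below is our simplification for illustration, and the draft's actual criteria and procedures are considerably more detailed):

```python
from enum import Enum


class Severity(Enum):
    """The four severity tiers named in the draft measures."""
    EXTREMELY_SEVERE = "Extremely Severe"
    SEVERE = "Severe"
    RELATIVELY_SEVERE = "Relatively Severe"
    GENERAL = "General"


# Under the draft measures, the top three tiers ("Critical Incidents")
# must be reported to governmental authorities within one hour.
CRITICAL_TIERS = {
    Severity.EXTREMELY_SEVERE,
    Severity.SEVERE,
    Severity.RELATIVELY_SEVERE,
}


def reporting_deadline_hours(severity: Severity) -> float | None:
    """Return the reporting deadline in hours, or None where the draft's
    one-hour rule does not apply (how 'General' incidents are handled is
    not modeled here)."""
    return 1.0 if severity in CRITICAL_TIERS else None
```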

Though this set of measures does not directly mention frontier AI models as a target for enforcement, any qualifying negative outcome resulting from the use of frontier AI models would be reportable under the same framework. This draft measure can be understood as the Cyberspace Administration of China (CAC) pursuing two major goals:

  1. Seeking to consolidate and streamline a variety of disparate reporting requirements across various laws around cybersecurity incidents.
  2. Seeking to provide further regulatory infrastructure in preparation for an evolving cybersecurity landscape, particularly with respect to advanced AI technologies.

Elsewhere, the leading Chinese AI regulatory measures each make reference to reporting key events (specifically, the distribution of unlawful information) to the Chinese government, but none of them includes specific requirements for the creation of an incident reporting database.

The EU

The EU AI Act requires that developers of both "high-risk" AI systems and “general purpose AI” (“GPAI”) systems set up internal tracking and reporting systems for “serious incidents” as part of their post-market monitoring infrastructure. 

As defined in Article 3(44), a “serious incident” is any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

  1. the death of a person, or serious damage to a person’s health;
  2. a serious and irreversible disruption of the management or operation of critical infrastructure;
  3. the infringement of obligations under Union law intended to protect fundamental rights;
  4. serious harm to property or the environment.

In the event that such an incident occurs, Article 62 requires that the developer report it to the relevant authorities (specifically, the European Data Protection Supervisor) and cooperate with them on an investigation, risk assessment, and corrective action. The article also specifies time limits and particular reporting obligations.
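Schematically, the reporting trigger might be modeled as below. This is a simplified sketch of the Article 3(44)/Article 62 logic as described above, not the Act's legal test:

```python
from enum import Enum, auto


class SeriousOutcome(Enum):
    """The Article 3(44) outcome categories, simplified."""
    DEATH_OR_HEALTH_HARM = auto()
    CRITICAL_INFRASTRUCTURE_DISRUPTION = auto()
    FUNDAMENTAL_RIGHTS_INFRINGEMENT = auto()
    PROPERTY_OR_ENVIRONMENT_HARM = auto()


def obligations(outcomes: set[SeriousOutcome]) -> list[str]:
    """Sketch the developer's Article 62 obligations: if the incident
    leads, directly or indirectly, to any Article 3(44) outcome, it is
    'serious' and must be reported and followed up."""
    if not outcomes:
        return []  # not a serious incident; internal tracking only
    return [
        "report to the relevant authorities within the specified time limit",
        "cooperate on an investigation",
        "cooperate on a risk assessment",
        "cooperate on corrective action",
    ]
```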

The US

The US currently has no enacted or proposed legislation establishing reporting databases for AI-related incidents. However, the Executive Order on AI contains some preliminary language directing the Secretary of Health and Human Services (HHS) and the Secretary of Homeland Security to establish new programs within their respective agencies. These directives essentially request the creation of domain-specific incident databases, in areas such as AI-related IP theft and healthcare.

Convergence’s Analysis

Within the next 2-3 years, the US, EU, and China will each have established mandatory incident reporting requirements for AI service providers covering “severe” incidents involving AI technologies.

However, such governmental compliance requirements represent only the minimum base layer of an effective network of incident reporting systems for mitigating risk from AI technologies. In particular, several notable precedents from other domains of incident reporting, such as voluntary reporting systems, near-miss reporting, and international coordination, have yet to be developed or addressed by the AI governance community.


SummaryBot @ 2024-03-12T15:47 (+1)

Executive summary: AI incident reporting is an emerging practice of documenting unexpected events or adverse effects from AI systems, and current regulations in China, the EU, and US are beginning to establish requirements for severe incidents, but voluntary reporting systems and international coordination are still lacking.

Key points:

  1. Incident reporting creates a feedback loop for stakeholders to learn from AI failures and implement corrective measures. It has been effective in other industries like aviation and workplace safety.
  2. China's draft cybersecurity measures require reporting critical AI incidents within 1 hour, and other Chinese AI regulations mention reporting unlawful information.
  3. The EU AI Act requires developers to report serious incidents that lead to death, health damage, infrastructure disruption, rights violations, or property/environmental damage.
  4. The US lacks AI-specific incident reporting legislation, but has some preliminary directives for domain-specific incident databases in areas like IP theft and healthcare.
  5. In the next 2-3 years, the US, EU and China will likely establish mandatory reporting requirements for severe AI incidents, enforced through fines. However, voluntary reporting systems, near-miss reporting, and international coordination are critical gaps that still need to be addressed.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.