The Need for an Effective AI Incident Reporting Framework

By Strad Slater @ 2025-11-13T08:53 (+2)

This is a linkpost to https://williamslater2003.medium.com/the-need-for-an-effective-ai-incident-reporting-framework-e888ad6dd54e

Quick Intro: My name is Strad and I am a new grad working in tech who wants to learn and write more about AI safety and how tech will affect our future. I'm trying to challenge myself to write a short article a day to get back into writing. I would love any feedback on the article and any advice on writing in this field! 


AI is becoming more ingrained in society. Self-driving cars flood the streets of major US cities. Most major search engines now build AI assistants into their results. YouTube and TikTok are filled with AI-generated videos.

The more AI is dispersed throughout society, the more AI-related incidents are likely to occur. Plenty of newsworthy examples come to mind, such as ChatGPT encouraging and teaching people how to harm themselves, or xAI’s model, Grok, calling itself “MechaHitler” as it spewed antisemitic remarks on X. Between 2010 and 2023, AI-related incidents increased tenfold.

Given that this trend is likely to continue, it would be useful to have some way to analyze these incidents and learn from our mistakes. This motivation sparked two reports from the Center for Security and Emerging Technology (CSET): one in March 2024 arguing for a hybrid strategy for AI incident reporting, and another in January 2025 proposing a framework for such a strategy.

Everyone has a chance of being affected by an AI incident, so I thought it would be useful to go over these reports and explain what an effective framework would look like.

An Effective AI Incident Reporting Framework

Both reports emphasize that an effective framework for AI incident reporting should be both hybrid and standardized.

A hybrid framework means that multiple strategies are used for reporting. The strategies specified are mandatory, voluntary and citizen reporting.

Mandatory reporting requires organizations involved in an AI-related incident to report it to a designated body, such as a government agency.

Voluntary reporting encourages individuals and groups involved in an incident to report it.

Citizen reporting relies on journalists and watchdog organizations to report incidents that they find.

The reports emphasize the importance of including all three strategies in the framework, as each one in isolation carries its own limitations. Voluntary and citizen reporting usually leave out many incidents since there is no requirement to report. Mandatory reporting, while more effective at catching a wide range of incidents, might miss ones that fall outside the standard idea of what an AI-related incident is.

The other crucial criterion for an effective framework is standardization. Many of the problems with current AI incident reporting stem from the fragmentation of the organizations collecting data. Because of this, no widely followed, standard format for reporting an incident exists, and the variation in formats makes it difficult to compare and analyze reporting data effectively.

An effective reporting framework for AI incidents should be easy to use and understand, adaptable to the changing capabilities of AI, and able to provide functional data for analyzing AI harms and creating AI safety measures. With this in mind, the authors of the reports propose a set of metrics to include in a standard reporting format.

The authors propose that these metrics be a required part of the format for mandatory reporting. They should also be included in the format for voluntary reporting, but with some metrics left optional in order to lower the barrier to actually volunteering a report.
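
To make the idea of a standardized, machine-readable format more concrete, here is a minimal sketch of what such a report could look like, with required fields for mandatory reporting and optional ones for voluntary reporting. The specific field names below are my own illustrative assumptions, not the actual metrics proposed in the CSET reports.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical schema for illustration only -- the actual metrics
# proposed in the CSET reports may differ.
@dataclass
class AIIncidentReport:
    # Required fields (mandatory reporting would have to supply these)
    incident_date: str      # ISO 8601 date of the incident
    system_name: str        # AI system or model involved
    deployer: str           # organization operating the system
    harm_description: str   # what happened and who was affected
    severity: str           # e.g. "low" | "moderate" | "severe"

    # Optional fields (could be omitted in voluntary reports to
    # lower the barrier to submitting one)
    root_cause_hypothesis: Optional[str] = None
    model_version: Optional[str] = None
    mitigations_taken: Optional[str] = None

report = AIIncidentReport(
    incident_date="2025-01-15",
    system_name="ExampleChatAssistant",
    deployer="Example Corp",
    harm_description="Assistant produced self-harm instructions.",
    severity="severe",
)
```

Encoding the required/optional split directly in the schema would keep mandatory and voluntary reports comparable while still reducing the effort needed to volunteer one.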

On top of this, the authors recommend creating an independent investigation agency that can look into reported AI incidents from an objective, third-party position. This agency would conduct root-cause analyses of incidents, similar to what is done in other fields such as transportation and healthcare. These analyses could then inform new guidelines and policy initiatives that would help prevent similar incidents from occurring in the future.

Finally, the authors briefly mention the potential benefit of exploring automated data collection strategies, which are already used in many fields. In aviation, for example, flight recorders provide contextual information that gives investigators a better idea of what caused an incident. A similar strategy could be used to collect data on the internal workings of an AI model around the time of an incident in order to aid root-cause analysis.
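
To sketch what such an "AI flight recorder" might look like in practice, here is a minimal example that wraps model calls and appends the full interaction context to a log. The function names, logged fields, and file path are all hypothetical assumptions on my part; a real recorder would also need tamper resistance, privacy safeguards, and far richer internal telemetry.

```python
import json
import time
from typing import Callable

LOG_PATH = "incident_recorder.jsonl"  # hypothetical append-only log

def record_call(model_fn: Callable[[str], str],
                model_version: str, prompt: str) -> str:
    """Invoke a model and append the full interaction context to a log,
    so root-cause analysis has something to work from after an incident."""
    start = time.time()
    output = model_fn(prompt)
    entry = {
        "timestamp": start,
        "model_version": model_version,
        "prompt": prompt,
        "output": output,
        "latency_s": round(time.time() - start, 3),
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return output

# Example with a stand-in model function
if __name__ == "__main__":
    echo_model = lambda p: f"(model output for: {p})"
    record_call(echo_model, "demo-model-v1", "Hello")
```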


Incident reporting is a regular part of any technological field, and standards for it often solidify as a technology’s ability to disrupt society grows. AI seems to be on that path right now, which warrants a look at more effective incident reporting frameworks such as the one discussed in the CSET reports. Creating more robust frameworks before AI becomes even more ingrained in society would allow us to make better use of data from the incidents to come.

So if you found this framework interesting, feel free to read the 2024 report here and the 2025 report here. More research into these frameworks is definitely needed, and it could have a big impact on how we respond to AI incidents in the future!