AI Incident Tracker

This work is also hosted by the MIT AI Risk Repository.

All incidents in the AI Incident Database have been processed using an LLM and classified according to the MIT Risk Repository causal and domain taxonomies. The severity of harm and National Security Impact have been assessed for each incident.
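As a rough illustration of what this kind of pipeline involves, the sketch below sends one incident's report text to an LLM and asks for a structured classification. It is not the tool's actual implementation: the model name, prompt wording and output fields are illustrative assumptions.

```python
# Illustrative sketch only - not the tool's actual pipeline or prompts.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You are classifying an AI incident against the MIT AI Risk Repository.
Return JSON with keys:
  "causal": the causal taxonomy labels (entity, intent, timing),
  "domain": the most relevant risk domain from the domain taxonomy,
  "harm_severity": an integer from 1 (minimal) to 5 (critical),
  "rationale": one or two sentences explaining the classification.
Incident reports:
{reports}
"""

def classify_incident(reports_text: str) -> dict:
    """Classify one incident's concatenated report text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # illustrative model choice
        response_format={"type": "json_object"},  # request structured JSON output
        messages=[{"role": "user", "content": PROMPT.format(reports=reports_text)}],
    )
    return json.loads(response.choices[0].message.content)
```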

This is intended as a proof of concept to explore the potential capabilities and limitations of a scalable incident analysis framework.
This blog post discusses the background, the approach taken, preliminary results and next steps.

Please feel free to explore the analysis through the dashboard pages below and share feedback.

What’s New? Major Update - 23 June 2025

Latest incidents from AIID added

  • This brings the dataset fully up to date with the AI Incident Database as at 23 June 2025 (up to incident ID #1116).

LLM Analysis

  • Some updates have been made to how the tool processes reports in order to improve the validity of the analysis.

  • The full dataset has been classified with this latest iteration of the tool; all incidents classified with the previous version have been reclassified to ensure consistency across the dataset.

Harm Severity

  • The harm severity analysis uses a new scale, simplifying scoring to 1-5 points (previously 0-10).

  • I have tried to remove ambiguities from the definitions in the scale so that incidents can be graded more consistently and objectively.

  • Impact Profile - visualises the reported harm in each category as a spider chart, making it easy to compare incidents or identify profiles of interest (see the plotting sketch below).
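For anyone curious how a spider chart like the Impact Profile can be produced, here is a minimal matplotlib sketch. The harm categories and scores are made-up examples, not the categories or data used in the dashboard.

```python
# Minimal spider-chart sketch; categories and scores are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

categories = ["Physical", "Psychological", "Financial", "Reputational", "Societal"]
scores = [2, 4, 1, 3, 5]  # illustrative 1-5 harm severity scores

# Close the polygon by repeating the first point at the end
angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories)
ax.set_ylim(0, 5)  # severity scale runs 1-5
ax.set_title("Impact Profile (illustrative)")
plt.show()
```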

National Security Impact Assessment

  • Assesses the NatSec impact of each incident across five categories, using this framework: Physical Security & Critical Infrastructure, Information Warfare & Intelligence Security, Sovereignty & Government Functions, Economic & Technological Security, and Societal Stability & Human Rights.

  • Classifies each threat for Imminence, Novelty and Autonomy.

  • The NatSec Incident View presents the NatSec Impact Profile as a spider chart; a sketch of the underlying record structure follows below.
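To make the structure concrete, here is a minimal sketch of what a single NatSec assessment record could look like. The five category names are the ones listed above; the field names and Low/Medium/High levels are assumptions for illustration, not the tool's actual schema.

```python
# Illustrative record structure only - field names and levels are assumed.
from dataclasses import dataclass, field
from typing import Dict

NATSEC_CATEGORIES = [
    "Physical Security & Critical Infrastructure",
    "Information Warfare & Intelligence Security",
    "Sovereignty & Government Functions",
    "Economic & Technological Security",
    "Societal Stability & Human Rights",
]

@dataclass
class NatSecAssessment:
    incident_id: int
    # One score per category; these values drive the spider chart
    category_scores: Dict[str, int] = field(default_factory=dict)
    # Threat classifications described above
    imminence: str = "Unknown"
    novelty: str = "Unknown"
    autonomy: str = "Unknown"

example = NatSecAssessment(
    incident_id=1116,
    category_scores={c: 0 for c in NATSEC_CATEGORIES},
    imminence="Low",
    novelty="Medium",
    autonomy="Low",
)
```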

Potential Causes of Incident

  • A Fishbone/Ishikawa diagram presents a number of potential causes for each incident, organised by category (Incident View); see the sketch below.
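Conceptually, the input to a fishbone diagram is just a set of potential causes grouped by category. The small example below is hypothetical and is not output from the tool.

```python
# Hypothetical cause categories and causes, not output from the tool.
potential_causes = {
    "Data": ["Training data under-represents the affected group"],
    "Model": ["Model optimises for engagement over accuracy"],
    "Deployment": ["No human review before automated action"],
    "Oversight": ["No escalation process for reported failures"],
}

# Print as a simple text outline (the dashboard draws this as a fishbone)
for category, causes in potential_causes.items():
    print(category)
    for cause in causes:
        print(f"  - {cause}")
```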

Ambiguities and Alternative Interpretations

  • New ‘Ambiguities identified’ and ‘Alternative interpretations’ fields make it easier to spot analyses where further review of the reports, or further investigation, is required.

Primary Goal of AI Systems

The analysis now includes the primary goal of the AI system involved in each incident, classified according to a taxonomy based on the AIID’s Goals, Methods and Failures (GMF) framework. (Risk Classification)

Explore the AI Incident Tracker:

  • Impact Profile

  • National Security Impact Profile

  • Potential Causes - Fishbone Diagram

Example outputs: