A standard Risk Register for AI & LLMs

CRAID have developed a comprehensive list of AI business and system risks, cross-referenced to AVID SEP, MITRE ATLAS, and the OWASP ML & LLM lists where relevant. The following is an extract only (to avoid consumption of our IP by LLMs and competitors alike).

CRAID Standard AI Risk Register (extract)

Risk name (taxonomy cross-references): Transformation expectations (CRAID SA.TX)
Description: Leadership team expectations of business outcomes (competitiveness, cost reduction, headcount reduction, revenue, process change, product value) are unrealistic when compared to the capability of current AI systems.
Probability: MED to HIGH. It is hard not to get carried away with the hype around AI.
Potential impact: Failure (real or perceived) of AI initiatives.

Risk name (taxonomy cross-references): Confidentiality and IP (CRAID SA.IP, AVID S0200, MITRE ATL.T0001)
Description: Reliance on, or use of, potentially vulnerable or unprotectable publicly available research, models, architectures and data in development; use of third-party and open-source models and applications with unknown training data provenance; use of third-party models and applications with known training data provenance but with undeveloped or unrealistic Copyright or IP policies.
Probability: HIGH to VERY HIGH. Publishers are already taking legal action to protect their works from being used as training data.
Potential impact: Individual and class actions and/or publicity relating to Copyright and IP theft or misuse.

Risk name (taxonomy cross-references): Software vulnerability (AVID S0100)
Description: Exploitation of a vulnerability in the system around the AI model (a traditional vulnerability).
Probability: VERY LOW to MED. In line with the percentage of high-criticality defects in existing systems.
Potential impact: Failure of protective systems and guardrails.

Risk name (taxonomy cross-references): Supply chain compromise (AVID S0200)
Description: Compromising development components of an ML model, e.g. data, model, hardware, and software stack.
Probability: MED to HIGH, depending on the level of inspection of model and software dependencies (including foundation models and open-source components).
Potential impact: Complete behavioural failure of AI systems.

Risk name (taxonomy cross-references): Model inversion (AVID S0501)
Description: It is possible to reconstruct training data through strategic queries.
Probability: MED to VERY HIGH, depending on the level of exfiltration testing and guardrails.
Potential impact: If the model is exposed to external users, loss of confidentiality, privacy and IP. Even if for internal use only, bypassing of cyber security controls over data assets.
Table: Extract from the CRAID Standard AI Risks checklist. Treatment and mitigation columns are available in the full table.
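A register structured like this, with taxonomy cross-references per risk, maps naturally onto a simple data structure. The following is a minimal sketch in Python; the field names and the example entry are illustrative only and do not represent CRAID's internal schema:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One row of an AI risk register, with taxonomy cross-references."""
    name: str
    description: str
    probability: str                  # e.g. "MED to VERY HIGH"
    potential_impact: str
    cross_refs: list[str] = field(default_factory=list)  # e.g. AVID / MITRE ATLAS / OWASP IDs

# Illustrative entry, taken from the extract above.
model_inversion = Risk(
    name="Model inversion",
    description="It is possible to reconstruct training data through strategic queries.",
    probability="MED to VERY HIGH",
    potential_impact="Loss of confidentiality, privacy and IP.",
    cross_refs=["AVID S0501"],
)

# A register is then just a list of entries that can be filtered by taxonomy reference.
register = [model_inversion]
avid_risks = [r for r in register if any(ref.startswith("AVID") for ref in r.cross_refs)]
```

Keeping cross-references as plain identifier strings makes it straightforward to join a register against external taxonomies such as AVID or MITRE ATLAS as they evolve.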

To hear how you can access the full AI Risk Register, contact CRAID.