Classifying AI Risk
It’s useful to have a set of standardised risk categories to build on. CRAID has developed a Strategic AI Risk taxonomy and uses the existing AVID framework for risk classification at the AI system level.
CRAID Strategic AI Risk Taxonomy
Before you knowingly embark on AI development, usage or experimentation, it is worth considering a set of strategic business risks that most organisations will face from the sudden availability of new AI technologies, both to themselves and to their competitors. Note that it is very likely you have already started using AI technologies in your business without knowing it.
Organisations will hopefully consider these kinds of questions when contemplating the use of AI for business change. But this is going to be a time of highly accelerated business change, and it is worth spending time “thinking the unthinkable.”

CRAID AI Strategic Risk/Opportunity Taxonomy
| | STRATEGIC | OPERATIONAL |
|---|---|---|
| PEOPLE | Employee engagement | Organisational learning<br>Decentralised AI adoption |
| MARKET | Customer and market needs and requirements<br>Competition intelligence | Market engagement and education |
| PROCESS | Technology awareness (operational) | Data, AI engineering and data science capabilities<br>Product and service introduction velocity |
| LEADERSHIP | Transformation enthusiasm & fear<br>Compound & unthinkable change<br>Technology awareness (strategic) | Project & programme management<br>Change management |
| GRC | Governance as usual<br>Confidentiality and IP | Risk appetite calibration<br>Legalities and liabilities<br>AI and Privacy Regulations |
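As a sketch of how this grid might be operationalised, it can be held as a simple data structure and flattened into a risk-review checklist. The structure and function names below are our own illustration, not part of CRAID itself.

```python
# Illustrative sketch: the CRAID strategic grid as a data structure.
# The dict layout and helper name are assumptions, not CRAID's own.
CRAID_STRATEGIC = {
    "PEOPLE": {
        "strategic": ["Employee engagement"],
        "operational": ["Organisational learning", "Decentralised AI adoption"],
    },
    "MARKET": {
        "strategic": ["Customer and market needs and requirements",
                      "Competition intelligence"],
        "operational": ["Market engagement and education"],
    },
    "PROCESS": {
        "strategic": ["Technology awareness (operational)"],
        "operational": ["Data, AI engineering and data science capabilities",
                        "Product and service introduction velocity"],
    },
    "LEADERSHIP": {
        "strategic": ["Transformation enthusiasm & fear",
                      "Compound & unthinkable change",
                      "Technology awareness (strategic)"],
        "operational": ["Project & programme management", "Change management"],
    },
    "GRC": {
        "strategic": ["Governance as usual", "Confidentiality and IP"],
        "operational": ["Risk appetite calibration",
                        "Legalities and liabilities",
                        "AI and Privacy Regulations"],
    },
}

def checklist(view: str) -> list[str]:
    """Flatten one column of the grid into 'AREA: item' review lines."""
    return [f"{area}: {item}"
            for area, columns in CRAID_STRATEGIC.items()
            for item in columns[view]]
```

Iterating `checklist("strategic")` or `checklist("operational")` gives a starting agenda for a risk-review workshop, one line per cell entry.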
AI System Risk Taxonomy
The effect view of the AI Vulnerability Database (AVID) taxonomy is very useful as a standardised classification of risks at the AI model or system/application level. AVID is an open-source knowledge base of failure modes for Artificial Intelligence (AI) models, datasets and systems. These risks sit beneath your organisation’s strategic business AI risks, and apply once you have decided to implement AI technology. The three high-level categories are Security, Ethics and Performance.
SECURITY
Software Vulnerability
Supply-chain Compromise
- Model compromise
- Software compromise
Over-permissive API
- Information leak
- Excessive queries
Model Bypass
- Bad features
- Insufficient training data
- Adversarial example
Exfiltration
- Model inversion
- Model theft
Data Poisoning
- Ingest poisoning
ETHICS
Bias/discrimination
- Group fairness
- Individual fairness
Explainability
- Global
- Local
User Actions
- Toxicity
- Polarisation/exclusion
Misinformation
- Deliberative
- Generative
PERFORMANCE
Data Issues
- Data drift
- Concept drift
- Data entanglement
- Data quality issues
- Feedback loops
Model Issues
- Resilience/stability
- OOD generalisation
- Scaling
- Accuracy
Privacy
- Anonymisation
- Randomisation
- Encryption
Safety
- Psychological
- Physical
- Socioeconomic
- Environmental
The benefit of adopting this taxonomy is that it is used by many organisations and LLM vulnerability scanners, allowing you to correlate vulnerabilities detected in your systems with the above categories of AI risk.
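As a sketch of that correlation, scanners that report AVID-style effect tags (of the form `avid-effect:<category>:<code>`) can have their findings bucketed under the three high-level categories by parsing the tag. The tag format follows AVID’s published scheme, but the specific codes and findings below are invented for illustration; check the AVID documentation for canonical identifiers.

```python
from collections import defaultdict

# Map the middle segment of an AVID-style effect tag
# (e.g. "avid-effect:security:S0403") to the taxonomy's
# three high-level categories.
HIGH_LEVEL = {"security": "Security",
              "ethics": "Ethics",
              "performance": "Performance"}

def bucket_findings(findings):
    """Group scanner findings by AVID high-level category.

    `findings` is an iterable of (description, avid_tag) pairs;
    tags that don't follow the avid-effect scheme are collected
    under "Unclassified".
    """
    buckets = defaultdict(list)
    for description, tag in findings:
        parts = tag.split(":")
        if len(parts) == 3 and parts[0] == "avid-effect":
            buckets[HIGH_LEVEL.get(parts[1], "Unclassified")].append(description)
        else:
            buckets["Unclassified"].append(description)
    return dict(buckets)

# Invented example findings, tagged AVID-style:
report = bucket_findings([
    ("prompt injection leaks system prompt", "avid-effect:security:S0301"),
    ("model produces toxic output", "avid-effect:ethics:E0301"),
    ("accuracy degrades on drifted data", "avid-effect:performance:P0100"),
])
```

A summary like this lets scanner output from different tools roll up into the same Security/Ethics/Performance view used above.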