Our approach to AI risk management
First of all, which types of AI system are at risk?
Along with the enormous potential of AI-based systems to automate tasks, advise users, and generate content come a number of risks, often complex and qualitatively different from those faced when deploying more conventional applications.
CRAID have developed an approach to evaluating and mitigating the business and technical risks that arise from using AI technology to power new applications and generate new outputs. These AI-powered applications include, for example:
- LLM-based chatbots fine-tuned with private data
- AI-generated content, images, and video consumed by conventional applications
- Off-the-shelf Office Productivity tools and co-pilots with new AI-powered capabilities
- AI systems with radically better-than-human task automation capabilities

HOW WE DEVELOPED OUR APPROACH TO AI RISK
There is, of course, a great deal of existing AI risk and governance material (frameworks, playbooks, tools, governance methods, threat models, and regulations) that you may already be familiar with. All of these have fed into the CRAID approach, including:
- NIST AI Risk Management Framework (AI RMF) and Playbook
- The EU AI Act
- OWASP Top Ten lists for Machine Learning security and LLM applications
- The MITRE ATLAS attack framework for AI based systems
- The AI Vulnerability Database (AVID) taxonomy
- The Cross-industry standard process for data mining, CRISP-DM
- The Garak LLM vulnerability scanner
- The Giskard quality management system for ML systems (and tools)
- NCSC guidelines for secure AI development

AI & LLM RED TEAMING
CRAID recommends assembling a Red Team to build and review the AI Risk Register, drawing on expertise across AI technology, law, product management, and customer behaviour, and potentially areas such as knowledge management and data science. CRAID consultants with extensive experience in AI, risk management, systems development, cybersecurity, and product strategy are available to support AI Red Teams.
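To make this concrete, one entry of the kind such a Red Team might record can be sketched in Python. Everything below (the field names, the likelihood x impact scoring) is purely illustrative and is not CRAID's actual register schema:

```python
from dataclasses import dataclass, field
from enum import Enum


class Likelihood(Enum):
    RARE = 1
    POSSIBLE = 2
    LIKELY = 3


class Impact(Enum):
    LOW = 1
    MODERATE = 2
    SEVERE = 3


@dataclass
class RiskEntry:
    """One illustrative row of an AI risk register (hypothetical schema)."""
    risk_id: str
    description: str
    likelihood: Likelihood
    impact: Impact
    owner: str
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood x impact product; a real register may
        # use a weighted or qualitative scheme instead.
        return self.likelihood.value * self.impact.value


# Example: a prompt-injection risk for an LLM chatbot fine-tuned on private data
entry = RiskEntry(
    risk_id="AI-001",
    description="Prompt injection exfiltrates private fine-tuning data",
    likelihood=Likelihood.LIKELY,
    impact=Impact.SEVERE,
    owner="Red Team lead",
    mitigations=["Input filtering", "Output moderation", "Least-privilege data access"],
)
print(entry.score)  # 3 * 3 = 9
```

A structured entry like this lets the Red Team sort and prioritise risks by score and track mitigation ownership as the register is reviewed.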
We have adopted Microsoft's planning guidelines for LLM red teaming to support clients wishing to take advantage of the updated (2024) Microsoft Copilot Copyright Commitment.
