Our approach to AI risk management

First of all, what types of AI systems are at risk?

Along with the incredible potential of AI-based systems to automate, advise and generate content comes a range of risks, some of them complex, that are often qualitatively different from those faced when deploying more conventional applications.

CRAID have developed an approach to evaluating and mitigating the business and technical risks that arise from using AI technology to power new applications and generate new outputs. These AI-powered applications include, for example:

  • LLM-based chatbots fine-tuned with private data
  • Generated content and images/video being used by conventional applications
  • Off-the-shelf Office Productivity tools and co-pilots with new AI-powered capabilities
  • AI systems with radically better-than-human task automation capabilities

CRAID APPROACH TO AI RISK MANAGEMENT

BUSINESS & AI CONTEXT

  • Business objectives
  • Business risk appetite
  • Business application requirements
  • Risk Governance processes & templates
  • Cyber Security, Confidentiality & Privacy controls
  • Supply-chain cyber security process
  • ESG, Regulation & Framework objectives
  • AI ESG risk appetite
  • AI development capabilities
  • AI application & model testing capabilities
  • Software process maturity
  • Data Science capabilities
  • Foundation technology selections
  • Data availability
  • AI and guardrail functional and behavioural requirements
  • Red Team formation

IDENTIFY

  • AI risk classifications
  • Standard AI risk checklist
  • Applicable frameworks, standards and regulations (MITRE ATLAS, OWASP, NIST, NCSC, the EU AI Act, etc.)
  • Customised in-house framework for understanding business, behavioural and technical AI risk
  • Foundation AI model risk identification
  • Open Source risk identification
  • AI application & model vulnerability testing results (current)
  • Review of known AI failures, behavioural anomalies and breaches
  • AI risk register

EVALUATE

  • Mapping of vulnerability and test results to a common AI Risk framework
  • Mapping of foundation model contract, open source licensing and testing risks to the common framework
  • Identified gaps in AI testing, monitoring and contractual cover
  • Business analysis of impact (financial, reputational, regulatory)
  • AI risk register (with impacts)

TREAT

  • Risk treatment plan (avoid, accept, reduce/prevent, transfer)
  • Avoid (e.g. by excluding selected foundation models and open source components, or changing system behavioural requirements)
  • Transfer (to insurers, foundation model providers, etc.)
  • Reduce/prevent (e.g. by changing AI application & model vulnerability tests, or introducing a human in the loop (HITL))
  • Accept (there will always be AI risks you must live with)

MEASURE

  • AI application & model vulnerability testing results (updated)
  • AI risk register (updated with treated values)
  • Assessment of any changes in Context
  • Assessment of any gaps in the risk identification stage
  • Escalations to the Corporate Risk Register

HOW WE DEVELOPED OUR APPROACH TO AI RISK

There is, of course, a lot of existing AI risk & governance material (frameworks, playbooks, tools, governance methods, threat models and regulations) already around that you may be familiar with. These have all fed into the CRAID approach, including MITRE ATLAS, OWASP, NIST, NCSC guidance and the EU AI Act.

Find out about our classification of AI Risks

See an extract from our Standard AI Risk Register

AI & LLM Red teaming

CRAID recommends assembling a Red Team to build and review the AI Risk Register, drawing on expertise across AI technology, Law, Product Management and Customer Behaviour, and potentially areas such as Knowledge Management and Data Science. CRAID consultants with extensive experience in AI, Risk Management, Systems Development, Cybersecurity and Product Strategy are available to support AI Red Teams.

We have adopted the Microsoft planning guidelines for LLM red teaming to support clients wishing to take advantage of the 2024 update to the Microsoft Copilot Copyright Commitment.
