This article outlines how to complete the AI Register (AI Inventory).
Purpose of the AI Register
The AI Register (also called the AI Inventory) is your centralised record of all AI/ML systems in use within your organisation — whether officially deployed or operating informally as "shadow/unapproved AI."
It supports:
- AI governance and risk management
- Compliance with regulations such as the EU AI Act, ISO 42001, and GDPR
- Internal transparency, audit readiness, and responsible innovation
Scope
Include all AI/ML tools, such as:
- Internal models and algorithms (e.g. predictive scoring, NLP tools)
- Third-party tools or APIs using AI (e.g. OpenAI, AWS Rekognition)
- Off-the-shelf AI features in SaaS products
- Shadow or unapproved AI use (e.g. unauthorised use of ChatGPT, automation scripts)
How to Complete Each Field
| Field | What to Enter | Why It Matters |
|---|---|---|
| System / Tool Name | Enter the name of the AI tool, model, or product (e.g. "ChatGPT", "Internal Risk Model v2") | Identifies the asset clearly in audits and reports |
| Business Use Case | Briefly describe what the tool is used for (e.g. "automated CV screening", "customer churn prediction") | Helps determine risk level, transparency needs, and regulatory scope |
| Owner | Name the team or person responsible for the tool | Assigns accountability for oversight, risk, and data governance |
| Vendor / Source | List the source: Internal / Open-source / Vendor name (e.g. Google, HuggingFace, SAP) | Clarifies third-party reliance and applicable licensing or security review steps |
| Risk Category | Use the drop-down to select from: Minimal / Limited / High / Unacceptable | Helps identify and prioritise the controls required |
| Approval | Yes / No: was it approved by the appropriate data, legal, or IT authority? | Helps identify shadow AI and ensure proper governance has taken place |
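If your team keeps the register in a spreadsheet export or a small script, one register row can be modelled directly on the columns above. The sketch below is illustrative only (the class and field names are assumptions, not part of any prescribed template); it shows one row per entry and validates the Risk Category against the four drop-down values.

```python
from dataclasses import dataclass

# Allowed values for the Risk Category drop-down, as listed in the table above.
RISK_CATEGORIES = {"Minimal", "Limited", "High", "Unacceptable"}


@dataclass
class RegisterEntry:
    """One row of the AI Register; fields mirror the table columns."""
    system_name: str
    business_use_case: str
    owner: str
    vendor_source: str
    risk_category: str
    approved: bool  # the Yes / No Approval column

    def __post_init__(self) -> None:
        # Reject free-text risk values so the register stays consistent.
        if self.risk_category not in RISK_CATEGORIES:
            raise ValueError(
                f"Risk Category must be one of {sorted(RISK_CATEGORIES)}, "
                f"got {self.risk_category!r}"
            )


# Example: an unapproved tool that should later be flagged as shadow AI.
entry = RegisterEntry(
    system_name="ChatGPT",
    business_use_case="Drafting customer emails",
    owner="Marketing",
    vendor_source="OpenAI",
    risk_category="Limited",
    approved=False,
)
```

Validating at entry time keeps the Risk Category field usable for later filtering and reporting.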
Best Practices
- Centralise ownership: Assign a responsible person per tool — ideally someone with oversight of risks and outputs.
- Update regularly: Add new tools as they are piloted or adopted. Conduct periodic reviews to remove unused or replaced systems.
- Categorise risks: Use the "Business Use Case" and "Approval" fields to prioritise oversight (e.g. anything involving people or money may be high-risk).
- Flag unknown tools: Use "No" in the Approval field to flag and investigate unauthorised use.
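The last practice above — using the Approval field to surface unauthorised tools — amounts to a simple filter over the register. A minimal sketch, assuming the register is held as a list of dicts with illustrative field names and sample rows:

```python
# Sample register rows; field names and values are assumptions for illustration.
register = [
    {"tool": "Internal Risk Model v2", "approval": "Yes"},
    {"tool": "ChatGPT", "approval": "No"},
    {"tool": "AWS Rekognition", "approval": "Yes"},
]

# Any row with "No" in the Approval column is a shadow-AI candidate
# to investigate during the periodic review.
shadow_ai = [row["tool"] for row in register if row["approval"] == "No"]
print(shadow_ai)
```

The same filter works in a spreadsheet as a column filter on Approval = "No"; the point is that the field makes shadow AI mechanically discoverable rather than anecdotal.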
Benefits of Completion
- Helps determine which AI systems may fall under EU AI Act "high-risk" categories
- Supports GDPR compliance by documenting systems that process personal data
- Builds the foundation for AI-specific risk management under ISO 42001
- Provides a clear audit trail for internal or external assessments
- Enhances ISO 42001 Clause 5.3 accountability by clearly assigning roles to each AI system