
ISO 42001:2023 - A.2.4 Review of the AI Policy

This article provides guidance on how to implement the ISO 42001:2023 control A.2.4, Review of the AI Policy.

ISO 42001 Control Description

The AI policy shall be reviewed at planned intervals or additionally as needed to ensure its continuing suitability, adequacy and effectiveness.

Control Objective

To provide management direction and support for AI systems according to business requirements.

Purpose

To ensure the AI policy remains current, relevant, and effective as the organisation's AI activities, technology landscape, regulatory environment, and business context evolve. Regular review prevents the policy from becoming outdated or misaligned with organisational needs.

Guidance on Implementation

Establishing Review Responsibility

A role approved by management should be responsible for:

  • Development of the AI policy
  • Periodic review and evaluation of the policy
  • Proposing updates and improvements
  • Coordinating with other policy owners for alignment (see Control A.2.3)

This role might be:

  • Chief AI Officer or AI governance lead
  • Information Security Manager (if AI is within ISMS scope)
  • Risk Manager or Compliance Officer
  • Quality Manager
  • Designated AI policy owner

Planned Review Intervals

Organisations should establish:

a) Regular review schedule - typically:
  • Annually for most organisations
  • More frequently (e.g., quarterly or semi-annually) for organisations with rapidly evolving AI usage, high-risk AI systems, or operating in highly regulated sectors
  • Less frequently (e.g., every 18-24 months) only if AI usage is minimal and stable

b) Triggers for additional review (unplanned reviews), including:

  • Significant changes to AI systems (new high-risk AI deployments, major system updates)
  • Changes to organisational strategy or business model affecting AI use
  • New legal or regulatory requirements affecting AI (e.g., EU AI Act implementation)
  • Changes to risk environment (new AI-related threats, incidents, or vulnerabilities)
  • Results from management review indicating policy inadequacy
  • Findings from internal or external audits
  • Significant AI-related incidents or near-misses
  • Stakeholder feedback indicating policy gaps
  • Organisational restructuring affecting AI responsibilities
  • Mergers, acquisitions, or divestitures
  • Changes to applicable standards (updates to ISO/IEC 42001 or related standards)
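The interplay between planned intervals and unplanned triggers can be sketched in code. The sketch below is illustrative only: the interval names and durations are assumptions chosen to match the guidance above, not ISO 42001 terminology, and each organisation should substitute its own schedule and trigger list.

```python
from datetime import date, timedelta

# Illustrative review intervals by risk profile (names and durations are
# assumptions based on the guidance above, not mandated by ISO 42001).
REVIEW_INTERVALS = {
    "high_risk": timedelta(days=90),       # quarterly
    "standard": timedelta(days=365),       # annually
    "minimal_stable": timedelta(days=548), # ~18 months
}

def review_due(last_review: date, profile: str,
               triggers_fired: list[str], today: date) -> bool:
    """A planned review is due once the interval has elapsed;
    an unplanned review is due as soon as any trigger fires."""
    if triggers_fired:
        return True
    return today >= last_review + REVIEW_INTERVALS[profile]
```

For example, `review_due(date(2024, 1, 15), "standard", [], date(2025, 1, 20))` returns `True` because the annual interval has elapsed, while a fired trigger such as a new regulation makes a review due immediately regardless of the schedule.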

What the Review Should Assess

The review should evaluate:

a) Suitability - Is the policy still appropriate for the organisation's AI activities and context?
  • Has the organisation's AI usage expanded or changed?
  • Are new types of AI systems being deployed?
  • Has the organisation's role changed (e.g., from AI user to AI developer)?

b) Adequacy - Does the policy adequately address all necessary aspects?
  • Are all AI-related risks covered?
  • Are guiding principles still complete and relevant?
  • Are roles and responsibilities clearly defined?
  • Does it address emerging AI technologies or use cases?

c) Effectiveness - Is the policy achieving its intended outcomes?
  • Is the policy being followed in practice?
  • Are AI objectives being met?
  • Have there been policy violations or exceptions?
  • Do personnel understand and apply the policy?

Review Inputs

The policy review should consider:

a) Changes to the organisational environment:
  • New business strategies or objectives
  • Organisational restructuring
  • Changes to risk appetite
  • Stakeholder expectations

b) Changes to business circumstances:
  • New products or services using AI
  • Entry into new markets or sectors
  • Changes to customer base or use cases

c) Changes to legal conditions:
  • New regulations (e.g., EU AI Act, sector-specific AI rules)
  • Updated data protection requirements
  • Case law or regulatory guidance
  • Contractual obligations

d) Changes to technical environment:
  • Emergence of new AI technologies (e.g., generative AI adoption)
  • New tools or platforms
  • Technical vulnerabilities or threats
  • Best practices evolution

e) Management review results (from Clause 9.3):
  • Performance of the AI management system
  • Opportunities for improvement
  • Changes needed to AIMS

f) Audit findings:
  • Internal audit results
  • External audit or certification findings
  • Surveillance audit observations

g) Incident analysis:
  • AI-related security incidents
  • Safety incidents involving AI
  • Bias or discrimination incidents
  • Performance failures

h) Lessons learned:
  • From AI system deployments
  • From policy implementation challenges
  • From industry incidents or case studies

Implementation Steps

Organisations should:

1. Document the review process - Define procedures for policy review including:
  • Who conducts the review
  • Review frequency and triggers
  • Inputs to consider
  • Approval process for changes
  • Communication of updates

2. Schedule reviews - Establish planned review dates and communicate them to the responsible role

3. Conduct systematic reviews - Use a checklist or template to ensure all aspects are assessed consistently

4. Document review outcomes - Maintain records showing:
  • When the review was conducted
  • Who conducted it
  • What was assessed
  • Findings and conclusions
  • Changes recommended or made
  • Approval of changes

5. Update the policy as needed - When changes are identified:
  • Draft proposed updates
  • Obtain management approval
  • Version the policy appropriately
  • Communicate changes to affected personnel
  • Update related documents and policies (see Control A.2.3)

6. Monitor effectiveness - Track whether policy updates achieve intended improvements
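The review record described in step 4 can be captured in any format; as a minimal sketch, the fields might be modelled like this. The field names are illustrative, not mandated by the standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PolicyReviewRecord:
    """Minimal review record mirroring the fields in step 4.
    Field names are illustrative, not ISO 42001 requirements."""
    review_date: date
    reviewer: str
    scope_assessed: list[str]   # e.g. suitability / adequacy / effectiveness
    findings: list[str]
    changes_recommended: list[str]
    approved_by: str = ""       # empty until management approval is recorded

    def is_approved(self) -> bool:
        return bool(self.approved_by)

record = PolicyReviewRecord(
    review_date=date(2025, 3, 1),
    reviewer="AI governance lead",
    scope_assessed=["suitability", "adequacy", "effectiveness"],
    findings=["Policy does not yet cover generative AI use"],
    changes_recommended=["Add generative AI usage principles"],
)
```

Keeping approval as an explicit, initially empty field makes it easy to show auditors which reviews have completed the approval step and which are still pending.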

Key Considerations

Integration with management review: Policy review should be informed by and feed into the management review process (Clause 9.3). Consider conducting policy review shortly before or as part of management review to ensure alignment.

Version control: Maintain clear version control of the AI policy, including:

  • Version number and date
  • Summary of changes
  • Approval signatures
  • Previous versions archived
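One simple convention for versioning policy updates is a major/minor scheme: bump the major version for significant changes (which may also warrant training, as noted below) and the minor version for editorial ones. This is a sketch of one possible convention, not a requirement of the standard; use whatever scheme your document control procedure defines.

```python
def next_version(current: str, significant: bool) -> str:
    """Bump a 'major.minor' policy version string: major for
    significant changes, minor for editorial ones. The scheme
    is an assumed convention, not an ISO 42001 requirement."""
    major, minor = map(int, current.split("."))
    return f"{major + 1}.0" if significant else f"{major}.{minor + 1}"
```

For example, `next_version("2.3", True)` gives `"3.0"`, while `next_version("2.3", False)` gives `"2.4"`.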

Communication of changes: When the policy is updated:

  • Notify all affected personnel
  • Provide training if changes are significant
  • Update related procedures and work instructions
  • Ensure external parties (suppliers, partners) are informed if relevant

Don't review in isolation: Review the AI policy in the context of other policies to maintain alignment (Control A.2.3). If multiple policies are updated, coordinate timing and messaging.

Evidence for audits: Policy review records are key evidence for ISO 42001 certification audits. Ensure reviews are documented with clear evidence of assessment, decision-making, and follow-up actions.

Related Controls

Within ISO/IEC 42001:

  • A.2.2 AI policy
  • A.2.3 Alignment with other organisational policies
  • Management review (Clause 9.3)
  • Continual improvement (Clause 10.1)

Integration with ISO 27001 (if applicable):

  • A.5.1 Policies for information security (review requirements)