
ISO 42001: 2023 - A.6.2.7 AI System Deployment

This article provides guidance on how to implement the ISO 42001: 2023 A.6.2.7 AI System Deployment control.

ISO 42001 Control Description

The organisation shall manage the deployment of AI systems in a planned and controlled manner, ensuring that deployment activities are authorised, that operational safeguards are in place prior to deployment, and that the deployed system is monitored from the point at which it enters operational use.

Control Objective

To ensure that AI systems are introduced into operational use in a structured, risk-aware manner that preserves the integrity of the production environment, validates system behaviour under live conditions, and establishes the monitoring and oversight mechanisms required to manage the system throughout its operational life.


Purpose

Deployment is the transition point at which an AI system begins to produce outputs that affect real-world decisions, processes, or individuals. This transition introduces risks that cannot always be fully anticipated during development and testing — including integration with live data sources, exposure to the full diversity of operational inputs, and interaction with user behaviours that may differ from assumptions made during design.

Effective deployment governance ensures that this transition is managed carefully, with appropriate controls to detect problems early, limit the scope of exposure in the event of system failure, and enable rapid response when issues arise. It also ensures that the accountability structures and oversight mechanisms required for responsible AI operation are in place before the system begins to affect individuals or organisational outcomes.

This control addresses the deployment event itself and the immediate post-deployment period, recognising that the initial period of live operation is particularly important for identifying whether a system performs as expected when exposed to conditions that could not be fully replicated in testing.


Guidance on Implementation

Deployment Authorisation

Deployment of an AI system shall be subject to formal authorisation by an accountable party within the organisation. Authorisation shall be contingent on the satisfactory completion of verification and validation activities as required under A.6.2.6, including any mandatory pre-deployment review. The deployment authorisation shall be documented, including the identity of the authorising party, the date of authorisation, and any conditions or constraints attached to the deployment.

Systems that have not completed required testing or that have known deficiencies exceeding acceptable risk thresholds shall not be authorised for deployment until those deficiencies have been addressed or accepted through a formally documented risk acceptance process.
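The authorisation gate described above can be sketched as a simple record and check. This is an illustrative example only, not a structure mandated by the standard; all field and class names are assumptions:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DeploymentAuthorisation:
    """Illustrative record of a documented deployment authorisation."""
    system_id: str
    authorised_by: str          # identity of the accountable authorising party
    authorised_on: date
    conditions: list[str] = field(default_factory=list)  # constraints attached to the deployment
    vv_complete: bool = False   # A.6.2.6 verification and validation completed
    deficiencies_addressed_or_accepted: bool = True      # residual issues formally risk-accepted

    def may_deploy(self) -> bool:
        # Deployment proceeds only if V&V is complete and any known
        # deficiencies have been addressed or formally risk-accepted.
        return self.vv_complete and self.deficiencies_addressed_or_accepted
```

A record like this captures the who, when, and under-what-conditions elements the control requires, and the `may_deploy` check encodes the rule that untested or deficient systems are not released.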

Deployment Planning

The organisation shall prepare a deployment plan for each AI system that addresses the sequence and scope of deployment activities, any staged or phased deployment approach, rollback procedures, and the criteria that would trigger rollback or suspension of the system. The deployment plan shall also address communication to users and operators about the system's deployment, including any relevant information about its capabilities, limitations, and intended use.

Where deployment involves integration with existing systems or processes, integration testing in the target environment shall be completed before the system is released for operational use.
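A deployment plan with pre-agreed rollback triggers might be sketched as follows. The metric names and threshold semantics are illustrative assumptions; an organisation would substitute its own criteria:

```python
from dataclasses import dataclass, field

@dataclass
class RollbackTrigger:
    """A pre-agreed condition that would trigger rollback or suspension."""
    metric: str       # e.g. "error_rate" (illustrative metric name)
    threshold: float  # value beyond which rollback is triggered

    def fired(self, observed: float) -> bool:
        return observed > self.threshold

@dataclass
class DeploymentPlan:
    """Illustrative plan covering stages and rollback criteria."""
    system_id: str
    stages: list[str]  # e.g. ["pilot", "partial rollout", "full rollout"]
    rollback_triggers: list[RollbackTrigger] = field(default_factory=list)

    def should_roll_back(self, metrics: dict[str, float]) -> bool:
        # Roll back if any observed metric breaches its agreed trigger;
        # metrics absent from the report are treated as not breaching.
        return any(t.fired(metrics.get(t.metric, 0.0))
                   for t in self.rollback_triggers)
```

Defining the triggers in the plan, before deployment, is what makes rollback a controlled procedure rather than an ad hoc reaction.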

Staged and Phased Deployment

Where the risk profile of the AI system warrants it, the organisation shall adopt a staged deployment approach that initially limits the scope, scale, or user population exposed to the system. Staged deployment allows system performance to be assessed under live conditions at limited scale before broader rollout, reducing the potential impact of issues that emerge post-deployment.

Criteria for progression between deployment stages shall be defined in advance, including performance thresholds and the absence of material issues, and progression shall require documented review and authorisation.
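The stage-progression gate above can be expressed as a single check combining the three conditions: thresholds met, no open material issues, and an approved review. The function and parameter names are illustrative:

```python
def may_progress(stage_metrics: dict[str, float],
                 thresholds: dict[str, float],
                 open_material_issues: int,
                 review_approved: bool) -> bool:
    """Illustrative gate for progressing between deployment stages.

    Progression requires that every pre-defined performance threshold
    is met, no material issues remain open, and a documented review
    has been approved.
    """
    thresholds_met = all(stage_metrics.get(name, float("-inf")) >= required
                         for name, required in thresholds.items())
    return thresholds_met and open_material_issues == 0 and review_approved
```

Note that a metric missing from the stage report fails its threshold by default, so progression cannot happen through incomplete measurement.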

Operational Safeguards

Prior to deployment, operational safeguards relevant to the system's risk profile shall be confirmed as being in place. These include human oversight mechanisms where the system's outputs inform consequential decisions, escalation pathways for cases where system outputs are uncertain or disputed, and user guidance or training appropriate to the system's intended use.

Where the system operates autonomously or with limited human oversight, the rationale for this configuration shall have been reviewed and accepted through the risk management process, and compensating controls shall be in place.
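The human-oversight and escalation safeguards described above can be sketched as a simple routing rule. The confidence threshold and routing labels are assumptions for illustration, not requirements of the control:

```python
def route_output(confidence: float,
                 consequential: bool,
                 threshold: float = 0.8) -> str:
    """Illustrative escalation rule for AI system outputs.

    Outputs that inform consequential decisions, or whose confidence
    falls below an agreed threshold, are routed to a human reviewer
    rather than acted on automatically.
    """
    if consequential or confidence < threshold:
        return "human_review"   # escalation pathway
    return "auto"               # output used directly, within its intended use
```

The key design point is that the escalation pathway exists and is exercised before deployment, so uncertain or disputed outputs have somewhere to go from day one.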

User and Operator Communication

The organisation shall ensure that users and operators of the AI system are provided with adequate information about its capabilities, intended use, known limitations, and the conditions under which its outputs should and should not be relied upon. This information shall be documented and made available to relevant parties before or at the point of deployment.

Communication shall be sufficient to enable informed use of the system and to support appropriate human oversight of its outputs in operational contexts.

Post-Deployment Monitoring Initiation

Monitoring of the deployed system shall commence from the point of deployment. Initial monitoring shall be appropriately intensive, with the organisation paying particular attention to system performance, error rates, and any indicators of unexpected behaviour during the early operational period. The monitoring framework established under A.6.2.8 and the broader operational monitoring requirements shall be activated as part of the deployment process.
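One way to make initial monitoring "appropriately intensive" is to tighten alert thresholds during an early hyper-care window after deployment. The window length, baseline, and tightening factor below are illustrative assumptions:

```python
from datetime import datetime, timedelta

def error_rate_alert_threshold(deployed_at: datetime,
                               now: datetime,
                               baseline: float = 0.05,
                               hypercare_days: int = 14,
                               hypercare_factor: float = 0.5) -> float:
    """Illustrative: stricter alerting during the early operational period.

    During the first `hypercare_days` after deployment, alerts fire at a
    lower error-rate threshold so unexpected behaviour surfaces early;
    after that, the baseline operational threshold applies.
    """
    if now - deployed_at <= timedelta(days=hypercare_days):
        return baseline * hypercare_factor
    return baseline
```

This pattern operationalises the requirement that monitoring is active from the point of deployment and pays particular attention to the initial period.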


Related Controls

  • A.6.2.6 – AI System Verification and Validation: Deployment shall be conditional on the satisfactory completion of verification and validation activities and an approved pre-deployment review.
  • A.6.2.8 – AI System Documentation: Deployment authorisation records, deployment plans, and post-deployment monitoring records form part of the AI system documentation.
  • A.7.5 – AI System Monitoring: The monitoring framework shall be operational from the point of deployment, with particular attention to system behaviour during the initial operational period.
  • A.6.1.2 – AI Risk Assessment: The deployment approach, including the use of staged rollout and operational safeguards, shall be informed by the risk assessment.
  • A.9.3 – AI System Supply Chain: Where deployment involves third-party infrastructure or operational services, supply chain controls shall be confirmed as being in place prior to deployment.