ISO 42001:2023 - A.9.4 Intended Use of AI Systems
This article provides guidance on how to implement ISO 42001:2023 control A.9.4, Intended Use of AI Systems.
ISO 42001 Control Description
The organisation shall ensure that AI systems are used only in accordance with their documented intended use, and shall establish controls to prevent the deployment of AI systems for purposes, in contexts, or with populations for which they have not been designed, evaluated, and validated.
Control Objective
To protect the integrity of the organisation's responsible use commitments and the interests of individuals affected by AI system outputs by ensuring that AI systems are not applied beyond the boundaries established through the design, development, and evaluation processes, and that deviations from intended use are identified, managed, and prevented.
Purpose
The intended use of an AI system defines the bounded context within which the system has been designed to operate and within which its outputs have been evaluated for reliability, fairness, and appropriateness. When an AI system is used beyond this context — whether by applying it to tasks for which it was not designed, using it with population groups whose characteristics differ materially from those in the training data, or operating it in conditions that differ from those assumed during validation — the assurances established through design and evaluation processes no longer hold.
Unintended use may arise from deliberate decisions to repurpose a system, from organisational pressures to extend the utility of existing AI investments, or from gradual drift in which the boundaries of system use expand incrementally without formal assessment. In each case, the result is the same: the organisation relies on AI outputs in contexts where their reliability is unknown and where the risks to affected individuals have not been assessed.
This control recognises that maintaining fidelity to intended use is a core governance responsibility, not merely a technical matter. It requires the organisation to establish both the documentation that makes intended use explicit and the processes that prevent or manage deviations. In doing so, it supports the organisation's ability to stand behind its AI system outputs and to account for the decisions those outputs inform.
Guidance on Implementation
Documenting Intended Use
The organisation shall ensure that the intended use of each AI system is clearly and comprehensively documented, specifying the tasks or decisions the system is designed to support; the categories of individuals, data, or situations to which the system is intended to be applied; the operational environment in which the system is designed to function; and the boundaries of the system's validated scope, including any known limitations that restrict its applicability.
Intended use documentation shall be established during the requirements and design phases of the AI system lifecycle and shall be maintained as a controlled document, updated to reflect any changes to the system's scope that have been assessed and approved through the change management process.
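As a minimal sketch, the intended use record can be maintained as a structured, machine-readable artefact alongside the controlled document, which makes automated scope checks (discussed later in this article) straightforward. The Python sketch below is illustrative only: every field name and example value is an assumption for this article, not a schema prescribed by the standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntendedUseRecord:
    """Controlled record of an AI system's documented intended use."""
    system_id: str
    version: str                      # revision of this controlled document
    supported_tasks: list[str]        # tasks or decisions the system is designed to support
    in_scope_populations: list[str]   # individuals, data, or situations covered
    operational_environment: str      # conditions the system is designed to function in
    known_limitations: list[str]      # restrictions on the system's validated applicability
    approved_by: str                  # accountable governance authority
    approval_date: str                # date of the last approved change (ISO 8601)

# Hypothetical record for an illustrative ticket-triage model.
record = IntendedUseRecord(
    system_id="triage-support-v2",
    version="1.3",
    supported_tasks=["prioritise inbound support tickets"],
    in_scope_populations=["English-language tickets from business customers"],
    operational_environment="internal ticketing platform with human review of all outputs",
    known_limitations=["not validated for consumer complaints or legal correspondence"],
    approved_by="AI Governance Board",
    approval_date="2024-05-01",
)
```

Treating the record as an immutable, versioned object mirrors the controlled-document requirement: any change to scope produces a new version through the change management process rather than an in-place edit.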
Communicating Intended Use to Users
The organisation shall ensure that all individuals authorised to use an AI system have a clear understanding of its intended use, including both what the system is designed to support and what it is not designed or validated to do. Intended use information shall be incorporated into user training and shall be referenced in operational process documentation.
Where AI systems are provided to or used by personnel in multiple functions or locations, the organisation shall ensure that intended use information is communicated in a manner that is accessible and comprehensible to all relevant users, and that it is reinforced through periodic training and awareness activities.
Controls to Prevent Use Beyond Intended Scope
The organisation shall implement controls to prevent the use of AI systems beyond their documented intended scope. Controls shall address the processes for approving new use cases before they are operationalised, the requirements for re-evaluation of the system before its use is extended to new contexts or populations, and the channels through which users can raise concerns about use activities that appear to exceed intended boundaries.
Where technically feasible, the organisation shall consider implementing system-level controls — such as input validation, use case restrictions, and output monitoring — that provide automated support for maintaining use within intended boundaries. Such technical controls shall complement, but shall not substitute for, the governance processes and human oversight arrangements that maintain responsible use.
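Where such system-level controls are feasible, one option is a simple enforcement point in front of the model that rejects requests falling outside the documented scope. The sketch below is a hypothetical illustration, assuming the intended use record from the earlier example: the approved use cases, validated languages, and function names are all assumptions, and real gating criteria would be drawn from the system's validation records.

```python
APPROVED_USE_CASES = {"ticket-triage"}   # taken from the intended use record
VALIDATED_LANGUAGES = {"en"}             # languages covered by validation activities

class OutOfScopeUseError(Exception):
    """Raised when a request falls outside the documented intended use."""

def enforce_intended_use(use_case: str, language: str) -> None:
    """Reject requests whose declared use case or inputs exceed the validated scope."""
    if use_case not in APPROVED_USE_CASES:
        raise OutOfScopeUseError(f"use case '{use_case}' is not approved for this system")
    if language not in VALIDATED_LANGUAGES:
        raise OutOfScopeUseError(f"language '{language}' is not covered by validation")

def predict(use_case: str, language: str, ticket_text: str) -> str:
    enforce_intended_use(use_case, language)
    # ... invoke the model only after the scope checks pass ...
    return "priority: high"  # placeholder output for illustration
```

Raising an explicit error, rather than silently degrading, gives the governance processes a signal to act on: each rejection is a candidate record for the monitoring and concern-raising channels described above.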
Managing Use Expansion and Use Case Changes
The organisation shall establish a formal process for assessing and approving any proposed extension of an AI system's use beyond its documented intended scope. This process shall require that proposed use expansions are assessed against the results of existing risk and impact assessments, and that additional assessment activities are conducted where the proposed extension introduces materially new risks or affects populations not previously considered.
Use expansions shall not be operationalised without formal approval by an accountable governance authority and, where the extension is material, without appropriate updates to the system's documentation, risk assessment, and validation records. The organisation shall treat informal or unapproved use expansion as a governance deficiency and shall address such instances through its incident and corrective action processes.
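One way to make the approval gate concrete is to represent each use expansion as a request record that cannot be operationalised until assessment, re-validation, and sign-off are all in place. The following sketch is illustrative only; the field names and approval conditions are assumptions for this article, not requirements taken from the standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UseExpansionRequest:
    system_id: str
    proposed_use_case: str
    risk_assessment_updated: bool   # existing risk and impact assessments reviewed
    validation_extended: bool       # re-evaluation covers the new context or population
    approver: Optional[str]         # accountable governance authority, if approved

def may_operationalise(request: UseExpansionRequest) -> bool:
    """Permit a use expansion only when assessment, validation, and approval all hold."""
    return (
        request.risk_assessment_updated
        and request.validation_extended
        and request.approver is not None
    )

# A request with an outstanding validation gap stays blocked.
pending = UseExpansionRequest(
    system_id="triage-support-v2",
    proposed_use_case="consumer-complaint-triage",
    risk_assessment_updated=True,
    validation_extended=False,
    approver=None,
)
assert not may_operationalise(pending)
```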
Monitoring for Unintended Use
The organisation shall establish monitoring activities to identify instances where AI systems are being used, or are being proposed for use, beyond their intended scope. Monitoring shall include periodic review of use records and patterns, assessment of use expansion proposals, and consideration of information received through feedback and incident reporting channels. Where monitoring identifies instances of unintended use, the organisation shall investigate promptly, address any immediate risks, and implement corrective measures to prevent recurrence.
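As an illustration of such monitoring, the sketch below scans use records for declared use cases that fall outside the approved set and counts them for investigation. The log schema and field names are assumptions made for the example; in practice the records would come from the system's event logging.

```python
from collections import Counter

APPROVED_USE_CASES = {"ticket-triage"}   # from the intended use record

def flag_unintended_use(use_log: list[dict]) -> Counter:
    """Count requests whose declared use case is outside the approved set."""
    out_of_scope = Counter()
    for entry in use_log:
        use_case = entry.get("use_case", "undeclared")
        if use_case not in APPROVED_USE_CASES:
            out_of_scope[use_case] += 1
    return out_of_scope

# Illustrative log: one approved use, two occurrences of an unapproved use case.
log = [
    {"use_case": "ticket-triage"},
    {"use_case": "employee-screening"},
    {"use_case": "employee-screening"},
]
print(flag_unintended_use(log))  # Counter({'employee-screening': 2})
```

A recurring out-of-scope pattern, such as the repeated unapproved use case above, is exactly the kind of incremental drift this control is intended to surface before it becomes established practice.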
Related Controls
- A.6.2.2 – AI System Requirements and Specification: The intended use documentation maintained under this control shall be grounded in the requirements specification established during the AI system lifecycle.
- A.6.2.4 – AI System Verification and Validation: Validation activities define the scope of conditions under which the AI system has been demonstrated to perform adequately; intended use controls shall reflect the boundaries established by those activities.
- A.9.2 – Processes for Responsible Use of AI Systems: Responsible use processes shall incorporate intended use controls as a core component, ensuring that use parameters reflect and reinforce the documented scope of the system.
- A.9.3 – Objectives for Responsible Use of AI Systems: Maintaining fidelity to intended use shall be reflected in the responsible use objectives established for each AI system.
- A.7.6 – AI System Change Management: Any approved extension of an AI system's intended use shall be managed as a material change through the change management process, ensuring appropriate assessment and documentation.