ISO 42001:2023 - A.9.2 Processes for Responsible Use of AI Systems

This article provides guidance on how to implement ISO 42001:2023 control A.9.2, Processes for Responsible Use of AI Systems.

ISO 42001 Control Description

The organisation shall establish and document processes that govern the responsible use of AI systems within the organisation, addressing the authorisation, oversight, and management of AI use activities to ensure that AI systems are used in a manner consistent with the organisation's policies, applicable legal obligations, and ethical commitments.

Control Objective

To ensure that the use of AI systems within the organisation is governed by defined, documented, and consistently applied processes that prevent misuse, support accountability, and maintain alignment between AI use activities and the organisation's responsibilities to its stakeholders, to affected individuals, and to applicable regulatory frameworks.

Purpose

The deployment of AI systems into operational use does not, of itself, ensure responsible use. Without explicit processes governing how AI systems may be used, by whom, and under what conditions, organisations face significant risks: systems may be applied to purposes beyond their intended scope, outputs may be acted upon without adequate human review, and accountability for decisions informed by AI may become diffuse or unclear.

Responsible use processes serve to operationalise the organisation's AI policy within the day-to-day activities of those who interact with AI systems. They establish the guardrails that translate high-level commitments — to fairness, transparency, human oversight, and legal compliance — into concrete operational requirements. Without these processes, even well-designed AI systems can be used in ways that compromise those commitments.

This control recognises that the organisation's obligations do not end at the point of system deployment. The manner in which AI systems are used on an ongoing basis is as consequential for outcomes as any aspect of their design or development. Processes for responsible use therefore represent a critical layer of governance that complements technical controls and lifecycle management activities.


Guidance on Implementation

Defining Permitted Use Parameters

The organisation shall document the parameters within which each AI system may be used, including the scope of permissible use cases, the categories of input data that may be processed, and the categories of decision or action that AI outputs may inform or support. These parameters shall be grounded in the system's intended use as defined in the system concept documentation and the results of impact and risk assessments.

Use parameters shall explicitly address any categories of use that are prohibited, including use cases that would extend the system beyond its validated scope, use of the system with populations or data types for which it has not been evaluated, and any application of the system in contexts where its use would conflict with applicable law or organisational policy.
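Use parameters of this kind can be kept machine-readable so that permitted and prohibited uses are checked consistently rather than interpreted ad hoc. The following is a minimal sketch, assuming a simple record per system; all identifiers (system names, use-case labels) are hypothetical, and ISO 42001 does not prescribe this structure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UseParameters:
    # Hypothetical record of permitted use parameters for one AI system.
    system_id: str
    permitted_use_cases: frozenset
    permitted_input_categories: frozenset
    prohibited_use_cases: frozenset

    def is_use_permitted(self, use_case: str, input_category: str) -> bool:
        # A use is permitted only if it is explicitly in scope, not
        # prohibited, and applied to an input category the system has
        # been evaluated for.
        return (
            use_case in self.permitted_use_cases
            and use_case not in self.prohibited_use_cases
            and input_category in self.permitted_input_categories
        )

params = UseParameters(
    system_id="cv-screening-v2",
    permitted_use_cases=frozenset({"shortlisting-support"}),
    permitted_input_categories=frozenset({"cv-text"}),
    prohibited_use_cases=frozenset({"automated-rejection"}),
)
```

Recording prohibited uses explicitly, rather than relying on the absence of a permission, makes deliberate exclusions (such as fully automated rejection in this example) auditable in their own right.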

Authorisation and Access Controls

The organisation shall establish a process for authorising individuals and functions to use each AI system, ensuring that access to AI systems is granted only to personnel with the requisite competencies and on a need-to-use basis. Authorisation processes shall include assessment of whether the individual or function has received appropriate training and guidance on responsible use requirements.

Access control mechanisms shall be implemented to restrict AI system use to authorised individuals and to support the audit of use activities. Where AI systems produce outputs that are used to inform consequential decisions, the organisation shall ensure that authorised use includes clear accountability for reviewing and acting upon those outputs.
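An authorisation decision of the kind described above combines two checks: evidence of training and a documented need to use the system. A minimal sketch, with illustrative user and system identifiers:

```python
def authorise_use(user: str, system_id: str,
                  training_completed: set, need_to_use: dict) -> bool:
    # Grant use of an AI system only when the requester has completed
    # the system-specific training AND has a recorded need to use it.
    has_training = (user, system_id) in training_completed
    has_need = system_id in need_to_use.get(user, set())
    return has_training and has_need

# Illustrative records: b.smith has a documented need but no training.
training_completed = {("a.jones", "cv-screening-v2")}
need_to_use = {
    "a.jones": {"cv-screening-v2"},
    "b.smith": {"cv-screening-v2"},
}
```

Keeping the two conditions separate means a denial can be traced to its cause (missing training versus no documented need), which supports the audit of use activities.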

Human Oversight Requirements

For AI systems whose outputs may affect individuals or inform consequential decisions, the organisation shall define the human oversight requirements that apply to the use of those outputs. These requirements shall specify the level of human review required before outputs are acted upon, the competencies required of the individuals performing that review, and the circumstances under which escalation or independent review is required.

Human oversight requirements shall be proportionate to the risk profile of the AI system and the potential consequences of acting on its outputs. The organisation shall ensure that defined oversight requirements are communicated to all users and are embedded in operational workflows, so that oversight cannot be bypassed as a matter of routine practice.
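Proportionate oversight can be expressed as an explicit policy table that maps system risk tier and output characteristics to a required review level, so the rule is embedded in the workflow rather than left to individual judgment. The tiers, threshold, and review levels below are assumptions for the sketch, not values drawn from the standard:

```python
def required_review(risk_tier: str, confidence: float) -> str:
    # Map a system's risk tier and the output's confidence score to a
    # required level of human review before the output is acted upon.
    if risk_tier == "high" or confidence < 0.6:
        return "independent-review"   # escalate to a second, independent reviewer
    if risk_tier == "medium":
        return "individual-review"    # a trained user reviews before acting
    return "periodic-spot-check"      # low risk: sampled review only
```

Note that a low-confidence output escalates regardless of tier, reflecting the requirement that circumstances for escalation be defined in advance rather than decided case by case.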

Documenting Use Processes

The organisation shall document the processes governing AI system use at a level of detail sufficient to enable consistent application by all authorised users. Documentation shall address standard operating procedures for AI system use; input data preparation and quality requirements; procedures for reviewing and acting upon AI outputs; escalation and exception handling procedures; and requirements for recording use activities.

Use process documentation shall be maintained under version control and shall be updated whenever material changes occur to the AI system, its intended use context, or the results of risk and impact assessments.
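The version-control requirement implies that each material change produces a new document version with a recorded trigger. A minimal sketch of such a record, with illustrative field names:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessDocument:
    # Hypothetical version-controlled record of a use-process document.
    system_id: str
    version: int = 1
    history: list = field(default_factory=list)

    def record_material_change(self, trigger: str) -> int:
        # Append the superseded version and its trigger (e.g. a system
        # change or a revised impact assessment), then increment.
        self.history.append((self.version, trigger))
        self.version += 1
        return self.version

doc = ProcessDocument(system_id="cv-screening-v2")
doc.record_material_change("revised impact assessment")
```

The point of the history list is that the reason for each revision is retained alongside the version number, so reviewers can confirm that updates were driven by the triggers the control requires.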

Training and Awareness for Users

The organisation shall ensure that all individuals authorised to use AI systems receive appropriate training on the responsible use processes applicable to each system. Training shall address the system's intended purpose and limitations, applicable use parameters and prohibited uses, human oversight requirements, escalation and reporting procedures, and the individual's accountability in relation to AI-informed decisions.

Training records shall be maintained and used to confirm that individuals are appropriately prepared before being granted authorisation to use an AI system.
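A training-record check used to gate authorisation can also enforce a validity window, so that stale training does not satisfy the requirement indefinitely. The 365-day window below is an assumption for the sketch; the standard does not specify one:

```python
from datetime import date, timedelta

VALIDITY = timedelta(days=365)  # assumed re-training interval

def training_current(records: dict, user: str, system_id: str,
                     today: date) -> bool:
    # True only if a completion date is on record for this user and
    # system, and it falls within the validity window.
    completed = records.get((user, system_id))
    return completed is not None and today - completed <= VALIDITY

# Illustrative record set.
records = {("a.jones", "cv-screening-v2"): date(2024, 1, 15)}
```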

Monitoring and Assurance of Responsible Use

The organisation shall establish mechanisms to monitor compliance with responsible use processes on an ongoing basis. Monitoring activities shall include periodic review of use logs and records, assessment of whether use activities remain within defined parameters, and review of any escalations or incidents arising from AI use. Findings from monitoring activities shall be reported to relevant governance functions and shall be used to inform improvements to use processes.
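The periodic review of use logs can be partially automated by comparing each logged use against the system's permitted parameters and flagging exceptions for governance review. A sketch under assumed log and parameter shapes, with illustrative data:

```python
def flag_out_of_parameter_use(use_log, permitted_use_cases):
    # use_log: iterable of (user, system_id, use_case) tuples.
    # permitted_use_cases: dict mapping system_id -> set of use cases.
    # Returns log entries whose use case is outside the defined parameters.
    return [
        entry for entry in use_log
        if entry[2] not in permitted_use_cases.get(entry[1], set())
    ]

use_log = [
    ("a.jones", "cv-screening-v2", "shortlisting-support"),
    ("a.jones", "cv-screening-v2", "automated-rejection"),
]
permitted = {"cv-screening-v2": {"shortlisting-support"}}
findings = flag_out_of_parameter_use(use_log, permitted)
```

Findings from such a check would feed the reporting and process-improvement loop described above; the automated comparison supplements, rather than replaces, periodic human review of escalations and incidents.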


Related Controls

  • A.2.2 – AI Policy: Responsible use processes shall operationalise the commitments and principles established in the AI policy, ensuring that policy requirements are translated into day-to-day practice.
  • A.5.4 – Human Oversight of AI Systems: Human oversight requirements established under this control shall be consistent with the organisation's broader human oversight framework and shall be designed to support meaningful human review of AI outputs.
  • A.6.2.2 – AI System Requirements and Specification: Use parameters shall be grounded in the intended use and constraints documented in the requirements specification.
  • A.9.3 – Objectives for Responsible Use of AI Systems: Responsible use processes shall support achievement of the objectives established for the responsible use of each AI system.
  • A.9.4 – Intended Use of AI Systems: Use processes shall incorporate controls to prevent use of AI systems beyond their documented intended use.