ISO 42001:2023 - A.9.3 Objectives for Responsible Use of AI Systems
This article provides guidance on how to implement ISO 42001:2023 control A.9.3, Objectives for Responsible Use of AI Systems.
ISO 42001 Control Description
The organisation shall identify, document, and apply specific objectives that guide the responsible use of AI systems, ensuring that use activities are oriented towards defined goals relating to fairness, transparency, accountability, human oversight, and the protection of the interests of individuals and groups affected by AI system outputs.
Control Objective
To ensure that the use of AI systems within the organisation is directed by clearly articulated, documented objectives that reflect the organisation's ethical commitments and its obligations to affected stakeholders, providing a reference framework against which the responsible character of AI use activities can be assessed and improved.
Purpose
The responsible use of AI systems is not simply a matter of following defined procedures; it also requires that the organisation is oriented towards substantive goals that reflect what responsible use means in practice. Without documented objectives for responsible use, organisations risk treating compliance with process requirements as an end in itself, rather than as a means to achieving genuinely responsible outcomes for those affected by AI systems.
Responsible use objectives make explicit what the organisation is seeking to achieve through its AI governance activities. They create a reference point for evaluating whether current practices are effective, identifying areas where improvement is required, and communicating the organisation's commitments to external stakeholders. In this sense, responsible use objectives serve both an internal governance function and a transparency function, enabling the organisation to demonstrate the orientation of its AI use practices.
This control also recognises that different AI systems may require different responsible use objectives, reflecting the variation in their risk profiles, the nature of their intended applications, and the characteristics of the populations they affect. Responsible use objectives should therefore be established at both the organisational level — reflecting overarching commitments — and at the level of individual AI systems, where system-specific considerations apply.
Guidance on Implementation
Establishing Organisational-Level Responsible Use Objectives
The organisation shall establish a set of organisational-level objectives that define its overarching commitments to responsible AI use. These objectives shall address the AI governance principles that the organisation has committed to, including fairness and non-discrimination in AI outputs; transparency about the use of AI systems and their role in decisions affecting individuals; accountability for decisions informed or made by AI systems; protection of individual privacy and data rights; and meaningful human oversight where AI systems inform consequential decisions.
Organisational-level responsible use objectives shall be consistent with and shall elaborate upon the commitments expressed in the organisation's AI policy. They shall be documented, approved by an appropriate governance authority, and communicated to all personnel involved in AI system use.
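As one way of making "documented and approved" concrete, an organisational-level objective can be captured as a structured record. The following sketch is illustrative only; the field names, identifiers, and example values are hypothetical and are not prescribed by ISO 42001.

```python
# Illustrative sketch: a minimal record format for a documented
# organisational-level responsible use objective. All field names and
# values below are hypothetical examples, not requirements of the standard.

from dataclasses import dataclass, field

@dataclass
class ResponsibleUseObjective:
    objective_id: str
    principle: str           # e.g. fairness, transparency, accountability
    statement: str           # the objective itself
    approved_by: str         # governance authority that approved it
    indicators: list = field(default_factory=list)  # how progress is measured

obj = ResponsibleUseObjective(
    objective_id="ORG-01",
    principle="transparency",
    statement=("Individuals affected by AI-informed decisions are informed "
               "that an AI system contributed to the decision."),
    approved_by="AI Governance Board",
    indicators=["% of AI-informed decisions accompanied by a disclosure notice"],
)
```

A record in this shape gives each objective a stable identifier, a named approver, and at least one indicator, which supports the measurability and review requirements discussed below.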
Defining System-Specific Responsible Use Objectives
In addition to organisational-level objectives, the organisation shall define responsible use objectives specific to each AI system, reflecting the particular risks, use context, and affected populations associated with that system. System-specific objectives shall be informed by the results of risk and impact assessments conducted for the system and shall address the responsible use considerations that are most significant given the system's intended application.
System-specific objectives may address matters such as the specific fairness criteria applicable to the system's outputs, the transparency information to be provided to individuals affected by AI-informed decisions, the human oversight arrangements required for the system's use context, and the performance standards that must be maintained to ensure the system's outputs remain reliable and appropriate for use.
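Where a system-specific objective states a fairness criterion, it can often be expressed as a measurable check. The sketch below assumes a hypothetical objective of the form "the demographic parity difference in positive outcomes shall not exceed 0.05"; the threshold, group labels, and decision log are all invented for illustration.

```python
# Illustrative sketch only: one way a system-specific fairness objective
# (e.g. "demographic parity difference below 0.05") might be evaluated.
# The threshold, group labels, and data below are hypothetical.

def demographic_parity_difference(outcomes, groups):
    """Max difference in positive-outcome rate between any two groups.

    outcomes: parallel list of 0/1 decisions; groups: group label per decision.
    """
    rates = {}
    for g in set(groups):
        members = [o for o, gr in zip(outcomes, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Hypothetical decision log for an AI-assisted screening system
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(outcomes, groups)
objective_met = gap <= 0.05  # threshold drawn from the system-specific objective
```

In this example group "a" receives positive outcomes at a rate of 0.75 and group "b" at 0.25, so the gap of 0.50 breaches the illustrative threshold and the objective is not met, which would feed into the monitoring and review activities described below.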
Measurability and Monitoring of Objectives
Responsible use objectives shall, to the extent practicable, be stated in terms that are capable of measurement or assessment. The organisation shall define indicators or criteria that can be used to evaluate progress against each objective and shall establish a monitoring process to track performance against responsible use objectives on a periodic basis.
Where objectives cannot be expressed in fully quantitative terms, the organisation shall establish qualitative criteria and assessment approaches that enable a structured evaluation of whether the objective is being achieved. The results of monitoring activities shall be reported to relevant governance functions and shall be used to inform improvements in AI use practices.
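The combination of quantitative indicators and qualitative criteria described above can be sketched as a single structured evaluation. Everything in this example, including the objective identifiers, targets, measured values, and criteria, is hypothetical and shown only to illustrate how a periodic monitoring pass might report status per objective.

```python
# Illustrative sketch: a structured evaluation of responsible use
# objectives, mixing quantitative indicators with qualitative criteria.
# All names, targets, and measured values are hypothetical.

import operator

def evaluate_objectives(objectives):
    """Return a simple status report: objective id -> 'met' / 'not met'."""
    report = {}
    for obj in objectives:
        if obj["type"] == "quantitative":
            # compare the measured indicator against its target
            met = obj["comparator"](obj["measured"], obj["target"])
        else:
            # qualitative: all documented assessment criteria must hold
            met = all(obj["criteria_assessment"].values())
        report[obj["id"]] = "met" if met else "not met"
    return report

objectives = [
    {"id": "fairness-gap", "type": "quantitative",
     "measured": 0.03, "target": 0.05, "comparator": operator.le},
    {"id": "oversight-review", "type": "qualitative",
     "criteria_assessment": {"reviewer assigned": True,
                             "override log maintained": False}},
]

report = evaluate_objectives(objectives)
# fairness-gap is met (0.03 <= 0.05); oversight-review is not, because
# one of its qualitative criteria failed.
```

A report of this kind is what would then flow to the relevant governance functions and into management review, as the control requires.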
Alignment with External Frameworks and Obligations
Responsible use objectives shall reflect applicable external obligations, including legal and regulatory requirements governing AI use in the relevant jurisdiction and sector, contractual obligations to customers or partners regarding responsible AI practices, and the commitments made to affected individuals through transparency communications. Where external frameworks — such as national AI governance regulations or sector-specific codes of conduct — articulate specific responsible use expectations, the organisation's objectives shall be reviewed for consistency with those expectations.
Review and Update of Responsible Use Objectives
The organisation shall review responsible use objectives at planned intervals and following material changes to the AI system, its use context, or the regulatory environment. The review shall assess whether existing objectives remain appropriate and sufficient, whether new objectives are required in light of changes to the system's risk profile or use context, and whether monitoring activities have identified areas where objectives require strengthening. The outcomes of objective reviews shall be documented and, where objectives are revised, updated documentation shall be communicated to relevant personnel.
Integration with Performance Evaluation
Responsible use objectives shall be integrated into the organisation's broader AI performance evaluation activities, ensuring that the responsible dimensions of AI use are assessed alongside operational performance. Performance against responsible use objectives shall be included in management review reporting, enabling senior leadership to maintain visibility of the organisation's progress in achieving its responsible use commitments.
Related Controls
- A.2.2 – AI Policy: Responsible use objectives shall give practical expression to the principles and commitments established in the AI policy, providing a more granular reference framework for use governance.
- A.5.4 – Human Oversight of AI Systems: Human oversight requirements shall be reflected in responsible use objectives for systems whose outputs inform consequential decisions.
- A.9.2 – Processes for Responsible Use of AI Systems: Responsible use processes shall be designed and evaluated by reference to the responsible use objectives established under this control.
- A.9.4 – Intended Use of AI Systems: Responsible use objectives shall incorporate provisions to ensure that AI systems remain within their documented intended use throughout their operational life.
- A.8.3 – Documenting AI System Performance: Performance documentation shall address progress against responsible use objectives, enabling assessment of the responsible character of AI use over time.