ISO 42001:2023 - A.8.4 Feedback and Improvement

This article provides guidance on implementing ISO 42001:2023 control A.8.4, Feedback and Improvement.

ISO 42001 Control Description

The organisation shall establish mechanisms to collect, assess, and act upon feedback regarding AI system performance and impacts, and shall use feedback, together with operational performance data and lessons learned from incidents, to drive continuous improvement of AI systems and the management system that governs them.

Control Objective

To ensure that the organisation has effective channels for receiving information about AI system performance and impacts from a broad range of sources — including operational personnel, users, and affected individuals — and that this information is systematically analysed and used to improve both individual AI systems and the organisation's broader AI governance practices.

Purpose

Operational monitoring and internal performance measurement provide important signals about AI system behaviour, but they cannot provide a complete picture. Users who interact with AI systems day-to-day, and individuals whose interests are affected by AI-mediated decisions, may observe issues, identify mismatches between system behaviour and user needs, or experience adverse impacts that are not captured in system logs or performance metrics. Without structured mechanisms to collect and act upon this information, the organisation risks operating in a state of partial blindness, unable to learn from the full range of evidence available about how its systems are performing in practice.

Feedback mechanisms serve both an operational and an ethical function. Operationally, they provide intelligence that can inform improvements to system performance, design, and deployment. Ethically, they give voice to the interests and experiences of those affected by AI systems — including individuals who may lack other means of challenging decisions that have affected them — and signal the organisation's commitment to accountability and responsiveness.

The feedback and improvement process also supports the principle of continuous improvement that underpins effective management systems, creating a formal mechanism by which experience gained in operation is converted into tangible improvements across the AI lifecycle.


Guidance on Implementation

Feedback Collection Mechanisms

The organisation shall establish clearly defined and accessible mechanisms for receiving feedback about AI system performance and impacts. Mechanisms shall be designed to capture feedback from operational personnel responsible for system management; users who interact with the system directly; individuals affected by AI-mediated decisions, including those who wish to challenge outcomes; and any external stakeholders with relevant observations or concerns.

Feedback channels shall be communicated to relevant parties and shall be accessible in a manner proportionate to the risk profile of the system and the populations it affects. Where the AI system makes decisions that significantly affect individuals, the availability of feedback channels shall form part of the transparency information provided about the system.
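The stakeholder categories and channels described above can be captured in a simple internal data model. The following is a minimal sketch, not part of the standard: all class, field, and channel names are illustrative assumptions about how an organisation might record a feedback item.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class FeedbackSource(Enum):
    """Stakeholder categories named in the control (illustrative labels)."""
    OPERATIONAL_PERSONNEL = "operational_personnel"
    DIRECT_USER = "direct_user"
    AFFECTED_INDIVIDUAL = "affected_individual"
    EXTERNAL_STAKEHOLDER = "external_stakeholder"


@dataclass
class FeedbackRecord:
    """One item of feedback about an AI system's performance or impacts."""
    system_id: str
    source: FeedbackSource
    description: str
    channel: str                            # e.g. in-app form, email, complaints line
    is_decision_challenge: bool = False     # individual seeking reconsideration
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Example: a challenge to an AI-mediated decision, logged with its channel
# so accessibility of each channel can later be reviewed.
record = FeedbackRecord(
    system_id="credit-scoring-v2",
    source=FeedbackSource.AFFECTED_INDIVIDUAL,
    description="Applicant disputes an automated decline decision",
    channel="complaints line",
    is_decision_challenge=True,
)
```

Tagging each record with its source category and channel makes it straightforward to verify later that feedback is actually arriving from all four stakeholder groups the control names, rather than only from internal personnel.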

Assessment and Prioritisation of Feedback

Received feedback shall be assessed in a structured manner to determine its significance, including whether it indicates a performance deficiency, an unintended use pattern, a fairness concern, or a broader governance issue. Feedback shall be prioritised based on the severity of the concern raised and the potential for the underlying issue to affect a wider population or to indicate a systemic problem.

The assessment process shall ensure that feedback is considered in conjunction with operational monitoring data and incident records, enabling the organisation to detect patterns and correlations that might not be apparent from any single source.
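The prioritisation logic described above (severity of the concern, breadth of the affected population, and corroboration against monitoring and incident data) can be sketched as a simple scoring function. This is one possible approach under assumed scales; the enum values and weighting are illustrative, not prescribed by the standard.

```python
from enum import IntEnum


class Severity(IntEnum):
    """Assumed four-point severity scale for the concern raised."""
    LOW = 1
    MODERATE = 2
    HIGH = 3
    CRITICAL = 4


class Breadth(IntEnum):
    """Assumed scale for how widely the underlying issue may apply."""
    SINGLE_CASE = 1
    USER_GROUP = 2
    SYSTEMIC = 3


def priority_score(severity: Severity, breadth: Breadth,
                   corroborated: bool = False) -> int:
    """Higher score = handle first.

    Corroboration by operational monitoring data or incident records
    raises priority, reflecting the requirement to assess feedback in
    conjunction with other evidence sources.
    """
    score = int(severity) * int(breadth)
    if corroborated:
        score += 2
    return score
```

For example, a critical concern with systemic reach scores 12, while a low-severity single case corroborated by monitoring data scores 3; the ordering, not the absolute numbers, is what drives triage.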

Acting on Feedback

The organisation shall establish a process for translating feedback assessments into defined actions. Actions may include changes to the AI system through the change management process; updates to operational procedures or user guidance; revisions to risk assessments or documentation; broader governance or policy changes; and, where feedback relates to individual complaints or challenges, direct responses to the individuals concerned.

All feedback assessed as indicating a significant issue shall have a defined owner and shall be tracked to resolution. The organisation shall maintain records of feedback received, assessments conducted, actions taken, and outcomes achieved.
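The record-keeping requirement above (a defined owner, tracking to resolution, and a trail of assessments, actions, and outcomes) can be sketched as follows. This is a minimal illustration under assumed field names; a real implementation would sit in the organisation's ticketing or GRC tooling.

```python
from dataclasses import dataclass, field


@dataclass
class FeedbackAction:
    """Tracks one significant feedback item from assessment to resolution."""
    feedback_id: str
    owner: str           # defined owner, required for significant issues
    action_type: str     # e.g. "system_change", "procedure_update",
                         #      "individual_response" (illustrative labels)
    status: str = "open"
    log: list = field(default_factory=list)  # assessments, actions, outcomes

    def record(self, note: str) -> None:
        """Append an entry to the audit trail."""
        self.log.append(note)

    def resolve(self, outcome: str) -> None:
        """Close the item, recording the outcome achieved."""
        self.record(f"resolved: {outcome}")
        self.status = "resolved"


# Example lifecycle: assessed, actioned via change management, closed.
action = FeedbackAction(
    feedback_id="FB-2024-017",
    owner="ai-governance-lead",
    action_type="system_change",
)
action.record("assessed as fairness concern; routed to change management")
action.resolve("model retrained and user guidance updated")
```

Keeping the trail on the record itself means the evidence the control demands (feedback received, assessment, action, outcome) is produced as a by-product of doing the work, rather than reconstructed at audit time.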

Individual Redress

Where the AI system makes or informs decisions that affect individuals, the feedback process shall include a pathway for individuals to seek reconsideration of decisions or to raise concerns about the way in which the system has been applied to them. The organisation shall assess the adequacy of this pathway against applicable legal requirements, including any obligations arising from data protection or sector-specific legislation governing automated decision-making.

Continuous Improvement Cycle

The organisation shall conduct periodic reviews of feedback patterns, incident trends, and performance data to identify opportunities for systemic improvement. Improvement opportunities identified through these reviews shall be assessed against the organisation's AI objectives and risk management priorities, and decisions about which improvements to pursue shall be documented.
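The periodic review of feedback patterns described above can be supported by simple aggregation. The sketch below assumes each feedback item has been tagged with a theme during assessment; the threshold and field names are illustrative choices, not requirements of the standard.

```python
from collections import Counter


def recurring_themes(feedback_items: list[dict], min_count: int = 3) -> list[str]:
    """Surface themes that recur often enough across the review period
    to suggest a systemic improvement opportunity rather than a one-off.

    Each item is assumed to carry a 'theme' tag assigned at assessment.
    """
    counts = Counter(item["theme"] for item in feedback_items)
    return [theme for theme, n in counts.items() if n >= min_count]


# Example review input: three fairness-related items and one latency item.
items = [
    {"theme": "fairness"},
    {"theme": "fairness"},
    {"theme": "latency"},
    {"theme": "fairness"},
]
systemic = recurring_themes(items)
```

A recurring theme flagged this way would then be assessed against the organisation's AI objectives and risk priorities before any improvement is committed, as the control requires.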

Improvements shall be implemented through appropriate governance channels — including the change management process for system-level changes and the management review process for governance and policy changes — ensuring that improvements are implemented in a controlled manner.

Reporting Feedback Outcomes

The outcomes of feedback handling, including the improvements made as a result of feedback, shall be reported to relevant governance functions. Where feedback has revealed significant issues or led to material changes, these shall be reflected in updates to risk assessments, documentation, and, where relevant, in external communications about the system.


Related Controls

  • A.7.5 – AI System Monitoring: Monitoring data shall be considered alongside feedback to provide a comprehensive picture of system performance and to identify patterns.
  • A.8.2 – AI System Incident Management: Feedback indicating significant adverse impacts shall be assessed for classification as an incident and managed accordingly.
  • A.7.6 – AI System Change Management: Improvements arising from the feedback process that involve changes to the AI system shall be managed through the change management process.
  • A.6.1.1 – AI System Impact Assessment: Feedback regarding adverse impacts shall be used to inform updates to the impact assessment and to verify whether anticipated impacts were accurately assessed.
  • A.5.4 – Human Oversight of AI Systems: Individual feedback and redress pathways support meaningful human oversight by providing mechanisms for individuals to challenge AI system outputs.