
ISO 42001:2023 - A.5.4 Assessing AI System Impact on Individuals and Groups of Individuals

This article provides guidance on how to implement ISO 42001:2023 Control A.5.4, Assessing AI System Impact on Individuals and Groups of Individuals.

ISO 42001 Control Description

The organisation shall assess and document the potential impacts of AI systems to individuals or groups of individuals throughout the system's life cycle.

Control Objective

To assess AI system impacts to individuals or groups of individuals, or both, and societies affected by the AI system throughout its life cycle.

Purpose

To systematically identify and evaluate how AI systems may affect individual people and specific demographic or social groups, enabling the organisation to understand, mitigate, and manage impacts on human rights, dignity, autonomy, safety, privacy, fairness, and wellbeing.

Guidance on Implementation

Assessing Impacts on Individuals

The organisation should assess potential impacts on individuals who:

  • Use the AI system directly
  • Are subject to decisions made by or influenced by the AI system
  • Have their personal data processed by the AI system
  • Are indirectly affected by the AI system's operation

Assessing Impacts on Groups

The organisation should assess potential impacts on groups of individuals, including:

a) Demographic groups:
  • Based on protected characteristics (age, gender, race, ethnicity, disability)
  • Socioeconomic groups
  • Geographic or cultural communities
b) Vulnerable groups requiring specific protection:
  • Children
  • Elderly persons
  • Persons with disabilities (physical, cognitive, sensory impairments)
  • Workers (particularly those subject to AI-based management or monitoring)
  • Marginalised or underrepresented communities
c) Role-based groups:
  • Employees, customers, citizens, patients, students
  • Specific professional or occupational groups
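As a purely illustrative sketch (not part of the standard), the affected-party categories above could be captured in a simple register so that vulnerable groups automatically trigger additional safeguards; all class and field names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AffectedGroup:
    """One individual or group potentially affected by the AI system."""
    name: str                      # e.g. "job applicants"
    category: str                  # "demographic", "vulnerable", or "role-based"
    protected_characteristics: list = field(default_factory=list)
    requires_specific_protection: bool = False

groups = [
    AffectedGroup("children", "vulnerable", requires_specific_protection=True),
    AffectedGroup("job applicants", "role-based"),
    AffectedGroup("persons with disabilities", "vulnerable",
                  ["disability"], requires_specific_protection=True),
]

# Groups flagged as requiring specific protection should receive
# extra attention in the assessment (see Annex B.5.4 considerations below)
needing_protection = [g.name for g in groups if g.requires_specific_protection]
```

A register like this makes it harder to silently omit a group from later assessment steps.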

Types of Impacts to Consider (ISO/IEC 42005 Clauses 6.7 and 6.8)

a) Human rights and dignity:
  • Respect for human dignity
  • Freedom and autonomy
  • Non-discrimination and equality
  • Privacy and data protection
  • Access to justice and effective remedy
b) Safety and physical wellbeing:
  • Physical harm or injury
  • Health impacts (mental and physical)
  • Safety risks from system failures
c) Fairness and non-discrimination:
  • Disparate treatment of individuals or groups
  • Disparate impact (neutral policies with discriminatory effects)
  • Bias in decision-making
  • Access inequality
d) Privacy:
  • Collection and use of personal data
  • Surveillance and monitoring
  • Profiling and automated decision-making
  • Right to be forgotten
e) Autonomy and agency:
  • Influence on decision-making
  • Manipulation or deception
  • Loss of human control or oversight
  • Dependency on AI systems
f) Economic impacts:
  • Employment and livelihood
  • Access to services and opportunities
  • Economic inequality
  • Labour rights and working conditions
g) Social and psychological impacts:
  • Social inclusion or exclusion
  • Stigmatisation or discrimination
  • Psychological wellbeing
  • Trust and confidence
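To make sure every impact category above is explicitly considered for each affected group, the seven categories could be encoded as an enumeration driving a checklist; this is an illustrative sketch, not prescribed by the standard:

```python
from enum import Enum

class ImpactCategory(Enum):
    """The seven impact types (a-g) listed above, labels paraphrased."""
    HUMAN_RIGHTS = "human rights and dignity"
    SAFETY = "safety and physical wellbeing"
    FAIRNESS = "fairness and non-discrimination"
    PRIVACY = "privacy"
    AUTONOMY = "autonomy and agency"
    ECONOMIC = "economic impacts"
    SOCIAL = "social and psychological impacts"

# One checklist per affected group: None means "not yet assessed",
# so unassessed categories are visible rather than silently skipped
checklist = {category: None for category in ImpactCategory}
```

An explicit "not yet assessed" state distinguishes a category that was evaluated and found low-risk from one that was never looked at.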

Considerations for Assessment (ISO/IEC 42001 Annex B.5.4)

When assessing impacts, the organisation should:

a) Consider governance principles, AI policies, and objectives - Align assessment with organisational commitments

b) Evaluate trustworthiness expectations - Individuals who use the system, or whose data it processes, hold expectations about the AI system's trustworthiness that the assessment should take into account

c) Account for specific protection needs:

  • Children: Developmental vulnerability, limited capacity to consent
  • Elderly persons: Potential technological unfamiliarity, specific accessibility needs
  • Persons with disabilities: Accessibility requirements, potential for discriminatory treatment
  • Workers: Power imbalances, surveillance concerns, job security
d) Assess reasonably foreseeable impacts:
  • Both intended use and foreseeable misuse
  • Direct and indirect impacts
  • Short-term and long-term impacts
  • Immediate and cumulative effects

e) Evaluate means to address impacts - Identify mitigation measures and residual impacts after mitigation
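The foreseeability dimensions in d) above combine multiplicatively: each impact can be examined under intended use or foreseeable misuse, via a direct or indirect pathway, over a short or long horizon. A small sketch (illustrative only) enumerates these assessment lenses:

```python
from itertools import product

# The three dimensions from item d) above
use_modes = ["intended use", "foreseeable misuse"]
pathways = ["direct", "indirect"]
horizons = ["short-term", "long-term"]

# Each combination is a distinct lens through which an impact
# should be considered; 2 x 2 x 2 = 8 scenarios per impact
scenarios = list(product(use_modes, pathways, horizons))
```

Enumerating the combinations explicitly helps reviewers confirm that, for example, long-term indirect effects of foreseeable misuse were not overlooked.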

Implementation Steps

  1. Identify affected individuals and groups - Determine who may be impacted by the AI system
  2. Engage stakeholders - Consult with representatives of affected groups (particularly important for vulnerable populations)
  3. Assess each impact type - Systematically evaluate potential impacts across all relevant categories
  4. Evaluate severity and likelihood - Determine significance of each impact
  5. Consider differential impacts - Assess whether impacts vary across different groups
  6. Identify mitigation measures - Determine actions to prevent, minimise, or address harms
  7. Document findings - Record assessment results per Control A.5.3
  8. Integrate with decision-making - Use impact findings to inform system design, deployment decisions, and ongoing management

Key Considerations

Participatory approaches: Engage affected individuals and communities in the assessment process. Those potentially impacted often have insights not apparent to developers or assessors.

Intersectionality: Individuals may belong to multiple groups (e.g., an elderly woman with a disability). Consider how impacts may be compounded for those at the intersection of multiple characteristics.

Power imbalances: Pay particular attention to impacts on individuals or groups with less power relative to the organisation deploying the AI (e.g., employees, service recipients in monopoly contexts).

Cultural context: Impacts may vary across cultural, geographic, or social contexts. Assess impacts specific to deployment environments.

Evolving understanding: Impacts may only become apparent over time. Continuous monitoring and reassessment are essential.

Beyond compliance: While assessing compliance with legal requirements is important, also consider broader ethical and social impacts beyond minimum legal obligations.

Related Controls

Within ISO/IEC 42001:

  • A.5.2 AI system impact assessment process
  • A.5.3 Documentation of AI system impact assessments
  • A.5.5 Assessing societal impacts
  • A.9.4 Human oversight
  • A.9.5 Information for individuals and groups

Related Standards:

  • ISO/IEC 42005:2025 Clause 6.7 (detailed guidance on assessing impacts on individuals and groups)
  • ISO/IEC TR 24027:2021 (bias in AI systems)