ISO/IEC 42001:2023 - A.5.2 AI System Impact Assessment Process

This article provides guidance on how to implement the ISO/IEC 42001:2023 A.5.2 AI System Impact Assessment Process.

ISO 42001 Control Description

The organisation shall establish a process to assess the potential consequences for individuals or groups of individuals, or both, and societies that can result from the AI system throughout its life cycle.

Control Objective

To assess AI system impacts on individuals or groups of individuals, or both, and societies affected by the AI system throughout its life cycle.

Purpose

To establish a systematic approach for identifying and evaluating how AI systems may affect people and society. Impact assessment enables organisations to understand potential harms and benefits, make informed decisions about AI deployment, implement mitigation measures, and demonstrate responsible AI practices.

Guidance on Implementation

Establishing the Process

The organisation should develop a structured, consistent approach for performing AI system impact assessments. The process should be documented and address:

  1. When assessments are performed - Triggers and timing (early in development, before deployment, during significant changes)
  2. Who performs assessments - Roles and responsibilities (reference ISO/IEC 42005 Clause 5.6)
  3. How assessments are conducted - Methodology and steps
  4. What is assessed - Scope and coverage (individuals, groups, societies)
  5. How results are used - Integration with decision-making and risk management
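
As an illustration, the five points above can be captured in a machine-readable process definition. The Python sketch below is a minimal, hypothetical example; the class, field names, and enumeration values are assumptions for illustration, not terms defined in ISO/IEC 42001 or ISO/IEC 42005.

```python
from dataclasses import dataclass
from enum import Enum


class Trigger(Enum):
    """When an assessment is performed (point 1 above)."""
    DESIGN = "early in development"
    PRE_DEPLOYMENT = "before deployment"
    SIGNIFICANT_CHANGE = "significant change to the system"


@dataclass
class ImpactAssessmentProcess:
    """Hypothetical documented process definition covering points 1-5."""
    triggers: list[Trigger]          # 1. when assessments are performed
    assessor_roles: list[str]        # 2. who performs them (cf. ISO/IEC 42005 Clause 5.6)
    methodology: str                 # 3. how assessments are conducted
    scope: list[str]                 # 4. what is assessed
    feeds_into: list[str]            # 5. how results are used


# Example instantiation for a single AI system
process = ImpactAssessmentProcess(
    triggers=list(Trigger),
    assessor_roles=["AI governance lead", "domain specialist"],
    methodology="structured questionnaire plus stakeholder interviews",
    scope=["individuals", "groups of individuals", "societies"],
    feeds_into=["AI risk assessment (Clause 6.1.2)", "deployment approval"],
)
```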

Process Elements (ISO/IEC 42005 Clause 5)

The impact assessment process should include:

a) Integration with organisational processes
  • Link with risk management (Clause 6.1.2)
  • Align with AI policy and objectives
  • Coordinate with existing impact assessment processes such as the Data Protection Impact Assessment (DPIA), Equality Impact Assessment (EQIA), and Social Impact Assessment (SIA)
b) Timing of assessments
  • Early stage (conception/design)
  • Pre-deployment
  • Post-deployment (ongoing monitoring)
  • When significant changes occur
c) Scope definition
  • Which AI systems require assessment
  • Geographic and demographic scope
  • Lifecycle stages covered
d) Responsibility allocation
  • Who conducts assessments
  • Who reviews and approves
  • Who implements mitigation measures
e) Thresholds and triggers
  • Criteria for high-risk AI systems
  • Sensitive use cases requiring enhanced assessment
  • Impact severity scales (see the scoring sketch after this list)
f) Performing the assessment
  • Identify affected stakeholders
  • Assess reasonably foreseeable impacts (positive and negative)
  • Consider intended use and foreseeable misuse
  • Evaluate severity and likelihood
g) Analysing results
  • Determine significance of impacts
  • Prioritise mitigation needs
  • Identify residual impacts
h) Recording and reporting
  • Document assessment results (Control A.5.3)
  • Report to management and relevant stakeholders
  • Maintain transparency while protecting confidentiality
i) Approval process
  • Define approval requirements
  • Specify approval authority
  • Document approval decisions
j) Monitoring and review
  • Periodic reassessment
  • Trigger-based reviews (incidents, changes, complaints)
  • Continuous monitoring of impacts
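
To make points e) to g) concrete, the following sketch scores each identified impact on severity and likelihood and flags those crossing an enhanced-assessment threshold. The scales, the multiplicative scoring, and the threshold value are illustrative assumptions; the standards leave these choices to the organisation.

```python
from dataclasses import dataclass

# Illustrative ordinal scales; an organisation would define its own (point e).
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}

ENHANCED_ASSESSMENT_THRESHOLD = 12  # assumed trigger for enhanced assessment


@dataclass
class Impact:
    description: str
    affected_stakeholders: list[str]   # point f: identify affected stakeholders
    severity: str
    likelihood: str
    positive: bool = False             # foreseeable impacts may be positive or negative

    def score(self) -> int:
        """Point f: evaluate severity and likelihood (simple product here)."""
        return SEVERITY[self.severity] * LIKELIHOOD[self.likelihood]

    def needs_enhanced_assessment(self) -> bool:
        """Point g: determine significance and prioritise mitigation."""
        return (not self.positive) and self.score() >= ENHANCED_ASSESSMENT_THRESHOLD


impact = Impact(
    description="Biased credit-scoring outcomes for a protected group",
    affected_stakeholders=["loan applicants"],
    severity="major",
    likelihood="possible",
)
print(impact.score(), impact.needs_enhanced_assessment())  # 12 True
```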

Integration with Risk Management

Impact assessments should inform AI risk assessment (ISO/IEC 42001 Clause 6.1.2) and AI risk treatment (Clause 6.1.3). Identified impacts help determine:

  • What risks exist
  • Severity of risks
  • Appropriate risk treatment measures
  • Monitoring requirements

ISO/IEC 23894 describes how impact analysis integrates with risk management processes.
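
A minimal sketch of this hand-off, under the assumption that each significant finding seeds a candidate entry in an AI risk register: the RiskEntry fields and helper below are hypothetical, not an interface defined by either standard.

```python
from dataclasses import dataclass


@dataclass
class RiskEntry:
    """Hypothetical AI risk register entry seeded from an impact assessment finding."""
    source_impact: str
    risk_score: int
    treatment: str           # selected during risk treatment (Clause 6.1.3)
    monitoring: str          # monitoring requirement


def to_risk_entry(description: str, score: int) -> RiskEntry:
    """Hand an identified impact over to AI risk assessment (Clause 6.1.2)."""
    return RiskEntry(
        source_impact=description,
        risk_score=score,
        treatment="to be selected during risk treatment",
        monitoring="review at each lifecycle stage gate",
    )


entry = to_risk_entry("Biased credit-scoring outcomes for a protected group", 12)
```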

Stakeholder Engagement

The process should include engagement with:

  • Affected individuals and communities
  • Subject matter experts
  • Ethics advisors
  • Legal and compliance teams
  • Domain specialists

Implementation Steps

Organisations should:

  1. Document the process - Create procedural guidance, templates, and tools (ISO/IEC 42005 Clause 5.2)
  2. Define methodology - Specify assessment techniques (interviews, surveys, modelling, participatory design)
  3. Assign responsibilities - Designate who conducts, reviews, and approves assessments
  4. Establish timing - Define when assessments occur in the AI system lifecycle
  5. Create templates - Develop standardised documentation formats (ISO/IEC 42005 Annex E provides an example template; a minimal sketch follows this list)
  6. Train personnel - Ensure assessors have necessary competence
  7. Integrate with governance - Link to AI policy, objectives, and risk management
  8. Pilot and refine - Test the process on initial AI systems and improve based on lessons learned
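
As a starting point for step 5, an assessment record can be a simple structured type. The sketch below is a hypothetical minimal template, not the ISO/IEC 42005 Annex E template; all field names are assumptions.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AssessmentRecord:
    """Hypothetical minimal record; supports later documentation under Control A.5.3."""
    ai_system: str
    assessment_date: date
    lifecycle_stage: str                 # design, pre-deployment, post-deployment, change
    assessors: list[str]
    impacts: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    residual_impacts: list[str] = field(default_factory=list)
    approved_by: str = ""                # approval authority sign-off
    next_review: date | None = None      # periodic reassessment date
```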

Key Considerations

Proportionality: The depth and rigour of impact assessment should be proportionate to the risk level and potential impacts of the AI system. High-risk systems require more comprehensive assessment.
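
One illustrative way to operationalise proportionality is a tiering function keyed to the highest impact score found for a system; the three tiers and cut-off values below are assumptions, not requirements from the standard.

```python
def assessment_tier(max_impact_score: int) -> str:
    """Map a system's highest impact score to an assessment depth (illustrative cut-offs)."""
    if max_impact_score >= 15:
        return "comprehensive"   # high-risk: full multidisciplinary assessment
    if max_impact_score >= 8:
        return "standard"
    return "light-touch"         # low-risk: streamlined checklist


print(assessment_tier(12))  # standard
```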

Lifecycle application: Assessments are not one-time activities. They should occur throughout the AI system lifecycle as understanding of impacts evolves.

Existing processes: Organisations may leverage existing impact assessment processes (DPIA, EQIA, SIA) if they adequately address AI-specific considerations.

Multidisciplinary approach: Effective impact assessment requires diverse perspectives - technical, ethical, legal, social, domain-specific.

Regulatory alignment: Some jurisdictions require impact assessments for AI systems (e.g., EU AI Act Article 27). Ensure the process meets applicable legal requirements.

Related Controls

Within ISO/IEC 42001:

  • A.5.3 Documentation of AI system impact assessments
  • A.5.4 Assessing impact on individuals and groups
  • A.5.5 Assessing societal impacts
  • Clause 6.1.4 AI system impact assessment
  • Clause 8.4 AI system impact assessment (operational requirement)

Related Standards:

  • ISO/IEC 42005:2025 (comprehensive guidance on impact assessment)
  • ISO/IEC 23894:2023 (risk management integration)