
ISO 42001:2023 - A.6.1.2 Objectives for Responsible Development of AI Systems

This article provides guidance on how to implement ISO 42001:2023 control A.6.1.2, Objectives for Responsible Development of AI Systems.

ISO 42001 Control Description

The organisation shall identify and document objectives to guide the responsible development of AI systems, and take those objectives into account and integrate measures to achieve them in the development life cycle.

Control Objective

To ensure that the organisation identifies and documents objectives and implements processes for the responsible design and development of AI systems.

Purpose

To establish clear, documented objectives that guide responsible AI system development and ensure these objectives are actively integrated throughout the development lifecycle through specific measures, not merely documented as aspirational statements. This control ensures responsible AI principles translate into concrete development practices.

Guidance on Implementation

Identifying Objectives for Responsible Development

The organisation should identify objectives (reference ISO/IEC 42001 Clause 6.2) that affect AI system design and development processes. These objectives should:

  • Align with organisational AI policy (Control A.2.2) and overall strategic direction

  • Reflect risk management priorities based on AI system impact assessments and risk assessments

  • Address trustworthy AI principles such as fairness, transparency, accountability, safety, privacy and security

  • Meet legal and regulatory requirements applicable to the AI system

  • Respond to stakeholder expectations from users, affected parties, customers and regulators

  • Support organisational values and ethical commitments

Examples of Responsible Development Objectives

Organisations may establish objectives such as:

  • Fairness: Minimise unwanted bias and ensure equitable treatment across demographic groups

  • Transparency: Enable explainability of AI system decisions and operations

  • Safety: Prevent physical, psychological, or economic harm to individuals

  • Privacy: Protect personal data and respect data subject rights

  • Security: Resist adversarial attacks and prevent unauthorised manipulation

  • Robustness: Maintain reliable performance across operational conditions

  • Accountability: Enable traceability of decisions and clear lines of responsibility

  • Sustainability: Minimise environmental impact of AI system operation

  • Human autonomy: Preserve human agency and oversight in decision-making

Integrating Objectives Throughout the Development Lifecycle

Objectives must be incorporated at every lifecycle stage, not treated as separate concerns. For example, if "fairness" is an objective:

Requirements specification stage:

  • Define fairness requirements specific to the application domain
  • Specify protected characteristics that must not lead to discrimination
  • Set acceptable bounds for performance variation across groups

Data acquisition stage:

  • Select data sources that provide diverse, representative samples
  • Document data provenance to assess potential bias sources
  • Acquire data that enables fairness testing

Data conditioning stage:

  • Apply bias detection and mitigation techniques
  • Ensure data labelling processes do not introduce bias
  • Balance datasets where appropriate
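The balancing step above can be sketched in code. This is a minimal illustration of rebalancing by oversampling under-represented groups, assuming each record carries an explicit group label; the record structure, field names and choice of oversampling are assumptions for the example, and oversampling is only one of several balancing techniques:

```python
import random
from collections import defaultdict

def balance_by_group(records, group_key, seed=0):
    """Oversample smaller groups so every group appears equally often.

    `records` is a list of dicts; `group_key` names the field holding the
    demographic group label. Returns a new, balanced list of records.
    """
    rng = random.Random(seed)
    groups = defaultdict(list)
    for rec in records:
        groups[rec[group_key]].append(rec)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Sample with replacement until the group reaches the target size
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

data = [{"group": "A"}] * 6 + [{"group": "B"}] * 2
balanced = balance_by_group(data, "group")
```

Whether balancing is appropriate at all depends on the application domain and should itself be a documented decision.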

Model training stage:

  • Use algorithms that support fairness objectives (e.g., fairness-aware machine learning)
  • Apply constraints during training to limit disparate outcomes
  • Monitor fairness metrics during training

Verification and validation stage:

  • Test model for disparate impact across demographic groups
  • Validate fairness using appropriate metrics for the context
  • Test for fairness under various operational conditions
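Two of the fairness metrics this article refers to (demographic parity and equal opportunity) can be computed directly from model predictions. A minimal sketch, assuming binary predictions and a single protected attribute; the function names and toy data are illustrative:

```python
def demographic_parity_difference(groups, y_pred):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())

def equal_opportunity_gap(groups, y_true, y_pred):
    """Largest gap in true-positive rate between any two groups."""
    tprs = {}
    for g in set(groups):
        pos = [i for i, grp in enumerate(groups) if grp == g and y_true[i] == 1]
        tprs[g] = sum(y_pred[i] for i in pos) / len(pos)
    return max(tprs.values()) - min(tprs.values())

groups = ["A", "A", "A", "B", "B", "B"]
y_true = [1, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0]
dpd = demographic_parity_difference(groups, y_pred)
eog = equal_opportunity_gap(groups, y_true, y_pred)
```

In practice, libraries such as Fairlearn or AI Fairness 360 (both named below under "Providing Requirements and Guidelines") provide vetted implementations of these and many other metrics; which metric is appropriate depends on the context and should be specified in the requirements.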

Deployment stage:

  • Implement monitoring for fairness degradation
  • Establish thresholds triggering alerts or intervention
  • Provide transparency about fairness considerations to users
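The threshold-and-alert idea above can be sketched as a simple monitoring check. The metric names and limits below are illustrative assumptions, not values prescribed by the standard:

```python
# Illustrative limits agreed during requirements specification
THRESHOLDS = {
    "demographic_parity_difference": 0.10,
    "error_rate_difference": 0.05,
}

def breached(metrics, thresholds=THRESHOLDS):
    """Return the monitored metrics whose current value exceeds its limit."""
    return {name: value for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]}

alerts = breached({"demographic_parity_difference": 0.04,
                   "error_rate_difference": 0.07})
```

Any non-empty result would trigger the alerting or intervention process defined for the system, for example escalation to the role accountable for the fairness objective.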

Operation and monitoring stage:

  • Continuously monitor fairness metrics
  • Investigate fairness concerns reported by users
  • Retrain or adjust when fairness degrades

Providing Requirements and Guidelines

Organisations should translate high-level objectives into specific requirements and guidelines that development teams can follow. This includes:

  • Specifying methods and tools: For a fairness objective, specify which fairness testing tools must be used (e.g., AI Fairness 360, Fairlearn)

  • Defining metrics: Specify which fairness metrics apply (e.g., demographic parity, equalised odds, equal opportunity)

  • Setting thresholds: Define acceptable levels (e.g., "error rate difference between demographic groups shall not exceed 5%")

  • Prescribing processes: Document procedures for addressing objective-related issues when they arise

  • Assigning responsibilities: Clarify who is accountable for achieving objectives at each stage (link to Control A.3.2)

  • Providing training: Ensure development personnel understand objectives and how to achieve them (link to Control A.4.7)
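One way to make such requirements concrete and auditable is to record each objective in a structured form that both development teams and reviewers can read. A hypothetical sketch; the field names, owner role and review stages are assumptions for illustration, not a format defined by the standard:

```python
# Hypothetical record translating a high-level objective into
# concrete, reviewable development requirements.
FAIRNESS_REQUIREMENTS = {
    "objective": "fairness",
    "metrics": ["demographic_parity", "equalised_odds"],
    "threshold": "error rate difference between demographic groups <= 5%",
    "tools": ["Fairlearn"],          # tooling named in the guidance above
    "owner": "ML development lead",  # assumed role; see Control A.3.2
    "review_stages": ["verification", "pre-deployment"],
}
```

Keeping such records under version control alongside the system's other design documentation also produces the evidence trail called for under "Documentation and evidence" below.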

Implementation Steps

Organisations should:

  1. Review organisational AI objectives established under Clause 6.2 (AI objectives and planning to achieve them)
  2. Identify development-specific objectives relevant to the particular AI system being developed, considering its risk level and application domain
  3. Document objectives clearly in language understandable to development teams, making them measurable where practical
  4. Translate objectives into lifecycle requirements specifying what must be done at each development stage to achieve objectives
  5. Specify methods, tools, and metrics to be used for achieving and measuring objectives
  6. Communicate objectives to development teams and all relevant stakeholders involved in development
  7. Integrate into development processes (see Control A.6.1.3) ensuring objectives are embedded in standard procedures
  8. Provide training and support to ensure teams have competence to implement objective-related measures
  9. Monitor compliance throughout development through reviews, checkpoints, and audits
  10. Verify achievement of objectives before proceeding to deployment (link to Control A.6.2.4)

Key Considerations

a) Measurability: Where practical, define objectives in measurable terms to enable verification. However, recognise some objectives (e.g., human dignity) may be qualitative.

b) Trade-offs: Some objectives may conflict (e.g., transparency may reduce performance; fairness across different groups may require different treatment). Document how trade-offs are evaluated and resolved.

c) Context-specificity: Objectives should reflect the specific AI system's risk level, application domain, and deployment context. High-risk systems require more stringent objectives.

d) Lifecycle continuity: Objectives are not "checked off" at one stage. They require continuous attention throughout development and into operation.

e) Resource implications: Achieving responsible development objectives requires resources: competent personnel, appropriate tools and sufficient time. Plan accordingly.

f) Documentation and evidence: Maintain records demonstrating how objectives influenced development decisions and what measures were taken to achieve them.

g) Stakeholder engagement: Engage affected parties and domain experts when defining objectives to ensure they address real concerns.

Related Controls

Within ISO/IEC 42001:

  • A.2.2 AI policy
  • A.6.1.3 Processes for responsible AI system design and development
  • A.6.2.2 AI system requirements and specification
  • A.6.2.4 AI system verification and validation
  • A.9.3 Objectives for responsible use of AI systems
  • Clause 6.2 AI objectives and planning to achieve them
  • Annex C Examples of organisational objectives for managing risk

Related Standards:

  • ISO/IEC 5338 AI system lifecycle processes