ISO/IEC 42001:2023 - A.6.1.3 Processes for Responsible AI System Design and Development
This article provides guidance on how to implement ISO/IEC 42001:2023 control A.6.1.3, Processes for Responsible AI System Design and Development.
ISO/IEC 42001 Control Description
The organisation shall define and document the specific processes for the responsible design and development of the AI system.
Control Objective
To ensure that the organisation identifies and documents objectives and implements processes for the responsible design and development of AI systems.
Purpose
To establish documented, repeatable processes that operationalise responsible AI development, ensuring consistency across projects, enabling accountability, and systematically integrating responsible AI objectives into all design and development activities.
Guidance on Implementation
Defining Responsible Development Processes
Organisations should define and document comprehensive processes covering all aspects of responsible AI system design and development. These processes should include consideration of:
- Lifecycle stages: Specify which lifecycle stages apply to the organisation's AI systems. Organisations can use the lifecycle model from ISO/IEC 22989, ISO/IEC 5338, or define their own stages appropriate to their context. Typical stages include: inception, design and development, verification and validation, deployment, operation and monitoring, continuous validation, re-evaluation, and retirement.
- Testing requirements and planned means: Define what testing is required at each stage, including functional testing, performance testing, fairness testing, security testing, safety testing, and robustness testing. Specify testing methodologies, tools, test data requirements, and acceptance criteria.
- Human oversight requirements: Establish processes and tools for human oversight, especially when AI systems can impact natural persons. Define oversight mechanisms (human-in-the-loop, human-on-the-loop), decision escalation procedures, and override capabilities.
- AI system impact assessments: Specify at which stages impact assessments should be performed (link to Control A.5.2). Typically, impact assessments occur during inception, before deployment, and periodically during operation. Define triggers for reassessment (significant changes, incidents, regulatory changes).
- Training data expectations and rules: Establish rules governing data usage, including what data can be used, approved data suppliers, data quality requirements, labelling standards and processes, data retention and disposal, and prohibited data sources.
- Expertise requirements: Define the subject-matter domain knowledge, technical AI expertise, or training required for developers. Specify competence requirements for different roles (link to Control A.4.6).
- Release criteria: Establish conditions that must be met before the AI system can proceed to the next lifecycle stage or be deployed. Release criteria should address functional requirements, performance thresholds, fairness metrics, security assessments, documentation completeness, and approval sign-offs (a minimal sketch of a machine-checkable release gate follows this list).
- Approvals and sign-offs: Designate who must approve at various stages. This might include technical leads, risk managers, legal counsel, ethics reviewers, senior management, or external parties.
- Change control: Define processes for managing changes to AI systems, including change request procedures, impact assessment for changes, testing requirements for changes, approval workflows, and documentation of changes.
- Usability and controllability: Establish processes for ensuring AI systems are usable and controllable by intended users, including user interface design, user testing, accessibility considerations, and control mechanisms.
- Engagement of interested parties: Define when and how to engage stakeholders, including users, affected parties, domain experts, regulators, and civil society organisations. Specify engagement methods such as consultation, participatory design, feedback mechanisms, or advisory boards.
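Where release criteria are quantitative, they can be encoded as a machine-checkable gate so that stage-transition decisions are reproducible and auditable. The sketch below is a minimal illustration in Python, not a prescribed implementation: the metric names, thresholds, and the `Criterion`/`evaluate_release` helpers are hypothetical placeholders for an organisation's own release criteria.

```python
"""Minimal sketch of a machine-checkable release gate.

The metrics and thresholds below are hypothetical examples, not
requirements from ISO/IEC 42001; substitute the organisation's own
release criteria.
"""
from __future__ import annotations
from dataclasses import dataclass


@dataclass(frozen=True)
class Criterion:
    name: str
    threshold: float
    higher_is_better: bool = True

    def passes(self, value: float) -> bool:
        return value >= self.threshold if self.higher_is_better else value <= self.threshold


# Illustrative criteria spanning function, fairness, and robustness.
RELEASE_CRITERIA = [
    Criterion("test_accuracy", 0.90),
    Criterion("demographic_parity_gap", 0.05, higher_is_better=False),
    Criterion("adversarial_robustness", 0.75),
]


def evaluate_release(measured: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (approved, failures); a missing metric blocks release."""
    failures = []
    for c in RELEASE_CRITERIA:
        value = measured.get(c.name)
        if value is None or not c.passes(value):
            failures.append(f"{c.name}: measured={value}, threshold={c.threshold}")
    return (not failures, failures)


if __name__ == "__main__":
    approved, failures = evaluate_release(
        {"test_accuracy": 0.93, "demographic_parity_gap": 0.08,
         "adversarial_robustness": 0.81}
    )
    print("approved" if approved else f"blocked: {failures}")
```

In this example the gate blocks release because the fairness gap exceeds its threshold, which is exactly the evidence an approver would review before sign-off.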
Process Documentation Components
For each defined process, documentation should include:
a) Process description: Step-by-step procedures explaining what activities are performed, in what sequence
b) Roles and responsibilities: Who performs each activity, who reviews, who approves (link to Control A.3.2)
c) Inputs and outputs: What information, data, or artifacts are required as inputs; what is produced as outputs
d) Tools and methods: Specific tools, techniques, frameworks, or methodologies to be used
e) Quality gates and decision points: Criteria for proceeding to next steps, decision-making authority, escalation procedures
f) Documentation requirements: What must be documented during the process, in what format, where it is stored
g) Review and approval mechanisms: How work products are reviewed, by whom, approval criteria
h) Integration with other processes: How the process connects with risk management, impact assessment, quality assurance, security, compliance
i) Metrics and monitoring: How process effectiveness is measured, key performance indicators
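To keep these components consistent across processes, they can be captured in a structured, version-controlled record. The following is a minimal sketch, assuming Python; the `ProcessDefinition` schema and its field names are hypothetical and simply mirror components a) to i) above.

```python
"""Hypothetical schema mirroring documentation components a) to i)."""
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class ProcessDefinition:
    name: str
    description: list[str]        # a) step-by-step activities, in sequence
    roles: dict[str, str]         # b) activity -> responsible role
    inputs: list[str]             # c) required information, data, artifacts
    outputs: list[str]            # c) artifacts produced
    tools_and_methods: list[str]  # d) tools, techniques, frameworks
    quality_gates: list[str]      # e) criteria and decision points
    records: list[str]            # f) what is documented, and where
    review_and_approval: str      # g) review mechanism and approver
    integrations: list[str]       # h) linked processes (risk, QA, security)
    metrics: list[str]            # i) effectiveness indicators

    def validate(self) -> list[str]:
        """Flag empty components so incomplete definitions surface in review."""
        return [
            f"missing component: {name}"
            for name in ("description", "roles", "quality_gates", "metrics")
            if not getattr(self, name)
        ]
```

A record like this can live alongside the systems it governs and be checked automatically, so an incompletely documented process fails fast rather than surfacing in an audit.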
Process Variation by AI System Type
Processes may vary depending on:
- Functionality of the AI system (e.g., computer vision vs. natural language processing vs. predictive analytics)
- AI technologies used (e.g., deep learning vs. classical machine learning vs. symbolic AI)
- Risk level of the AI system (high-risk systems require more rigorous processes)
- Organisational role (AI developer vs. AI provider vs. AI deployer/user)
- Deployment context (safety-critical vs. non-critical applications)
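One common way to operationalise this variation is a tiering table that maps risk level to mandatory process activities. The sketch below is purely illustrative, assuming Python; the tier names and activity lists are hypothetical examples, since each organisation defines its own tiering scheme.

```python
"""Illustrative risk-tier to process-rigour mapping; tiers and
activities are hypothetical examples."""
from __future__ import annotations

REQUIRED_ACTIVITIES: dict[str, list[str]] = {
    "low": ["functional testing", "peer review"],
    "medium": ["functional testing", "peer review",
               "impact assessment", "fairness testing"],
    "high": ["functional testing", "independent review",
             "impact assessment", "fairness testing",
             "security red-teaming", "senior management sign-off"],
}


def activities_for(risk_tier: str) -> list[str]:
    """Return the activities a system of this tier must complete."""
    try:
        return REQUIRED_ACTIVITIES[risk_tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {risk_tier!r}") from None
```

Encoding the mapping as data rather than branching logic keeps the tiering scheme itself reviewable and easy to update as systems or regulations change.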
Implementation Steps
Organisations should:
- Assess organisational context: Determine which processes are needed based on the types of AI systems developed or used and the organisation's role
- Define a process framework: Establish the overall process structure, aligned with the adopted lifecycle model
- Detail specific processes: Document procedures for each lifecycle stage and each cross-cutting process (testing, change control, etc.)
- Specify process integration: Define how AI development processes link to risk management (Control A.6.1), impact assessment (Control A.5.2), quality assurance, and operational processes
- Assign process ownership: Designate responsible persons or roles for each process
- Develop supporting documentation: Create templates, checklists, and guidelines that support process execution
- Provide process training: Ensure development teams understand and can follow the documented processes
- Implement supporting tooling: Provide the tools needed to execute processes efficiently (development environments, testing tools, documentation systems)
- Monitor process compliance: Track adherence to defined processes through reviews, audits, and metrics
- Review and improve processes: Periodically assess process effectiveness, identify improvement opportunities, and update processes based on lessons learned
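The monitoring step above lends itself to a simple quantitative indicator: the share of AI projects whose required activities are all evidenced. A minimal sketch follows, assuming Python; the project-record format is hypothetical.

```python
"""Minimal process-compliance metric over hypothetical project records."""
from __future__ import annotations


def compliance_rate(projects: list[dict]) -> float:
    """A project is compliant when every required activity has evidence."""
    if not projects:
        return 1.0
    compliant = sum(
        1 for p in projects
        if set(p["required"]) <= set(p["evidenced"])
    )
    return compliant / len(projects)


# Example: the second project lacks evidence for its impact assessment.
projects = [
    {"required": ["testing", "impact assessment"],
     "evidenced": ["testing", "impact assessment"]},
    {"required": ["testing", "impact assessment"],
     "evidenced": ["testing"]},
]
print(f"process compliance: {compliance_rate(projects):.0%}")  # -> 50%
```

Tracking such an indicator over time, alongside reviews and audits, shows whether defined processes are actually followed and feeds the review-and-improve step.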
Key Considerations
Flexibility vs. rigour: Processes should provide sufficient control and consistency while allowing flexibility for the iterative, experimental nature of AI development. Avoid overly rigid processes that stifle innovation.
Process maturity: Tailor process formality to organisational maturity level and AI system risk. Early-stage organisations or low-risk systems may use lighter-weight processes; mature organisations or high-risk systems require more formal processes.
Alignment with objectives: Processes must enable achievement of responsible development objectives (Control A.6.1.2). Each process should clearly link to one or more objectives.
Integration, not isolation: AI development processes should integrate with existing organisational processes (project management, quality management, security) rather than operating in isolation.
Documentation maintenance: Processes are living documents that should evolve as the organisation learns and as AI technology advances. Establish regular review cycles.
Stakeholder input: Engage development teams, users, and other stakeholders when defining processes to ensure they are practical and effective.
Standards alignment: Consider alignment with ISO/IEC 5338 life cycle processes, ISO 9241-210 human-centred design, and other relevant standards.
Related Controls
Within ISO/IEC 42001:
- A.6.1.2 Objectives for responsible development of AI system
- A.6.2.2 AI system requirements and specification
- A.6.2.3 Documentation of AI system design and development
- A.6.2.4 AI system verification and validation
- A.6.2.5 AI system deployment
- A.5.2 AI system impact assessment process
- Clause 8.1 Operational planning and control
Related Standards:
- ISO/IEC 5338 AI system life cycle processes
- ISO 9241-210 Human-centred design for interactive systems