ISO/IEC 42001:2023 - A.2.2 AI Policy
This article provides guidance on implementing ISO/IEC 42001:2023 control A.2.2, AI policy.
ISO 42001 Control Description
The organization shall document a policy for the development or use of AI systems.
Control Objective
To provide management direction and support for AI systems according to business requirements.
Purpose
To establish clear, documented management direction on how AI systems should be developed, deployed, and used within the organization. The AI policy provides the foundation for all AI-related activities, demonstrates top management commitment, and ensures alignment between AI initiatives and organizational strategy, legal obligations, and stakeholder expectations.
Guidance on Implementation
What Should Inform the AI Policy
The AI policy should be informed by:
a) Business strategy - Alignment with organizational objectives and strategic direction
b) Organizational values and culture - Including risk appetite and tolerance levels for AI-related risks
c) Risk environment - Level of risk posed by the AI systems being developed or used
d) Legal requirements - Applicable regulations, statutes, contracts, and compliance obligations
e) Impact on interested parties - Considerations from AI system impact assessments (ISO/IEC 42001 Clause 6.1.4)
What the AI Policy Should Include
In addition to requirements in ISO/IEC 42001 Clause 5.2, the AI policy should include:
a) Guiding principles for all AI-related activities, such as:
- Commitment to human oversight and accountability
- Fairness and non-discrimination
- Transparency and explainability appropriate to context
- Privacy and data protection
- Safety and security
- Reliability and accuracy
- Societal and environmental wellbeing
b) Definition and scope of what constitutes an AI system within the organization (reference ISO/IEC 22989 definitions)
c) AI objectives or framework for setting objectives (aligned with Clause 6.2)
d) Commitment to meeting applicable AI-related requirements (legal, regulatory, contractual, ethical)
e) Commitment to continual improvement of the AI management system
f) Assignment of AI-related roles and responsibilities at appropriate levels
g) Processes for handling deviations and exceptions to the policy
h) Need to perform AI system impact assessments as required by Clause 6.1.4
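The required elements listed above can be tracked as a simple machine-readable checklist when reviewing a draft policy. The sketch below is hypothetical - the element identifiers and the completeness check are illustrative conventions, not part of the standard - and shows one way to confirm a draft covers each item before it goes to top management for approval.

```python
# Hypothetical checklist for verifying draft AI policy coverage.
# The element identifiers mirror items a)-h) above; they are not
# defined by ISO/IEC 42001 itself.

REQUIRED_ELEMENTS = {
    "guiding_principles",        # a) oversight, fairness, transparency, ...
    "ai_system_definition",      # b) scope of "AI system" (ISO/IEC 22989)
    "ai_objectives",             # c) objectives or framework (Clause 6.2)
    "compliance_commitment",     # d) legal, regulatory, contractual, ethical
    "continual_improvement",     # e) of the AI management system
    "roles_responsibilities",    # f) assigned at appropriate levels
    "deviation_handling",        # g) exceptions to the policy
    "impact_assessments",        # h) per Clause 6.1.4
}

def missing_elements(covered: set[str]) -> set[str]:
    """Return required elements not yet covered by the draft policy."""
    return REQUIRED_ELEMENTS - covered

# Example: a draft covering only three of the eight elements.
draft = {"guiding_principles", "compliance_commitment", "impact_assessments"}
print(sorted(missing_elements(draft)))
```

A register like this can also double as evidence of review during an audit, since it records which policy sections were checked against which requirement.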
Topic-Specific Aspects to Address
The AI policy should consider topic-specific aspects or provide cross-references to other policies covering:
- AI resources and assets - How resources for AI systems are managed
- AI system impact assessments - Process and requirements (Clause 6.1.4)
- AI system development - Approach to responsible development and procurement
- Data governance - How data for AI systems is acquired, managed, and disposed of
- Model governance - How AI models are developed, validated, and monitored
- Third-party AI - Use of externally developed AI systems and services
Implementation Steps
Organizations should:
- Engage stakeholders - Involve technical teams, legal, compliance, business units, and relevant interested parties in policy development
- Consider organizational context - Review AI use cases, risk profile, regulatory environment, and strategic objectives
- Align with existing policies - Ensure consistency with information security policy, data protection policy, quality policy, and other relevant organizational policies (see Control A.2.3)
- Obtain top management approval - The AI policy must be approved by top management per Clause 5.2
- Communicate effectively - Make the policy accessible to all relevant personnel and interested parties in formats they can understand
- Establish review process - Define review intervals and triggers (see Control A.2.4)
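As one illustration of the "establish review process" step, the sketch below computes whether a policy review is due, either because the scheduled interval has elapsed or because a defined trigger event occurred. The interval default and the trigger names are assumptions for illustration, not values taken from the standard (Control A.2.4 requires the review itself but does not prescribe these specifics).

```python
from datetime import date, timedelta

# Hypothetical trigger events that force an out-of-cycle review
# (the review requirement itself comes from Control A.2.4).
TRIGGER_EVENTS = {"new_regulation", "major_incident", "new_ai_use_case"}

def review_due(last_review: date, today: date,
               interval_days: int = 365,
               events: set[str] = frozenset()) -> bool:
    """True if the review interval has elapsed or a trigger event occurred."""
    overdue = today - last_review >= timedelta(days=interval_days)
    triggered = bool(events & TRIGGER_EVENTS)
    return overdue or triggered

# Annual interval elapsed since the last review:
print(review_due(date(2024, 1, 15), date(2025, 3, 1)))
# Interval not elapsed, but a trigger event occurred:
print(review_due(date(2025, 1, 15), date(2025, 3, 1),
                 events={"new_regulation"}))
```

Keeping interval and trigger definitions explicit like this makes the review policy itself auditable, rather than relying on calendar reminders alone.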
Scope Considerations
The policy should clearly address:
- Which systems are in scope - Clear criteria for what falls under "AI systems"
- Organizational roles - Whether the organization acts as AI developer, provider, user, or a combination (see ISO/IEC 22989 Section 5.19)
- Lifecycle coverage - The policy applies throughout the AI system lifecycle (inception to retirement)
- Risk-based approach - Higher-risk AI systems may require more detailed policy provisions
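The risk-based approach above can be made concrete with a simple tiering rule that maps coarse risk factors to a level of policy detail. The factors, scoring, and tier names below are hypothetical examples of such a rule, not requirements of the standard.

```python
# Hypothetical risk-tiering rule for deciding how detailed the
# policy provisions for a given AI system should be.

def policy_tier(affects_individuals: bool,
                automated_decisions: bool,
                externally_sourced: bool) -> str:
    """Map coarse risk factors to a policy-detail tier (illustrative only)."""
    score = sum([affects_individuals, automated_decisions, externally_sourced])
    if score >= 2:
        return "detailed"   # dedicated provisions; impact assessment expected
    if score == 1:
        return "standard"
    return "baseline"

# Example: a system making automated decisions about individuals.
print(policy_tier(affects_individuals=True,
                  automated_decisions=True,
                  externally_sourced=False))
```

In practice the factors would come from the organization's AI system inventory and impact assessments (Clause 6.1.4), and the tiers would point to specific policy sections.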
Key Considerations
Policy development: Avoid creating an AI policy in isolation - ensure it is informed by actual or planned AI activities and reflects genuine organizational needs, rather than serving as a compliance box-ticking exercise.
Integration: The AI policy should integrate with and reference the organization's broader policy framework, not exist as a standalone document disconnected from other management systems.
Clarity: Use clear language that is understandable to all relevant stakeholders, not just AI specialists. The policy should be actionable and provide meaningful direction.
Confidentiality: If the AI policy is shared externally with partners, suppliers, or customers, ensure that confidential information about specific AI systems or competitive advantages is not disclosed.
Related Controls
Within ISO/IEC 42001:
- A.2.3 Alignment with other organizational policies
- A.2.4 Review of the AI policy
- A.3.2 AI roles and responsibilities
- A.5.2 AI system impact assessment process
Integration with ISO 27001 (if applicable):
- A.5.1 Policies for information security