What are good Artificial Intelligence objectives?
Setting AI objectives that support responsible development, use, and continual improvement
Artificial Intelligence Management System (AIMS) objectives help organisations define what they want to achieve when governing, developing, using, and monitoring AI systems. Clear objectives demonstrate commitment to responsible AI practices and support continual improvement of AI governance, oversight, and risk management.
AIMS objectives should be measurable, achievable, and aligned with your AI Policy, identified AI risks, and organisational priorities. Objectives should consider both the responsible development of AI systems and their responsible use in operation. They should be reviewed regularly as part of management review.
It’s good practice to have separate Key Performance Indicators (KPIs) for your annual AIMS objectives. Separate KPIs let you measure the progress and effectiveness of each objective individually, so you can confirm you’re on track to achieve your goals. They also provide clarity and focus for your team, helping you identify areas that may need more attention or resources. Review this article on creating and tracking AI KPIs.
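For illustration only, here is a minimal Python sketch of how KPI progress against each objective might be tracked. The structure, field names, and example figures are hypothetical and not tied to any particular tool.

```python
from dataclasses import dataclass


@dataclass
class ObjectiveKPI:
    """One KPI attached to a single AIMS objective (illustrative fields only)."""
    objective: str   # the AIMS objective this KPI measures
    metric: str      # what is being counted
    target: float    # value that means the objective has been met
    current: float   # latest measured value

    def progress(self) -> float:
        """Progress towards the target as a percentage, capped at 100%."""
        if self.target == 0:
            return 100.0
        return min(100.0, 100.0 * self.current / self.target)


# Hypothetical example: one KPI per objective so each can be reviewed individually.
kpis = [
    ObjectiveKPI("Strengthen AI governance", "AI systems with documented oversight",
                 target=12, current=9),
    ObjectiveKPI("Increase AI competence", "staff completing AI awareness training",
                 target=40, current=31),
]

for kpi in kpis:
    print(f"{kpi.objective}: {kpi.progress():.0f}% of target "
          f"({kpi.current:.0f}/{kpi.target:.0f} {kpi.metric})")
```

Keeping one KPI per objective in a simple record like this makes it easier to see, at management review, which objectives are on track and which need more attention or resources.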
Examples of Artificial Intelligence Objectives:
Strengthen AI Governance and Oversight
Objective:
Ensure all AI systems have appropriate human oversight mechanisms defined and documented.
Plan of Action:
Review all AI systems to confirm oversight requirements are identified. Document oversight mechanisms and responsibilities in the AI Register and review them periodically.
Target:
100% of active AI systems have documented human oversight arrangements by the end of [the reporting period].
Why:
Clear oversight ensures accountability for AI decisions and reduces the risk of unintended or harmful outcomes. One way this target could be measured is sketched below.
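The sketch below is one possible way to measure the coverage target, assuming the AI Register can be exported as a CSV; the column names (system_name, status, oversight_documented) are hypothetical and should be adapted to however your register is actually structured.

```python
import csv


def oversight_coverage(register_path: str) -> float:
    """Percentage of active AI systems in the register with documented oversight.

    Assumes a CSV export with the hypothetical columns:
    system_name, status, oversight_documented
    """
    active, documented = 0, 0
    with open(register_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"].strip().lower() != "active":
                continue  # only active AI systems count towards the target
            active += 1
            if row["oversight_documented"].strip().lower() in ("yes", "true", "1"):
                documented += 1
    # Vacuously 100% if there are no active systems in the register.
    return 100.0 if active == 0 else 100.0 * documented / active


# Example usage against a hypothetical export of the AI Register:
# print(f"Oversight coverage: {oversight_coverage('ai_register.csv'):.0f}%")
```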
Improve AI Risk Assessment Coverage
Objective:
Ensure AI System Impact Assessments are completed for all AI systems prior to deployment or significant change.
Plan of Action:
Integrate AI System Impact Assessments into project initiation and change management processes. Provide guidance and training on when and how assessments must be completed.
Target:
AI System Impact Assessments completed for all new or materially changed AI systems.
Why:
Early identification of AI-related risks supports informed decision-making and effective risk treatment.
Increase Transparency of AI Use
Objective:
Improve transparency about how and when AI systems are used.
Plan of Action:
Identify user-facing AI systems and define appropriate transparency information for each. Make this information available through suitable communication channels.
Target:
Transparency information defined and available for all user-facing AI systems by year-end.
Why:
Transparency builds trust with users and interested parties and supports responsible use of AI systems.
Enhance Explainability of AI Outputs
Objective:
Ensure AI system outputs are explainable and traceable where required.
Plan of Action:
Define explainability requirements for relevant AI systems and document how outputs can be reviewed, traced, or challenged.
Target:
Explainability requirements documented for all high-impact AI systems within the next 12 months.
Why:
Explainability supports accountability, auditability, and the ability to review or contest AI-driven decisions.
Strengthen AI Lifecycle Controls
Objective:
Improve consistency of AI lifecycle management from design through retirement.
Plan of Action:
Define lifecycle stages for AI systems and ensure documentation and controls are applied at each stage in line with organisational policies.
Target:
Lifecycle documentation completed and maintained for all AI systems by the next management review.
Why:
Effective lifecycle controls help ensure AI systems remain safe, effective, and aligned with organisational objectives over time.
Improve Monitoring of AI Performance and Behaviour
Objective:
Enhance monitoring of AI system performance and potential degradation.
Plan of Action:
Identify monitoring requirements for AI systems and carry out regular reviews of performance, outputs, and anomalies.
Target:
Monitoring mechanisms defined and applied to all production AI systems within the next reporting cycle.
Why:
Ongoing monitoring helps detect drift, errors, or unexpected behaviour early; a simple drift check is sketched below.
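As a minimal sketch of what such monitoring could look like in practice, the example below uses SciPy's two-sample Kolmogorov-Smirnov test to flag when recent model output scores diverge from a stored baseline. The data, the p-value threshold, and the review cadence are illustrative assumptions, not recommendations for any specific system.

```python
import numpy as np
from scipy.stats import ks_2samp


def check_output_drift(baseline: np.ndarray, recent: np.ndarray,
                       p_threshold: float = 0.01) -> bool:
    """Return True if recent model outputs look significantly different from baseline.

    Uses a two-sample Kolmogorov-Smirnov test; the p-value threshold is an
    illustrative choice and should be set per system.
    """
    result = ks_2samp(baseline, recent)
    return result.pvalue < p_threshold


# Illustrative data: a baseline distribution of scores and a shifted recent sample.
rng = np.random.default_rng(0)
baseline_scores = rng.normal(loc=0.6, scale=0.1, size=1000)
recent_scores = rng.normal(loc=0.7, scale=0.1, size=200)  # shifted mean simulates drift

if check_output_drift(baseline_scores, recent_scores):
    print("Possible drift detected - raise for review under the monitoring process.")
else:
    print("No significant drift detected in this window.")
```

A check like this would normally run on a defined schedule, with any flagged result reviewed and, where appropriate, handled through the incident management process described in the next objective.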
Improve AI Incident Management
Objective:
Strengthen identification and response to AI-related incidents.
Plan of Action:
Ensure AI-related incidents are recognised, reported, and managed in line with the Incident Management Plan. Review incidents to identify trends and improvement opportunities.
Target:
All AI-related incidents identified and managed in accordance with the Incident Management Plan.
Why:
Effective incident handling reduces harm and supports continual improvement of AI systems and controls.
Increase AI Competence and Awareness
Objective:
Improve staff awareness and competence related to responsible AI use.
Plan of Action:
Identify staff involved in AI activities and provide appropriate training or awareness sessions. Monitor completion and refresh training as required.
Target:
All staff involved in AI activities complete AI governance or awareness training within the next 6 months.
Why:
Competent staff are better equipped to manage AI risks and use AI systems responsibly.