Closed consultation

AI Management Essentials tool (accessible)

Published 6 November 2024

Internal processes

1. AI system record

We maintain a complete and up-to-date record of all the AI systems our organisation develops and uses.

1.1 Do you maintain a record of the AI systems your organisation develops and uses?

a. Yes

b. No

If a, then continue to 1.2.

If b, then skip to next section.

AI system record: an inventory of documentation, assets and resources related to your AI systems. This may encompass, but is not limited to, content referenced throughout this self-assessment, including: technical documentation; impact and risk assessments; AI model analyses; and data records. In practice, an AI system record may take the form of a collection of files on your organisation's shared drive, information distributed across an enterprise management system, or resources curated on an AI governance platform.
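In practice, such a record can be as lightweight as a structured file. The sketch below is purely illustrative, assuming a hypothetical Python dataclass whose field names are not prescribed by this tool:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecordEntry:
    """One hypothetical entry in an AI system record inventory."""
    name: str
    owner: str                      # role responsible for the system
    developed_in_house: bool        # False for procured / third-party systems
    technical_docs: list = field(default_factory=list)   # paths or links
    risk_assessments: list = field(default_factory=list)
    data_records: list = field(default_factory=list)
    last_reviewed: str = ""         # e.g. ISO date of the last record review

# Example inventory with a single procured system
record = [
    AISystemRecordEntry(
        name="customer-support-chatbot",
        owner="Head of Digital",
        developed_in_house=False,
        technical_docs=["supplier-model-card.pdf"],
        last_reviewed="2024-11-06",
    )
]
print(len(record))  # 1
```

A review process (question 1.5) could then be as simple as checking each entry's `last_reviewed` date on a fixed schedule.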

1.2 What proportion of the AI systems that you develop and use are documented in your AI system record?

a. All

b. The majority

c. Some

1.3 Do you have an established process for adding new systems to your AI system record?

a. Yes

b. No

1.4 If you procure or access AI systems from third party providers, do you request and receive documentation, assets and resources for your AI system record from them?

a. Yes, always

b. Yes, sometimes

c. No

1.5 How frequently do you review and update your AI system record?

a. Twice a year or more

b. Once a year

c. Less than once a year

2. AI policy

We have a clear, accessible and suitable AI policy for our organisation.

2.1 Do you have an AI policy for your organisation?

a. Yes

b. No

If a, then continue to 2.2.

If b, then skip to next section.

AI policy: information that provides governance direction and support for AI systems according to your business requirements. Your AI policy may include but is not limited to: principles and rules that guide AI-related activity within your organisation; frameworks for setting AI-related objectives; and assignments of roles and responsibilities for AI management.

2.2 Is your AI policy available and accessible to all employees?

a. Yes

b. No

2.3 Does your AI policy help users evaluate whether the use of an AI system is appropriate for a given function or task?

a. Yes

b. No

2.4 Does your AI policy identify clear roles and responsibilities for AI management processes in your organisation?

a. Yes

b. No

2.5 How frequently do you review and update your AI policy?

a. Twice a year or more

b. Once a year

c. Less than once a year

3. Fairness

We seek to ensure that the AI systems we develop and use which directly impact individuals are fair.

3.1 Do you develop or use AI systems that directly impact individuals?

a. Yes

b. No

If a, then continue to 3.2.

If b, then skip to next section.

Direct impact: we encourage you to judge 'directness' of impact in the context of your own organisational activities. As a starting point, we suggest that the following categories of AI systems should be considered to have direct impact on individuals:

  1. AI systems that are used to make decisions about people (e.g. profiling algorithms);
  2. AI systems that process data with personal or protected characteristic attributes (e.g. forecasting or entity resolution algorithms that utilise demographic data or personal identifiers);
  3. AI systems where individuals impacted by the system output are also the end-users (e.g. chatbots, image generators).

3.2 Do you have clear definitions of fairness with respect to these AI systems?

a. Yes, for all

b. Yes, for some

c. Not for any

If a, then continue to 3.3.

If b, then continue to 3.3.

If c, then skip to next section.

Fairness: a broad principle embedded across many areas of law and regulation, including equality and human rights, data protection, consumer and competition law, public and common law, and rules protecting vulnerable people.

Section 7 focuses further on bias mitigation. We differentiate unfairness from bias, where bias is a statistical phenomenon that is characteristic of a process such as decision-making, and unfairness is an outcome of a biased process being implemented in the real world.

3.3 Do you have mechanisms for detecting or identifying unfair outcomes or processes with respect to these AI systems and your definitions of fairness?

a. Yes, for all

b. Yes, for some

c. Not for any

3.4 Do you have processes for monitoring fairness of AI systems over time and mitigating against unfairness?

a. Yes, for all

b. Yes, for some

c. Not for any
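One way to operationalise the monitoring in 3.3 and 3.4 is a periodic statistical check against your chosen definition of fairness. The sketch below computes the demographic parity difference, one common group-fairness metric; the metric choice, the binary decisions and the group labels are illustrative assumptions, not a definition of fairness endorsed by this tool:

```python
def demographic_parity_difference(outcomes, groups):
    """Absolute difference in positive-outcome rates between two groups.

    outcomes: list of 0/1 decisions; groups: parallel list of group labels.
    A large difference may indicate an unfair process worth investigating.
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "this sketch handles exactly two groups"
    rates = []
    for g in labels:
        decided = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates.append(sum(decided) / len(decided))
    return abs(rates[0] - rates[1])

# Hypothetical audit sample: group A approved 3/4, group B approved 1/4
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

Running such a check on each review cycle (question 3.5) and logging the result gives a simple, auditable monitoring trail.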

3.5 How frequently do you review your process(es) for detecting and mitigating unfairness?

a. Twice a year or more

b. Once a year

c. Less than once a year

Managing risks

4. Impact assessment

We have identified and documented the possible impacts of the AI systems our organisation develops and uses.

4.1 Where appropriate, do you have an impact assessment process for identifying how your AI systems might impact…

4.1.1 The physical or psychological wellbeing of individuals?

a. Yes

b. No

4.1.2 Universal human rights?

a. Yes

b. No

4.1.3 Societies and the environment?

a. Yes

b. No

If a to any of 4.1, continue to 4.2.

If b to all of 4.1, skip to next section.

AI impact assessment: a framework used to consider and identify the potential consequences of an AI system鈥檚 deployment, intended use and foreseeable misuse.

4.2 Do you document potential impacts of your AI systems?

a. Yes, for all

b. Yes, for some

c. Not for any

4.3 Do you communicate the potential impacts to the users or customers of your AI systems?

a. Yes, for all

b. Yes, for some

c. Not for any

5. Risk assessment

We effectively manage any risks caused by our AI systems.

5.1 Do you conduct risk assessments of the AI systems you develop and use?

a. Yes, for all

b. Yes, for some

c. Not for any

If a, then continue to 5.2.1.

If b, then continue to 5.2.1.

If c, then skip to 5.3.1.

AI risk assessment: a framework used to consider and identify a range of potential risks that might arise from the development and/or use of an AI system. These include bias, data protection and privacy risks, risks arising from the use of a technology (e.g. the use of a technology for misinformation or other malicious purposes) and reputational risk to the organisation.

5.2.1 Are your risk assessments designed to produce consistent, valid and comparable results?

a. Yes

b. No

5.2.2 Do you compare the results of your risk assessments to your organisation鈥檚 overall risk thresholds?

a. Yes

b. No

5.2.3 Do you use the results of your risk assessment to prioritise risk treatment?

a. Yes

b. No
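Questions 5.2.2 and 5.2.3 imply scoring risks, comparing them to an organisational threshold, and ranking them for treatment. A minimal sketch, assuming a simple likelihood-times-impact scoring model and hypothetical risk names:

```python
def prioritise_risks(risks, threshold):
    """Score each risk as likelihood x impact and return those exceeding
    the organisation's threshold, highest score first (illustrative only)."""
    scored = [(r["name"], r["likelihood"] * r["impact"]) for r in risks]
    above_threshold = [(name, score) for name, score in scored if score > threshold]
    return sorted(above_threshold, key=lambda pair: pair[1], reverse=True)

# Hypothetical register entries scored on 1-5 scales
risks = [
    {"name": "data-protection breach", "likelihood": 2, "impact": 5},
    {"name": "reputational harm", "likelihood": 3, "impact": 2},
    {"name": "minor output errors", "likelihood": 4, "impact": 1},
]
print(prioritise_risks(risks, threshold=5))
# [('data-protection breach', 10), ('reputational harm', 6)]
```

Using a fixed scoring rule like this is one way to keep results "consistent, valid and comparable" (5.2.1) across assessments.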

5.3.1 Do you monitor all your AI systems for general errors and failures?

a. Yes

b. No

5.3.2 Do you monitor all your AI systems to check that they are performing as expected?

a. Yes

b. No

5.4 Do you have processes for responding to or repairing system failures?

a. Yes, for all

b. Yes, for some

c. Not for any

If a, then continue to 5.5.

If b, then continue to 5.5.

If c, then skip to 5.6.

5.5 Have you defined risk thresholds or critical conditions under which it would become necessary to cease the development or use of your AI systems?

a. Yes, for all

b. Yes, for some

c. Not for any

5.6 Do you have a plan to introduce necessary updates to your risk assessment process as your AI systems evolve or critical issues are identified?

a. Yes

b. No

6. Data management

We responsibly manage the data used to train, fine-tune and otherwise develop our AI systems.

6.1 Do you train, fine-tune or otherwise develop your own AI systems using data?

a. Yes

b. No

If a, then continue to 6.2.

If b, then skip to 6.6.

6.2 Do you document details about the provenance and collection processes of data used to develop your AI systems?

a. Yes, for all

b. Yes, for some

c. Not for any

Data provenance: information about the creation, updates and transfer of control of data.

6.3 Do you ensure that the data used to develop your AI systems meet any data quality requirements defined by your organisation?

a. Yes, for all

b. Yes, for some

c. Not for any

Data quality: broadly, the suitability of data for a specific task, or the extent to which the characteristics of data satisfy needs for use under specific conditions. Further information can be found on the government data quality hub.

6.4 Do you ensure that datasets used to develop your AI systems are adequately complete and representative?

a. Yes, for all

b. Yes, for some

c. Not for any

Data completeness: the extent to which a dataset captures all the necessary elements for use under specific conditions. In practice, ensuring data completeness may involve replacing missing data with substituted values or removing data points that may compromise the accuracy or consistency of the AI system it is used to develop.

Data representativeness: the extent to which a data sample distribution corresponds to a target population. In practice, ensuring data representativeness may involve undertaking and responding to statistical data analysis that quantifies how closely your sample data reflects the characteristics of a larger group of subjects, or analysis of data sampling and collection techniques.
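As an illustration of such a statistical check, the hypothetical function below compares sample proportions for one categorical attribute against known population shares; a real analysis would typically use formal tests rather than a raw maximum gap:

```python
def representativeness_gap(sample_counts, population_shares):
    """Largest absolute gap between sample proportions and known
    population shares for one categorical attribute (illustrative)."""
    total = sum(sample_counts.values())
    gaps = [
        abs(count / total - population_shares[category])
        for category, count in sample_counts.items()
    ]
    return round(max(gaps), 6)

# Hypothetical figures: the sample is 80/20 where the target population is 50/50
sample = {"group_a": 80, "group_b": 20}
population = {"group_a": 0.5, "group_b": 0.5}
print(representativeness_gap(sample, population))  # 0.3
```

A gap this large would prompt a review of the sampling and collection techniques the definition above mentions.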

6.5 Do you document details about the data preparation activities undertaken to develop your AI systems?

a. Yes, for all

b. Yes, for some

c. Not for any

Data preparation: includes any processing or transformation performed on a dataset prior to training or development of an AI system. In practice, this may include, but is not limited to: any process used to ensure data quality, completeness and representativeness; converting or encoding dataset features; feature scaling or normalisation; or labelling target variables.
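Two of the preparation steps named above, feature scaling and categorical encoding, can be sketched as follows; the functions and example values are illustrative only:

```python
def min_max_scale(values):
    """Rescale a numeric feature to the [0, 1] range, one common
    normalisation step performed before training (illustrative)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def one_hot_encode(labels):
    """Convert a categorical feature into indicator columns,
    one column per category in sorted order."""
    categories = sorted(set(labels))
    return [[1 if label == c else 0 for c in categories] for label in labels]

print(min_max_scale([10, 20, 30]))      # [0.0, 0.5, 1.0]
print(one_hot_encode(["red", "blue"]))  # [[0, 1], [1, 0]]
```

Documenting which of these transformations were applied, and with what parameters, is the kind of record question 6.5 asks about.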

6.6 Do you sign and retain written contracts with third parties that process personal data on your behalf?

a. Yes, always

b. Yes, sometimes

c. No

7. Bias mitigation

We mitigate against foreseeable, harmful and unfair algorithmic and data biases in our AI systems.

7.1 Do you mitigate against foreseeable, harmful and unfair algorithmic and data biases in the AI systems you develop and use?

a. Yes, for all

b. Yes, for some

c. Not for any

Bias: the disproportionate weighting towards a particular subset of data subjects. Whilst bias is not always negative, it can cause a systematic skew in decision-making that results in unfair outcomes, perpetuating and amplifying negative impacts on certain groups.

7.2 If you procure AI as a Service (AIaaS) or pretrained AI systems from third party providers to use or develop upon, do you have records of the full extent of the data that has been used to train these systems?

a. Yes, for all

b. Yes, for some

c. Not for any

If a, then continue to 7.3.

If b, then continue to 7.3.

If c, then skip to 7.4.

AI as a Service: a service that outsources a degree of your AI system functionality to a third party. AIaaS offerings are often delivered as 'off-the-shelf' solutions with supporting infrastructure such as online platforms and APIs that allow for easy integration into existing business operations. Cloud-based AI software and applications provided by large tech companies are archetypal examples of AIaaS.

Pre-trained: refers to machine learning systems that have been initialised by training on a large, general dataset, and can be fine-tuned to accomplish specific downstream tasks.

7.3 If you procure AIaaS or pretrained AI systems from third party providers, do you conduct appropriate due diligence on the data used to train or develop these systems to mitigate against foreseeable harmful or unfair bias?

a. Yes, for all

b. Yes, for some

c. Not for any

Due diligence: this may include requesting and reviewing the results of bias audits conducted by the developer of the 'off-the-shelf' AI system to determine if there is unfair bias in the input data, and/or the outcome of decisions or classifications made by the system.

7.4 Do you have processes to ensure compliance with relevant bias mitigation measures stipulated by international or domestic regulation?

a. Yes

b. No

8. Data protection

We have a "data protection by design and default" approach throughout the development and use of our AI systems.

8.1 Do you implement appropriate security measures to protect the data used and/or generated by your AI systems?

a. Yes

b. No

Data protection security measures: see the UK GDPR for further information.

8.2 Do you record all your personal data breaches?

a. Yes

b. No

If a, then continue to 8.3.

If b, then skip to 8.4.

8.3 Do you report personal data breaches to affected data subjects when necessary?

a. Yes

b. No

8.4 Do you routinely complete Data Protection Impact Assessments (DPIAs) for uses of personal data that are likely to result in high risk to individuals鈥 interests?

a. Yes

b. No

Data Protection Impact Assessment: further information is available from the Information Commissioner's Office (ICO).

8.5 Have you ensured that all your AI systems and the data they use or generate is protected from interference by third parties?

a. Yes

b. No

Communication

9. Issue reporting

We have reporting mechanisms for employees, users and external third parties to report any failures or negative impacts of our AI systems.

9.1 Do you have reporting mechanisms for all employees, users and external third parties to report concerns or system failures?

a. Yes

b. No

If a, then continue to 9.2.

If b, then skip to 9.4.

9.2 Do you provide reporters with options for anonymity, confidentiality, or both?

a. Yes

b. No

Anonymity: in practice, providing anonymity requires excluding any personal data collection from the reporting procedure.

Confidentiality: in practice, providing confidentiality requires preventing anyone other than the intended recipient from connecting individual reports to a reporter.

9.3 Have you identified who in your organisation will be responsible for addressing concerns when they are escalated?

a. Yes

b. No

9.4 Are your reporting procedures meaningfully transparent for all employees, users and external third parties?

a. Yes

b. No

Transparency: refers to the communication of appropriate information about an AI system to relevant people, in a way that they understand. In practice, making reporting procedures transparent requires clearly informing reporters about: how they can expect their report to be processed; how their report is processed; when their report has finished being processed; and any outcomes to which the report can be directly attributed.

9.5 Do you respond to concerns in a timely manner?

a. Yes

b. No

Timely: timeliness is subjective and will depend on the nature of your organisation and concerns. As a rule of thumb, you could consider "timely" to mean no more than 72 hours. This is the amount of time in which you are required to report a data breach after becoming aware of it under the UK GDPR.

9.6 Do you document all reported concerns and results of any subsequent investigations?

a. Yes

b. No

10. Third party communication

We tell every interested party how to use our AI systems safely and what the systems' requirements are.

10.1 Have you determined what AI system technical documentation is required by interested parties across your relevant stakeholder categories (e.g. developers, AI system end-users, academic researchers, etc)?

a. Yes, for all

b. Yes, for some

c. Not for any

If a, then continue to 10.2.

If b, then continue to 10.2.

If c, then skip to 10.3.

Technical documentation: a written description of or guide to an AI system's functionality. For instance, technical documentation content may include: usage instructions; technical assumptions about its use and operation; system architecture; and technical limitations. Manuals, code repositories and model cards are examples of technical documentation.

10.2 Do you provide technical documentation to interested parties in an appropriate format?

a. Yes, for all

b. Yes, for some

c. Not for any

Appropriate format: broadly, this means that documentation is tailored to your interested parties鈥 needs and expected level of understanding.

10.3 Have you determined what AI system non-technical documentation is required by interested parties across your relevant stakeholder categories?

a. Yes, for all

b. Yes, for some

c. Not for any

If a, then continue to 10.4.

If b, then continue to 10.4.

If c, then stop.

Non-technical documentation: a written description or analysis of the benefits or issues associated with the use of an AI system outside of its operational processes. Impact assessments and risk assessments are examples of non-technical documentation.

10.4 Do you provide non-technical information to your users and other relevant parties?

a. Yes, for all

b. Yes, for some

c. Not for any

Annex A: Glossary

AI as a Service (AIaaS): a service that outsources a degree of your AI system functionality to a third party. AIaaS offerings are often delivered as 'off-the-shelf' solutions with supporting infrastructure such as online platforms and APIs that allow for easy integration into existing business operations. Cloud-based AI software and applications provided by large tech companies are archetypal examples of AIaaS.

AI impact assessment: a framework used to consider and identify the potential consequences of an AI system's deployment, intended use and foreseeable misuse.

AI management system: the set of governance elements and activities within an organisation that support decision making and the delivery of outcomes relating to the development and use of AI systems. This includes organisational policies, objectives and processes, among other things. More information on assuring AI governance practices can be found in DSIT's Introduction to AI Assurance.

AI policy: information that provides governance direction and support for AI systems according to your business requirements. Your AI policy may include, but is not limited to: principles and rules that guide AI-related activity within your organisation; frameworks for setting AI-related objectives; and assignments of roles and responsibilities for AI management.

AI risk assessment: a framework used to consider and identify a range of potential risks that might arise from the development and/or use of an AI system. These include bias, data protection and privacy risks, risks arising from the use of a technology (e.g. the use of a technology for misinformation or other malicious purposes) and reputational risk to the organisation.

AI systems: products, tools, applications or devices that utilise AI models to help solve problems. AI systems are the operational interfaces to AI models; they incorporate technical structures and processes that allow models to be used by non-technologists. More information on how AI systems relate to AI models and data can be found in DSIT's Introduction to AI Assurance.

AI system record: an inventory of documentation, assets and resources related to your AI systems. This may encompass, but is not limited to, content referenced throughout this self-assessment, including: technical documentation; impact and risk assessments; model analyses; and data records. In practice, an AI system record may take the form of a collection of files on your organisation's shared drive, information distributed across an enterprise management system, or resources curated on an AI governance platform.

Anonymity: in practice, providing anonymity requires excluding any personal data collection from the reporting procedure.

Bias: the disproportionate weighting towards a particular subset of data subjects. Whilst bias is not always negative, it can cause a systematic skew in decision-making that results in unfair outcomes, perpetuating and amplifying negative impacts on certain groups.

Confidentiality: in practice, providing confidentiality requires preventing anyone other than the intended recipient from connecting individual reports to a reporter.

Data completeness: the extent to which a dataset captures all the necessary elements for use under specific conditions. In practice, ensuring data completeness may involve replacing missing data with substituted values or removing data points that may compromise the accuracy or consistency of the AI system it is used to develop.

Data preparation: includes any processing or transformation performed on a dataset prior to training or development of an AI system. In practice, this may include, but is not limited to: any process used to ensure data quality, completeness and representativeness; converting or encoding dataset features; feature scaling or normalisation; or labelling target variables.

Data Protection Impact Assessment: further information is available from the Information Commissioner's Office (ICO).

Data protection security measures: see the UK GDPR for further information.

Data provenance: information about the creation, updates and transfer of control of data.

Data quality: broadly, the suitability of data for a specific task, or the extent to which the characteristics of data satisfy needs for use under specific conditions. Further information can be found on the government data quality hub.

Data representativeness: the extent to which a data sample distribution corresponds to a target population. In practice, ensuring data representativeness may involve undertaking and responding to statistical data analysis that quantifies how closely your sample data reflects the characteristics of a larger group of subjects, or analysis of data sampling and collection techniques.

Direct impact: we encourage you to judge 'directness' of impact in the context of your own organisational activities. As a starting point, we suggest that the following categories of AI systems should be considered to have direct impact on individuals:

  1. AI systems that are used to make decisions about people (e.g. profiling algorithms);
  2. AI systems that process data with personal or protected characteristic attributes (e.g. forecasting or entity resolution algorithms that utilise demographic data or personal identifiers);
  3. AI systems where individuals impacted by the system output are also the end-users (e.g. chatbots, image generators).

Fairness: a broad principle embedded across many areas of law and regulation, including equality and human rights, data protection, consumer and competition law, public and common law, and rules protecting vulnerable people. We differentiate unfairness from bias, where bias is a statistical phenomenon that is characteristic of a process such as decision-making, and unfairness is an outcome of a biased process being implemented in the real world.

Non-technical documentation: a written description or analysis of the benefits or issues associated with the use of an AI system outside of its operational processes. Impact and risk assessments are examples of non-technical documentation.

Pre-trained: refers to machine learning systems that have been initialised by training on a large, general dataset, and can be fine-tuned to accomplish specific downstream tasks.

Technical documentation: a written description of or guide to an AI system's functionality. For instance, technical documentation content may include: usage instructions; technical assumptions about its use and operation; system architecture; and technical limitations. Manuals, code repositories and model cards are examples of technical documentation.

Transparency: refers to the communication of appropriate information about an AI system to relevant people, in a way that they understand. In practice, making reporting procedures transparent requires clearly informing reporters about: how they can expect their report to be processed; how their report is processed; when their report has finished being processed; and any outcomes to which the report can be directly attributed.