Guidance

Algorithmic Transparency Recording Standard - guidance for public sector bodies

Updated 8 May 2025

1. Summary

This guidance explains what the Algorithmic Transparency Recording Standard (ATRS) is, why it matters and how public sector organisations should use it. It includes section-by-section guidance for completing the ATRS template.

2. ATRS purpose and scope

What is the ATRS and why does it matter?

The Algorithmic Transparency Recording Standard (ATRS) enables public sector organisations to publish information about the algorithmic tools they are using and why they are using them.

It consists of a template for organisations to fill in with key information about their algorithmic tools. This information is then published on the GOV.UK repository in the form of an ATRS record.

By using the ATRS, public sector organisations can:

  • Drive public understanding and trust in their uses of algorithmic tools, including the boundaries of their use and their role in broader processes;
  • Enable senior responsible owners to take meaningful accountability for algorithmic tools and their outputs;
  • Share good practice and innovative use cases, and learn from peers;
  • Reduce administrative burden by proactively publishing information which may otherwise be raised through Freedom of Information (FOI) requests, parliamentary questions or similar;
  • Provide clarity to third party suppliers around the transparency requirements of the public sector.

The ATRS is a core part of the government's Blueprint for Modern Digital Government, in particular the promise to 'Commit to transparency, drive accountability'.

What is an algorithmic tool?

An algorithmic tool is a product, application, or device that supports or solves a specific problem using complex algorithms.

We use 'algorithmic tool' as an intentionally broad term that covers different applications of artificial intelligence (AI), statistical modelling and complex algorithms. An algorithmic tool will often incorporate a number of different component models integrated as part of a broader digital tool.

How do I know if I should complete an ATRS record?

The ATRS is mandatory for certain organisations, and certain algorithmic tools within those organisations.

It is mandatory for all government departments, and for arm's length bodies (ALBs) which deliver public or frontline services, or directly interact with the general public.

Within those organisations, the ATRS is mandatory for algorithmic tools which have a significant influence on a decision-making process with public effect, or directly interact with the general public.

This scope is designed to emphasise context, focusing on situations where a tool is influencing specific operational decisions about individuals, organisations or groups, not where a tool is an analytical model supporting broad government policymaking. Further detail, including examples of algorithmic tools in and out of mandatory scope, can be found in the scope and exemptions policy.

If your organisation is within the mandatory scope of the ATRS policy, it should have a single point of contact (SPOC) whose role is to coordinate with the ATRS team on identifying in-scope algorithmic tools, drafting and publishing records. You can email the ATRS team on algorithmic-transparency@dsit.gov.uk if you are unsure who your SPOC is.

However, the ATRS is recommended by the Data Standards Authority for use across the entire public sector and we have welcomed ATRS records from local government, police forces and other broader public sector organisations. If you are from such an organisation, you can complete an ATRS template and email it to algorithmic-transparency@dsit.gov.uk directly.

3. Preparing to use the ATRS

Assigning a lead

We recommend assigning a lead at your organisation to collate the relevant information from internal teams (and third-party providers, if applicable), to oversee the drafting and completion of the record, and to manage contact with the ATRS team.

As outlined above, if your organisation falls within the mandatory scope of the ATRS, a SPOC will have been assigned. You should contact your SPOC before beginning work on an ATRS record. Email us on algorithmic-transparency@dsit.gov.uk if you are unsure who your SPOC is.

Approaching suppliers

If your supplier holds information that you need to complete a record, we encourage you to ask your commercial contact for the relevant details, explaining why you are asking for this information and why algorithmic transparency is important in the public sector. If your organisation and the tool are within the mandatory scope of the ATRS policy, you should highlight this. If the supplier is reluctant to share some information with you because of concerns about potentially revealing intellectual property, it can help to walk the supplier through the questions asked in the template, explaining how they are designed to provide only a high-level picture of the tool.

Understanding what information should and should not be published

The ATRS has been designed to minimise possible security or intellectual property risks that could arise from publication.

The scope and exemptions policy, modelled on the FOI Act, provides a detailed framework for exempting information from individual ATRS records, or entire ATRS records from publication. In general, publishing an ATRS record and redacting certain fields with a brief explanation of why this has been done is preferable to not publishing an ATRS record at all, particularly when partial information about the algorithmic tool is already in the public domain.

Considerations for limiting the information in certain fields include:

  • Operational effectiveness and gaming. For example, providing information which might enable an individual to modify their behaviour to avoid triggering a warning during an application process. Such issues can usually be managed by being careful about the level of detail provided in the ATRS record, especially around the technical design or data used. Wider information, for example on how the algorithmic tool is used in the overall decision-making process, may still be relevant and safe to release.
  • Cybersecurity risks. For example, providing system architecture to a level of detail which might increase the risk of a cyberattack. Such issues can usually be managed by consulting the appropriate individuals and teams (both within your organisation, and from any relevant third-party suppliers) during the drafting of the ATRS record. Broadly speaking, obscurity is a weak cybersecurity defence, and if a tool is deployed in a way where the information defined in the ATRS presents a cybersecurity risk then it is highly likely that there are vulnerabilities that need addressing regardless of levels of transparency.
  • Intellectual property risks. Suppliers may raise concerns that providing information for an ATRS record infringes on their intellectual property. We have designed and tested the ATRS to only require information at a general level that should not present such risks to intellectual property. However, if you or your supplier are concerned, it may be worth checking relevant legal or commercial agreements and involving appropriate specialists.

4. Completing the ATRS template

Downloading the template

The ATRS template is available in two formats: an Excel version and a Google Sheets version for browser. Both can be downloaded here. Please do not alter the format of the template as this may affect our ability to process and publish your ATRS record.

The ATRS template is divided into 2 tiers.

  • Tier 1 is aimed at the general public, and as such should be clear and simple in language.
  • Tier 2, whilst still accessible to the general public, is aimed at specialist audiences such as civil society, journalists, academic researchers and other public sector organisations wishing to learn from their peers.

  • Tier 1 comprises a single sheet within the Excel workbook/Google Sheets document, called Summary Information.
  • Tier 2 is split across eight further sheets.

Principles for completing the template

The ATRS aims to deliver meaningful transparency around public sector uses of algorithmic tools. This means not just acknowledging the existence of such tools, but providing an intelligible explanation of how and why they are being used. You should aim to complete the ATRS template in full sentences, in clear and simple language. You may consider sharing the draft record with teams who are not connected to the algorithmic tool to check for understandability.

For examples of existing ATRS records which may help you complete the template, consult the repository. Fictional examples are also included in the guidance below.

5. Summary Information (Tier 1)

Tier 1 asks for basic, high-level information about the algorithmic tool aimed at a general audience without technical knowledge. All fields should be completed.

What name should I give my algorithmic tool?

The tool name will also appear in the title of your ATRS record, and will help people navigate the ATRS repository. It should be clear, concise and consistent throughout.

How much detail should I provide in the description?

Your description should be brief and clear, focusing on what the tool is and why it is being used (rather than technical detail of how it works, which comes later in the record). Remember that the ATRS aims to show the public when and why algorithmic tools are being used in processes that affect them. Ideally the description should be no more than two or three sentences.

What website and email address should I provide?

Not all algorithmic tools will have a relevant website. If providing one, please ensure it is live and publicly accessible; otherwise, enter 'N/A'. The email address you provide should be that of the team responsible for the tool, not an individual, for business continuity and security purposes: if an individual leaves the organisation but the wider team remains, the address will still be up to date.

Fictional example:

1.2 - Description

LeaseSure AI uses machine learning to analyse council housing rent accounts and create a prioritised caseload of rent arrears for housing officers. The tool is designed to work alongside existing housing management systems within the council to help improve arrears management.

6. Owner and responsibility (Tier 2)

This section focuses on accountability for the development and deployment of the tool. All fields should be completed, with 'N/A' where necessary.

I'm not sure who the senior responsible owner (SRO) is for my algorithmic tool. What should I put in this field?

The SRO should be a role title, not a named individual, for business continuity and security purposes. It should be the role which is ultimately accountable for the tool in an operational context. This may be the policy or service owner, for example.

What counts as a third party?

Third parties include commercial suppliers and other public sector organisations who may, for example, have developed an in-house algorithmic tool which they are sharing with your organisation.

Many external suppliers have been involved in the delivery of the tool through a multi-layered supply chain. What information should I provide about this in the various third parties fields?

A procured tool can involve multiple companies at different places in the supply chain. For instance, a public body could procure a tool from a company, which in turn procured the model and data from another company before integrating the model into a customisable tool.

Ideally, you should describe those different supplier relationships as clearly and concisely as possible, detailing which organisation was or is responsible for which part of the final tool that you are deploying.

Fictional example:

2.1.4. Third party involvement

Yes

2.1.4.1. Third party

Sulentra Dynamics Ltd.

2.1.4.2. Companies House Number

813004659779

2.1.4.3. Third party role

Sulentra Dynamics Ltd. has provided LeaseSure AI for a six-month pilot.

2.1.4.4. Procurement procedure type

Proof-of-concept pilot (a formal procurement process will follow if the tool demonstrates measurable benefit after the trial period).

2.1.4.5. Third party data access terms

Sulentra Dynamics Ltd. has been provided with controlled, read-only access to rent accounts data in the council's MundioTenancy platform, but it does not integrate with other systems. This has been done in compliance with data protection legislation and all Sulentra Dynamics Ltd. staff with access to the data have been subject to appropriate vetting checks. Access to the data is only granted for the limited period of time while the tool is developed.

7. Description and Rationale (Tier 2)

This section expands on the high-level description given in Tier 1, with more granular detail about the algorithmic tool, its scope and the justification for its use.

How should the detailed description be different to the high-level description in Tier 1?

In contrast to the basic description of the tool in Tier 1, which focuses on what the tool is and why it is being used, the Tier 2 detailed description here aims to explain how the algorithmic tool works. As such, you should describe the tool's purpose, its intended users, key aspects and functions at a more granular level. You should also include the tool's scope, as well as limitations or context where it does not apply.

Whilst the amount of information provided here will vary between algorithmic tools, we typically expect a paragraph or two of text.

Fictional example

2.2.1. Detailed description

LeaseSure AI monitors tenant payment patterns and predicts financial distress using models like Logistic Regression and Random Forest Classifier on historical rent data.

It utilises the Logistic Regression (LR) model first to analyse changes in rent payment patterns (e.g. type and date of payment) and predicts the probability of falling into arrears. The LR model produces a list of 'at risk' accounts, which is then analysed further by the Random Forest Classifier (RFC) model. Based on features such as payment trends and arrears duration (e.g. '30-Day Arrear', '60-Day Arrear', '90-Day Arrear', etc.), it classifies accounts into 'Low', 'Medium', and 'High' risk. The output of the tool is a weekly prioritised caseload that is integrated into the council's existing MundioTenancy platform for housing officers to review and action.
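To make this kind of two-stage design more concrete, the sketch below shows, in Python, how a logistic regression screen followed by a random forest classification could be combined using scikit-learn and pandas. It is an illustrative sketch only: the column names, the 0.5 threshold and the build_weekly_caseload function are hypothetical assumptions, not part of the fictional LeaseSure AI tool or of the ATRS template.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Hypothetical feature columns describing each rent account.
FEATURES = ["days_since_last_payment", "missed_payments_12m", "arrears_duration_days"]

def build_weekly_caseload(historical: pd.DataFrame, current: pd.DataFrame) -> pd.DataFrame:
    # Stage 1: a logistic regression trained on historical accounts estimates each
    # current account's probability of falling into arrears.
    screen = LogisticRegression(max_iter=1000)
    screen.fit(historical[FEATURES], historical["fell_into_arrears"])
    current = current.copy()
    current["arrears_probability"] = screen.predict_proba(current[FEATURES])[:, 1]

    # Only accounts above a hypothetical risk threshold go to the second stage.
    at_risk = current[current["arrears_probability"] > 0.5].copy()

    # Stage 2: a random forest classifier assigns a 'Low', 'Medium' or 'High' risk band.
    classifier = RandomForestClassifier(n_estimators=100, random_state=0)
    classifier.fit(historical[FEATURES], historical["risk_band"])
    at_risk["risk_band"] = classifier.predict(at_risk[FEATURES])

    # The result is a prioritised caseload for housing officers to review.
    return at_risk.sort_values("arrears_probability", ascending=False)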

How much detail should I provide in the benefits field?

You may choose to provide a list of individual benefits or a few sentences of description. Where possible, try to explain how and why the tool should deliver the benefit. For example, rather than just stating 'improved customer experience', explain how and why the tool should achieve this.

What is expected in the previous process field?

Your algorithmic tool may have replaced a legacy tool or a manual process. In either case, you should provide a brief description of what it replaced. If your algorithmic tool is part of a brand-new process (for example, delivering a programme which did not previously exist), you should make this clear.

What if we did not consider any alternatives?

Briefly explain why no alternatives were considered. For example, you may be using an algorithmic tool provided by a central government department for others to use.

8. Deployment Context (Tier 2)

This section should help people understand how the algorithmic tool ultimately helps to deliver an operational process or service, and how humans are involved in this delivery.

How much detail should I provide about the broader operational process?

We typically expect a paragraph or two of text. It can be helpful to frame the answer around the output that the tool produces and how this is then used, for example to determine the outcome of an application process, or to deliver a public service. You should aim to make clear the degree of automation that the tool delivers within the broader process.

Fictional example:

2.3.1. Integration into broader operational process

LeaseSure AI does not automate decisions. Instead, it provides a recommended list of priority cases, which are categorised as 'Low', 'Medium' or 'High', based on their risk of falling into payment arrears. Each case on the list includes reasons, such as 'Escalating 60-Day Arrears', 'Payment Arrangement Broken', or 'Benefit Reduction Detected', that then help housing officers interpret the tool's outputs effectively. Housing officers review the list weekly and record any actions taken (e.g. sending notifications of payment arrears to tenants) directly in the council's tenancy management platform, MundioTenancy.

What information is relevant for the appeals and review field?

You should consider both the outputs of the algorithmic tool itself and whether they can be challenged or appealed, and the outputs of the broader operational process and whether they can be challenged or appealed. This may involve providing a link to a public appeal or contact form.

If no appeals or review process is necessary or relevant for your tool, include a short sentence explaining why you are not completing this section.

You should also be aware of Article 22 of the UK GDPR, which states that 'The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her'. If your algorithmic tool falls within the scope of these provisions, you must complete this field. Further information can be found in the ICO's guidance on automated decision-making and profiling.

9. Tool Specification (Tier 2)

This section should detail the technical specifications of the algorithmic tool. As outlined above, the level of detail here should not infringe on a supplier's intellectual property rights, or generate cybersecurity risks.

What is meant by system architecture?

You should broadly describe how your tool is organised and provide context for the rest of the technical section. System representations such as architecture diagrams (for example, AWS architecture diagrams) are ideal for conveying this type of information in a concise way: they capture the primary components of your technology stack and how they interact. You should think about the end-to-end process by which your tool processes inputs, the digital services and resources that it uses and the environments in which system processes occur. Any models that you describe later in the record should be mentioned in this field.

You can see a helpful example of the diagram of system architecture provided by the Department for Health and Social Care in their algorithmic transparency report for the QCovid tool here.

What is meant by system-level input and output?

For tools that consist of multiple machine learning models, this will be the primary input into or output from the system as a whole. For tools that consist of only one machine learning model, the system-level input and output, and the model input and output, may be the same. These fields should include the expected formats and data types.

Fictional example:

2.4.1.2. System-level input

Both historical and near real-time structured, tabular tenancy-related and financial data such as rent transaction history, payment type and date history, payment due dates, broken promise amounts, rent status, etc.

2.4.1.3. System-level output

The tool's output is a prioritised caseload of accounts that are ranked by risk of non-payment, i.e. 'High', 'Medium' and 'Low'. The output list is delivered to, and integrated with, the council's MundioTenancy platform in the form of an interactive caseload, with the option to export it in CSV or Excel file formats for reporting and audit purposes.
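When describing expected formats and data types, it can help to pin them down as an explicit schema. The sketch below shows one hypothetical way of doing this in Python with pandas; the column names, dtypes and check_columns helper are illustrative assumptions, not fields required by the ATRS.

import pandas as pd

# Hypothetical system-level input schema: structured, tabular rent account data.
INPUT_SCHEMA = {
    "account_id": "string",               # unique rent account reference
    "transaction_date": "datetime64[ns]", # date of each rent transaction
    "payment_type": "category",           # e.g. direct debit, standing order, cash
    "amount_due": "float64",
    "amount_paid": "float64",
    "arrears_balance": "float64",
}

# Hypothetical system-level output schema: the prioritised weekly caseload.
OUTPUT_SCHEMA = {
    "account_id": "string",
    "risk_band": "category",              # 'Low', 'Medium' or 'High'
    "arrears_probability": "float64",
    "reason_code": "string",              # e.g. 'Escalating 60-Day Arrears'
}

def check_columns(df: pd.DataFrame, schema: dict) -> None:
    # Confirm that a frame contains every column named in the expected schema.
    missing = set(schema) - set(df.columns)
    if missing:
        raise ValueError(f"Missing expected columns: {sorted(missing)}")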

10. Model Specification

This section should detail the model or models used within the algorithmic tool. Should your tool consist of more than one model, please duplicate this sheet and complete a separate sheet for each individual model.

N.B. For off-the-shelf models that your organisation has not trained, validated, tested, or applied any refinement techniques to (e.g. Web UI-based LLMs), please leave both the Model Specification (2.4.2) and the following Development Data Specification (2.4.3) sections blank and move straight to the Operational Data Specification (2.4.4) section instead.

What level of detail should I provide in this section?

As a minimum, the fields in this section should include the type of model. If using a pre-trained model, please also specify the name of the API provider, where applicable, or mention if it is 'self-hosted'.

Fictional Example for one of the models in the tool

2.4.2.1. Model name

Using Logistic Regression from the scikit-learn library in Python, which has pre-defined parameters.

How is model architecture different to system architecture?

System architecture refers to how the model is integrated into the broader technical architecture, while model architecture describes the internal structure of the model, i.e. how it works or how it transforms an input into an output. At a minimum, you should enter the type of model used (e.g. Logistic Regression, Decision Tree, Random Forest Classifier, Convolutional Neural Network, Rule-Based System, etc.). If the model has been designed such that certain features or inputs are given priority over others, and where this has a significant bearing on the model's output, indicate what those features are. For rule-based systems, describe how the rules are structured and indicate if any rules are weighted or prioritised over others. For security, do not include details of the network architecture to which the tool is connected. If it aids understanding of the model, you are also encouraged to provide further details or a link to publicly available resources that offer further information.

Fictional example 1:

2.4.2.6. Model architecture

Using Logistic Regression from the scikit-learn library in Python, which has pre-defined parameters.

Fictional example 2:

2.4.2.6. Model architecture

NV-Administration is an optimisation-based automated planning model. The model consists of:

  1. A set of rules that dictate how a fixed number of desk spaces are distributed across an office based on relevant variables.
  2. An ordered set of objectives that specify the goal conditions for allocation. These include:
  • Maximisation of preferred desk choices
  • Minimisation of desk assignment per team
  • Maximisation of desk assignment per directorate and group

What kind of metrics am I expected to detail in the model performance field?

Performance metrics will differ based on what type of method and tool you are developing or deploying. Useful metrics to consider may include accuracy metrics such as precision, recall or F1 scores, metrics related to privacy, and metrics related to computational efficiency.

You should also describe any bias and fairness evaluation you have undertaken (i.e. model performance over subgroups within the dataset), and any measures taken to address issues you identify.

For more information about setting performance metrics, you may find this GOV.UK Service Manual guidance helpful.
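As an illustration of the kind of evidence that could sit behind the model performance field, the sketch below computes precision, recall and F1 for the whole test set and for each subgroup. It assumes a hypothetical labelled test set with 'actual_arrears', 'predicted_arrears' and 'age_band' columns; none of these names come from the ATRS itself.

import pandas as pd
from sklearn.metrics import f1_score, precision_score, recall_score

def evaluate(test_set: pd.DataFrame) -> pd.DataFrame:
    # Report precision, recall and F1 overall and per subgroup, as a simple
    # starting point for a bias and fairness check.
    rows = []
    for name, group in [("all", test_set)] + list(test_set.groupby("age_band")):
        y_true, y_pred = group["actual_arrears"], group["predicted_arrears"]
        rows.append({
            "group": name,
            "n": len(group),
            "precision": precision_score(y_true, y_pred, zero_division=0),
            "recall": recall_score(y_true, y_pred, zero_division=0),
            "f1": f1_score(y_true, y_pred, zero_division=0),
        })
    return pd.DataFrame(rows)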

How can I find more information about how to identify and mitigate bias in the data, model and output of my algorithmic tool?

Useful resources include the government's AI Playbook and the review into bias in algorithmic decision-making published by the Responsible Technology Adoption Unit (RTAU, formerly the Centre for Data Ethics and Innovation), especially chapters 1 and 2. For more information about how to mitigate bias in algorithmic decision-making, you may find it helpful to review the RTAU's repository of bias mitigation techniques, which can be found here.

11. Development Data Specification

This section aims to expand on '2.4.2.8 Datasets and their purposes' in the Model Specification section. It focuses on the data used to train, validate or test your model(s).

What if I used an off-the-shelf model?

Provided you have not trained, validated, tested or applied any refinement techniques to an off-the-shelf model (e.g. Web UI-based LLMs), you should leave the Development Data Specification section blank and move straight to the (Operational Data Specification) section.

What level of detail should I provide for the development data description?

The aim of this field is to describe all of the datasets used for developing the tool as a whole. Where possible, please provide publicly accessible links to these datasets. (This differs from the datasets and their purposes field in the Model Specification, which simply asks for a list and specification of what each dataset was used for).

Why do you include a data quantities field?

The purpose of the 'data quantities' field is to sense-check proportionality of data in relation to the model task and complexity. Where a learning algorithm is applied to data, small datasets with few samples are more likely to yield underfitting models, while large datasets with numerous attributes may cause overfitting. In addition, too few samples may indicate insufficient representation of a target population, and too many attributes may indicate increased data security risks (such as re-identification).

What are sensitive attributes?

While we don't prescribe a specific definition of 'sensitive', we encourage you to consider:

  • Personal data attributes: 'any information relating to an identified or identifiable natural person', as defined in the UK GDPR.
  • Protected characteristics: any characteristics it is illegal to discriminate against, as defined in the Equality Act 2010.
  • Proxy variables: any information that may be closely correlated with unobserved personal attributes or protected characteristics. For example, the frequency of certain words in an application may correlate with gender, or birthplace may correlate with race.
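One simple, illustrative way to screen for potential proxies is to check how strongly each candidate feature correlates with a protected characteristic. The Python sketch below is a hypothetical starting point only: the proxy_screen function and column names are assumptions, and a high correlation is a prompt for closer review rather than proof that a feature is a proxy.

import pandas as pd

def proxy_screen(df: pd.DataFrame, features: list[str], protected: str) -> pd.Series:
    # One-hot encode the protected characteristic, then record the strongest
    # absolute correlation each feature has with any of its categories.
    encoded = pd.get_dummies(df[protected], prefix=protected).astype(float)
    strongest = {
        feature: encoded.corrwith(df[feature]).abs().max()
        for feature in features
    }
    # Features at the top of this ranking warrant closer review as potential proxies.
    return pd.Series(strongest).sort_values(ascending=False)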

In certain cases, it might not be feasible to disclose all the sensitive attributes in the data. At a minimum, you should disclose the fact that you are processing sensitive data and add as much detail as appropriate.

I'm concerned that sharing information about the variables and potential proxies could lead to individuals being made identifiable. What should I do?

It is unlikely that the ATRS record will lead to individuals being made identifiable as you are only being asked to provide a general description of the types of variables being used. If you are considering making the dataset you are using openly accessible and linking to it, you should comply with the relevant data protection legislation to prevent individuals from being made identifiable from the dataset.

This should also be considered as part of a Data Protection Impact Assessment (DPIA). For further guidance on completing DPIAs, please refer to the ICO's guidance on DPIAs.

What other resources are available to support me with completing this section?

You may also find it helpful to consult the .

12. Operational Data Specification

This section focuses on the data used or produced in your algorithmic tool's real-world operation, such as user inputs, retrieved documents, system-generated logs and other data generated during use.

What are sensitive attributes?

See the guidance on sensitive attributes in the Development Data Specification section above.

13. Risks, Mitigations and Impact Assessments

This section should provide information on impact assessments conducted, identified risks, and mitigation efforts.

Do I need to provide a summary of each impact assessment if I link to the full assessment?

No, there is no need to provide a summary if you are providing an openly accessible link to the full assessment.

What do you mean by risks and what are the most common risks you would expect to see described here?

Categories of risk likely to be relevant include:

  • Risks relating to the data
  • Risks relating to the application and use of the tool
  • Risks relating to the algorithm, model or tool efficacy
  • Risks relating to the outputs and decisions
  • Organisational and corporate risks
  • Risks relating to public engagement

This list is not exhaustive and there may be additional categories of risk that are helpful to include.

What other resources are available to support me with completing this section?

The Government Finance Function's Orange Book provides further guidance on conducting risk assessments for public sector projects. You may also find it helpful to consult the .

14. Publishing your ATRS record

Review and feedback

Email your completed ATRS template to algorithmic-transparency@dsit.gov.uk (or send to your SPOC, if your organisation has one). The ATRS team will check for readability and provide feedback or suggested amendments if necessary.

Public scrutiny and communications

Before finalising your record, you should consider the possibility that publishing information on your algorithmic tool may invite attention and scrutiny from the public and media. This is particularly true for more high-profile use cases and where the use of an algorithmic tool has not been publicly disclosed before.

You can help to mitigate these risks by ensuring you provide clear information and an appropriate level of detail in your record. You should also ensure that your organisation's communications team or press office is aware of the plan to publish, has reviewed the record and has prepared to respond to media requests if deemed necessary. If your organisation has a SPOC, you should ask them to coordinate this.

You may wish to consider publishing supplementary information, for example a blog post explaining what the algorithmic tool is, or a link to the ATRS record on the relevant service or policy pages on your website.

Obtaining clearance

The ATRS team requires written confirmation that your ATRS record has gone through all appropriate internal signoff procedures before publishing it to the GOV.UK repository. At a minimum, this should include clearance by:

  • The team responsible for deploying/operating the algorithmic tool
  • The SRO for the tool (ideally the SRO listed in the Owner section)
  • The communications/press team

In certain high-profile instances it may be appropriate to seek ministerial clearance.

Updating your ATRS record

Should substantive details change in relation to your algorithmic tool, you should update the ATRS template, go through internal clearance again, and send the updated template to algorithmic-transparency@dsit.gov.uk asking for your record to be updated accordingly. Substantive changes might include a pilot tool moving to production, new datasets being used to train or refine the tool, or a change to the broader operational process of which the tool is part.

Should you decommission an algorithmic tool for which you have published an ATRS record, contact the team on algorithmic-transparency@dsit.gov.uk.