AI is advancing fast and is being adopted in numerous everyday contexts, such as recruitment, fraud detection and employee monitoring. While the UK lacks any specific AI-focused regulation, the use of personal data in AI systems is subject to existing rules under the UK General Data Protection Regulation (UK GDPR).

The UK Information Commissioner’s Office (ICO) released guidance last week on using personal data in AI. While seemingly aimed at other regulators, the guidance will be useful to any organisation that uses AI systems and wants to avoid breaching data protection law.

The document is a helpful primer on using AI in a compliant way and addresses common AI issues such as bias, privacy and rights over automated decision-making.


8 Tips for Handling AI and Personal Data

The ICO’s guidance begins with eight tips for handling AI and personal data in a way that is compatible with the UK GDPR.

Part of using AI in a compliant way is obeying the UK GDPR’s principles of data processing.

These principles include “data minimisation”, which means only collecting the minimum personal data required for a specific purpose, and “lawfulness, fairness and transparency”, which means providing information about your use of personal data and using personal data in a lawful way that people would reasonably expect.

The GDPR also provides rules about “automated decision-making with legal or similarly significant effect”.

This means that certain important decisions about people, in areas such as credit or employment, must not be made using AI alone; you must involve a human in the decision-making process.

Below is a summary of the eight tips provided by the ICO to help you comply with these and other rules. The ICO’s document goes into more detail about each of these tips.

  1. Take a risk-based approach when developing and deploying AI 

Always assess whether you need to use a high-risk technology such as AI. Carry out thorough risk checks and take appropriate mitigation steps, such as data protection impact assessments (DPIAs) and consultations with groups that may be affected.

  2. Think carefully about how you can explain the decisions made by your AI system to individuals affected

Be open and honest about how and why people’s data will be used. Consider how to justify and present your explanations in plain language, and assess the potential impacts of decisions made by your AI system.
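As a loose illustration of pairing a decision with a plain-language explanation, the sketch below uses a toy linear scoring model. The field names, weights and threshold are all invented for this example; a real system would need a far richer explanation.

```python
# Hypothetical weights and threshold for a toy shortlisting model.
WEIGHTS = {"years_experience": 0.5, "interview_score": 1.2, "referral": 0.8}
THRESHOLD = 6.0

def explain(applicant: dict) -> str:
    """Return the decision together with a plain-language explanation."""
    # Contribution of each factor to the overall score.
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    top_factor = max(contributions, key=contributions.get)
    outcome = "shortlisted" if score >= THRESHOLD else "not shortlisted"
    return (f"You were {outcome} (score {score:.1f}, threshold {THRESHOLD}). "
            f"The factor that counted most in this decision was '{top_factor}'.")

print(explain({"years_experience": 6, "interview_score": 3.5, "referral": 1}))
```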

  3. Collect only the data you need to develop your AI system and no more

Ensure all data in your AI system comply with GDPR principles such as accuracy and data minimisation, and think carefully about which privacy-supporting technologies best suit your needs.
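By way of illustration, one widely used privacy-supporting technique is pseudonymisation: replacing direct identifiers with keyed tokens before data enters the AI pipeline. The sketch below assumes a hypothetical record layout and key-management arrangement; note that pseudonymised data still counts as personal data under the UK GDPR.

```python
import hashlib
import hmac

# Hypothetical secret key: in practice, keep this in a secrets manager,
# never in source code.
PSEUDONYMISATION_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed token.

    A keyed HMAC (rather than a plain hash) means the mapping cannot be
    rebuilt by anyone who does not hold the key.
    """
    return hmac.new(PSEUDONYMISATION_KEY,
                    identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical record: the identifier is tokenised before the record
# enters the AI pipeline; the fields the model needs are kept as-is.
record = {"email": "jane@example.com", "age_band": "35-44", "outcome": "hired"}
record["email"] = pseudonymise(record["email"])
print(record)
```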

  4. Address risks of bias and discrimination at an early stage

To mitigate bias, fully assess the accuracy, reliability and relevance of your data and make sure they are up to date. Establish the consequences of decisions made by the AI system, and assess whether the remaining risk is acceptable.
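One simple early check for discriminatory outcomes is to compare selection rates across groups. The sketch below applies the “four-fifths” rule of thumb, a common screening heuristic rather than an ICO requirement; the groups and decisions are invented for illustration.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group, from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical outcomes from an AI screening system (1 = approved).
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

rates = selection_rates(decisions)
print(rates)  # {'A': 0.75, 'B': 0.25}

# "Four-fifths" rule of thumb: flag any group whose selection rate is
# below 80% of the highest group's rate for closer review.
highest = max(rates.values())
print([g for g, r in rates.items() if r < 0.8 * highest])  # ['B']
```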

  5. Take time and dedicate resources to preparing the data appropriately

To optimise the quality of your AI system, carry out due diligence when labelling data and have clear lines of accountability in place.

  6. Ensure that your AI system is secure

Establish the level of security you need to mitigate risk through inventory assessments, model debugging and regular security audits.

  7. Ensure that any human review of decisions made by AI is meaningful

Human reviewers of AI decisions should be properly trained and able to override automated decisions where necessary.
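A minimal sketch of what meaningful review might look like in code, assuming a hypothetical decision-logging structure: the reviewer can substitute their own outcome and records a rationale, so overrides are both possible and auditable.

```python
from dataclasses import dataclass

@dataclass
class ReviewedDecision:
    subject_id: str
    ai_recommendation: str  # what the model suggested
    final_outcome: str      # what was actually decided
    reviewer: str           # who signed off
    rationale: str          # the reviewer's reasoning, in their own words

def record_review(subject_id, ai_recommendation, reviewer,
                  overridden_outcome=None, rationale=""):
    """Log a human review. Review is only meaningful if the reviewer
    can, and sometimes does, depart from the AI's recommendation."""
    final = overridden_outcome or ai_recommendation
    return ReviewedDecision(subject_id, ai_recommendation, final,
                            reviewer, rationale)

# The reviewer disagrees with the model, overrides it and logs why.
print(record_review("applicant-42", "reject", reviewer="j.smith",
                    overridden_outcome="accept",
                    rationale="Career gap was explained at interview."))
```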

  8. Work with external suppliers to ensure your use of AI will be appropriate

When using a third-party AI system, you are likely to be responsible for ensuring it complies with the law.

AI and personal information: FAQs

The second part of the ICO’s document comprises a list of frequently asked questions. The ICO says it will add to these over time. Here’s an outline of some of the questions and answers.

  1. If we plan to use AI, do we have to carry out a data protection impact assessment (DPIA)?

Most AI use cases involve the processing of sensitive data. While assessments are made on a case-by-case basis, it is likely that you will have to carry out a DPIA when using AI.

  2. Do the outputs of an AI system have to comply with the accuracy principle under data protection law?

The accuracy principle applies to all personal data, whether it is information put into an AI system or an output of the system. 

AI systems do not have to be 100% accurate to comply with this principle, but users should factor in the possibility of inaccuracy and the impact this may have on decisions made by the AI.
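One common way to factor in the possibility of inaccuracy is to route low-confidence outputs to a human instead of acting on them automatically. A minimal sketch follows, with a hypothetical threshold that would need tuning to the impact of errors in your context.

```python
# Hypothetical threshold; in practice, tune it to the severity of the
# consequences of a wrong decision.
CONFIDENCE_THRESHOLD = 0.9

def route_prediction(label: str, confidence: float) -> tuple[str, str]:
    """Act on high-confidence outputs; refer the rest to a human
    rather than treating the model as infallible."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", label)
    return ("human_review", label)

print(route_prediction("fraudulent", 0.97))  # ('auto', 'fraudulent')
print(route_prediction("fraudulent", 0.55))  # ('human_review', 'fraudulent')
```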

  3. What steps can we take to avoid bias and discrimination in our use of AI?

The appropriate steps depend on the cause of the bias: for example, how variables are measured, labelled or aggregated. Establish clear, easy-to-understand criteria and lines of accountability for data labelling, and use multiple human labellers so that consistency can be checked.
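Consistency between labellers can be checked with a chance-corrected agreement statistic such as Cohen’s kappa. A minimal sketch for two labellers, using invented labels:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two labellers, corrected for chance agreement.
    1.0 means perfect agreement; 0.0 means no better than chance."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical labellers annotating the same eight items.
labeller_1 = ["spam", "spam", "ok", "ok", "spam", "ok", "ok", "spam"]
labeller_2 = ["spam", "ok",   "ok", "ok", "spam", "ok", "spam", "spam"]
print(cohens_kappa(labeller_1, labeller_2))  # 0.5
```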

  4. How can we comply with the data minimisation principle when developing an AI system?

Only process as much personal data as you need for your AI purpose. Understanding what is “adequate, relevant and limited to what is necessary” will be case-specific. Map areas of the AI system that need personal data and review needs periodically.
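In practice, data minimisation often comes down to an agreed, documented allow-list of fields, applied before data reaches the AI system. A minimal sketch with hypothetical field names:

```python
# Hypothetical raw record from an HR system.
raw_record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "home_address": "1 High Street",
    "years_experience": 7,
    "skills": ["python", "sql"],
    "interview_score": 4.2,
}

# The fields the model actually needs, agreed and documented in advance.
REQUIRED_FIELDS = {"years_experience", "skills", "interview_score"}

def minimise(record: dict) -> dict:
    """Keep only the fields needed for the stated purpose, rather than
    collecting everything 'just in case'."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

print(minimise(raw_record))
```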

  5. What counts as a solely automated decision with legal or similarly significant effects?

The UK GDPR restricts solely automated decisions that produce legal or similarly significant effects. A legal effect is something that affects a person’s legal rights. Similarly significant effects are more difficult to define, but examples include the automatic refusal of an online credit application and e-recruiting practices conducted without human input.

  6. Is using AI illegal?

Data protection law does not prohibit AI. Rather, it recommends a risk-based approach: assess the risks that your use of AI poses to people’s rights and freedoms.

  7. Do we need people’s permission to analyse their data with AI?

Whether you use AI or not, you need a lawful basis to process personal data. The most appropriate basis in any given situation will depend on your circumstances. 

The UK GDPR provides six lawful bases for processing personal data. One of these involves asking permission: “consent”. Consent will sometimes be the most appropriate lawful basis for using AI; in other circumstances, another basis will be more appropriate.

You will also need to tell individuals, in plain language, how you are collecting and using their personal data. There are limited exceptions to this rule.

  8. Do we need to publish an AI policy?

Data protection law does not explicitly regulate AI, so you are not required to publish an AI policy. However, you should address AI in your privacy notice where appropriate.

  9. Do we need to understand how a third-party-supplied AI system works?

When procuring a third-party-supplied AI system, be clear about who is the controller, joint controller or processor. Controllers will need to understand how the AI system works to ensure it complies with data protection principles.

If the AI system falls under joint control, all parties are responsible for compliance.