ICO blogs on meaningfulness of human involvement in AI systems

The use of Artificial Intelligence (AI) is becoming ever more common in everyday tasks, making things faster and easier for businesses. For example, AI systems can approve or reject a financial loan automatically, or they can assist recruitment teams in finding the best candidates to interview by ranking their job applications. But AI systems don’t always know best, despite the wealth of personal data they can process, so human input is essential. That’s why the GDPR places stricter conditions on AI systems that make solely-automated decisions: for human involvement to take a system outside those conditions, the human input must be ‘meaningful’.


It is this issue of ‘meaningful’ human input that the Information Commissioner’s Office (ICO) discusses in its latest blog, part of a series on the ICO’s work in developing a framework for auditing AI systems.


Guidance on human input in AI systems 


Article 22 of the GDPR states that ‘the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.’ Whilst AI systems that only enhance or support human decision-making are not subject to these conditions, it is not enough for a human simply to ‘rubber-stamp’ decisions made by AI. Instead, the human input into these decisions must be ‘meaningful’.


Both the ICO and the European Data Protection Board (EDPB) have published guidance on this issue, with the main takeaway being that an AI system’s recommendation must be checked by a human reviewer. This reviewer must consider all available data, weigh up their own interpretation alongside any additional factors, and, where necessary, use their authority to challenge the recommendation.


The risks of using complex AI systems 


In certain circumstances, humans also need to consider additional risk factors that could cause a system to become ‘solely-automated’, i.e. to operate without meaningful human input. In complex AI systems, the main risks are automation bias and a lack of interpretability. So what are they, and how can these concerns be addressed?


Automation bias happens when human users of AI systems stop using their own judgement and treat the computer-generated decision as totally objective and accurate because it is the result of complex mathematics. People often forget that computers are programmed by humans, so they are not entirely free from error or bias. If human users do not question the AI’s output, the system risks becoming solely-automated.


Automation bias can be reduced by creating design requirements that support meaningful human review. Those who develop AI systems should consider how best to integrate human review early in the design process, and front-end interface developers need to anticipate how human reviewers think so that they have a genuine chance to intervene. It can also be helpful to test different options with human reviewers early in the build phase.


A lack of interpretability also occurs when human reviewers stop challenging recommendations made by an AI system, but this time because of the difficulty of interpreting and understanding what the AI system is actually recommending. This prevents a human from reviewing the output meaningfully, and so the decision becomes solely-automated.


Again, this is an issue that needs to be considered during the early design phase, and organisations need to define and explain how they will measure the interpretability of their AI system. Options include providing an explanation of a specific output (rather than of the model in general), or attaching a confidence score to each output; a low score would flag up the need for further human input before a final decision is made.
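To illustrate the confidence-score approach, here is a minimal sketch of how an output could be routed to a human reviewer when the model’s confidence falls below a threshold. The threshold value, function name and loan example are illustrative assumptions, not part of the ICO’s guidance.

```python
# Hypothetical sketch: flagging low-confidence AI outputs for human review.
# The 0.85 threshold is an assumed, illustrative value.

CONFIDENCE_THRESHOLD = 0.85  # below this, a human must review the decision


def route_decision(prediction: str, confidence: float) -> dict:
    """Attach a confidence score to each output and flag low-confidence
    results for further human input before a final decision is made."""
    return {
        "prediction": prediction,
        "confidence": confidence,
        "needs_human_review": confidence < CONFIDENCE_THRESHOLD,
    }


# Example: a loan rejection with weak confidence is flagged for a reviewer.
result = route_decision("reject", 0.62)
print(result["needs_human_review"])  # True: a human must check this output
```

In practice the threshold itself would need justification and periodic review, since setting it too low would quietly return the system to solely-automated decision-making.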


The main takeaway points


Regulation surrounding AI systems has become stricter since the introduction of the GDPR, and the ICO has already begun issuing advice on AI and decision making within organisations. First, it suggests that your organisation decides from the outset whether its AI systems will enhance human decision making or make solely-automated decisions. Board members or management will need to understand the risks of both before deciding, and to ensure that risk management policies are in place from the start.


In addition, the ICO recommends that human reviewers of AI systems be given the training they need to understand the mechanisms and limitations of AI, and how best to bring their own expertise to bear on the system’s outputs. Reviewers’ acceptance and rejection of AI outputs should also be monitored, so that their approaches can be analysed and adjusted to avoid the risks outlined above.
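The monitoring point above could be implemented as a simple audit of how often reviewers accept versus override the AI’s recommendations. The following is a minimal sketch under assumed names; a near-zero override rate may signal the ‘rubber-stamping’ the ICO warns against.

```python
# Hypothetical sketch: auditing reviewer behaviour for signs of automation bias.
from collections import Counter


def override_rate(review_log):
    """Given (ai_recommendation, human_final_decision) pairs, report how
    often reviewers accepted vs. overrode the AI output. A rate of
    overrides close to zero may indicate rubber-stamping."""
    counts = Counter(
        "accepted" if ai == human else "overridden"
        for ai, human in review_log
    )
    total = sum(counts.values())
    return {k: counts[k] / total for k in ("accepted", "overridden")}


log = [
    ("approve", "approve"),
    ("reject", "approve"),   # reviewer overrode the AI
    ("approve", "approve"),
    ("reject", "reject"),
]
print(override_rate(log))  # {'accepted': 0.75, 'overridden': 0.25}
```

What counts as a healthy override rate depends on the system and would itself need human judgement to interpret.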