Information Commissioner’s Office in UK Provides Advice on Explainable AI

Jun 04, 2020 | Srivats Shankar

The Information Commissioner's Office in the UK, together with the Alan Turing Institute, has prepared a set of best practices for explaining AI

The Information Commissioner’s Office (ICO) of the UK, in collaboration with the Alan Turing Institute, has developed guidance on how to explain the operation of AI systems that affect individuals. The ICO is an office dedicated to protecting “information rights” in the public interest, with functions prescribed under the Data Protection Act 2018 and the General Data Protection Regulation (GDPR).

The guidance acknowledges the growing use of AI and addresses best practices to assist with compliance with the various laws that affect information rights. It was developed in pursuance of a paper written by Professor Dame Wendy Hall and Jérôme Pesenti, who, noting the growth of AI within the UK, suggested the development of a framework for regulating AI.

The guidance is divided into three parts, with the first addressing “explaining AI” as a general concept. It provides some useful definitions, including what the guidance refers to as AI, and specifies the laws in reference to which it has been drafted, including the GDPR. It recommends that stakeholders who develop AI prepare written explanations of its operation, factoring in the following considerations: rationale, responsibility, data, fairness, safety, and impact.

The guidance also provides a practical guide on how to put into effect the principles and concepts set out in its first part. This includes an understanding of how to translate the priorities of developers into the relevant domain, use case, and individual impact. It places an emphasis on usable and easy-to-understand explanations.

Finally, the report suggests a checklist for organizations that identifies the positions within an organization that are able, within the development pipeline, to ensure compliance with the guidance. This includes the following steps:

  1. identifying people within the pipeline who have a responsibility to contribute towards explainable AI
  2. assigning different people the role of explaining the function of the AI
  3. recognizing that the business or individual deploying the AI is responsible for explaining its function, even if it is purchased from a third party

To supplement this, the guidance provides an example of applying it to an AI system that assists in making cancer diagnoses. This is one of the first documents prepared by a state entity on the explanation of AI, even though multiple organizations throughout the world have discussed AI with reference to its impact on the future of work and human-centric development.
