Information Commissioner’s Office in the UK Provides Advice on Explainable AI

Jun 04, 2020 | Srivats Shankar

The Information Commissioner's Office in the UK, along with the Alan Turing Institute, has prepared a set of best practices for explaining AI

The Information Commissioner’s Office (ICO) of the UK, in collaboration with the Alan Turing Institute, has developed guidance on how to explain the operation of AI systems that affect individuals. The ICO is an independent body dedicated to protecting information rights in the public interest, with functions prescribed under the Data Protection Act 2018 and the General Data Protection Regulation (GDPR).

The guidance was prepared in acknowledgment of the growing use of AI, setting out best practices to assist with compliance under the various laws that affect information rights. It was developed in pursuance of a paper by Professor Dame Wendy Hall and Jérôme Pesenti, who, noting the growth of AI within the UK, suggested the development of a framework for regulating AI.

The guidance is divided into three parts, with the first addressing “explaining AI” as a general concept. It provides some useful definitions, including what the guidance means by AI. Further, it specifies the laws in reference to which the guidance was drafted, including the GDPR. The guidance recommends that stakeholders who develop AI prepare written explanations of its operation, factoring in the following considerations: rationale, responsibility, data, fairness, safety, and impact.

The second part provides a practical guide on how to put into effect the principles and concepts set out in the first part. This includes understanding how to translate developers’ priorities into the relevant domain, use case, and individual impact. It places an emphasis on usable, easy-to-understand explanations.

Finally, the guidance suggests a checklist for organizations that identifies the roles within the development pipeline able to ensure compliance with the guidance. This includes the following steps:

  1. identifying people within the pipeline who are responsible for contributing towards explainable AI
  2. assigning different people the role of explaining the function of the AI
  3. recognizing that the business or individual applying the AI is responsible for explaining its function, even if it was purchased from a third party

To supplement this, the guidance provides an example of applying it to an AI system that assists in making cancer diagnoses. This is one of the first documents prepared by a state entity on the explanation of AI, even though multiple organizations throughout the world have discussed AI with reference to its impact on the future of work and human-centric development.
