The EU Guidelines on Trustworthy AI: Another Voluntary Framework

May 05, 2019 | Srivats Shankar

The EU has presented a set of guidelines for trustworthy AI

In April, a high-level expert group presented a framework for developing trustworthy AI. As a set of voluntary guidelines, like those many other states have developed, it highlights some of the most pressing concerns surrounding AI. However, the lack of specificity renders the guidelines somewhat ineffective for developing actionable legislative policy. As internal mechanisms go, they do offer insight into how different legal persons can actively take part in ensuring that their AI systems remain accountable and ethical.

The framework essentially rests on three principles, requiring that trustworthy AI be:

  1. lawful
  2. ethical
  3. robust

To achieve these ends, the guidelines recommend methods through which trustworthiness can be established through a constant process of verification. They highlight potential points of friction with justice, equality, and individual rights, and emphasize that there should be respect for humans and that AI should not become a mechanism through which discrimination creeps into society. The guidelines acknowledge that different principles may conflict with one another and that balancing them may be difficult. However, the broad goal of this part remains the balancing of human rights with the benefits of AI.

With regard to the requirements of an AI system, the guidelines highlight seven requirements as absolutely necessary for ensuring trustworthiness: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. Interestingly, the guidelines do not only discuss each of these points in detail, but also highlight how technical methods can be adopted to keep these requirements continuously implemented, through testing, explanation, quality assurance, and service indicators. These, together with non-technical methods, including regulation and codes of conduct, are cited as mechanisms to ensure that the requirements are carried out.

The guidelines then discuss how trustworthiness can be assessed, not only through external governance methods but also through internal mechanisms that individual corporations can adopt. The most interesting and potentially valuable contribution of the guidelines is the assessment list, which lays down a set of points against which an AI system can be checked to verify that it operates in a trustworthy manner. These include the protection of fundamental rights, human agency, and human oversight, among several other considerations. It forms a concise information flow that can be adopted within the workflow of any entity to verify the effectiveness of a particular technology in protecting rights.
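Purely as an illustration of how such an internal checklist might be operationalized, the sketch below (in Python) shows one hypothetical way an organization could track assessment items and flag those still needing review; the item names and structure are assumptions for illustration, not the official assessment list published in the guidelines.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentItem:
    """A single question from a hypothetical trustworthy-AI checklist."""
    question: str
    satisfied: bool = False
    evidence: str = ""

@dataclass
class TrustworthinessAssessment:
    """An internal checklist loosely inspired by the guidelines' assessment list."""
    items: list[AssessmentItem] = field(default_factory=list)

    def outstanding(self) -> list[AssessmentItem]:
        # Items not yet backed by evidence indicate where review is still needed.
        return [item for item in self.items if not item.satisfied]

# Example usage with illustrative (not official) checklist questions.
assessment = TrustworthinessAssessment(items=[
    AssessmentItem("Does the system respect fundamental rights?"),
    AssessmentItem("Is meaningful human oversight in place?"),
    AssessmentItem("Can decisions be explained to affected users?"),
])
assessment.items[1].satisfied = True
assessment.items[1].evidence = "Human-in-the-loop review for all automated decisions."

for item in assessment.outstanding():
    print("Needs review:", item.question)
```

Recording evidence alongside each item mirrors the verification-oriented spirit of the guidelines: the point is not a one-off sign-off, but a record that can be revisited as the system changes.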

This broadly touches on all the points highlighted by the guidelines. As a set of voluntary guidelines, their efficacy may never truly come to light. However, they do start an important policy dialogue that is currently in its infancy. As stated at the outset, the purpose of the guidelines is mainly to protect individual rights and remain human-centric, which they have done to an extent. Although more is required on the part of states, reconciling conflicting policies and existing developments remains an uphill battle. Without clarity on how AI will develop, crafting targeted policies remains a challenge, and any such policy may be rendered redundant in just a few years. Even so, if widely adopted, the guidelines could serve as a mechanism for ensuring a minimal degree of accountability as AI is deployed at scale.

