Google White Paper on Regulating AI

May 03, 2019 | Srivats Shankar

Google releases a white paper on regulating AI and identifies key concerns

Google recently laid out its outlook on AI and issues of governance. In a white paper published on AI governance, they highlight five areas that require greater clarity if stakeholders are to benefit from the opportunities AI presents. They note that sector-specific regulations already exist in a number of countries, but recommend that the areas highlighted in the report also be considered for an international framework, in order to mitigate a race to the bottom.

Broadly, they highlight the following areas as being of particular concern with reference to AI:

  1. Explainability Standards
  2. Fairness Appraisal
  3. Safety Considerations
  4. Human-AI Collaboration
  5. Liability Framework

According to them, explainability standards focus on understanding why a particular AI system operated in a certain way. By providing clear and concise information, in line with best practices, that can be acted upon, improved, and enhanced, there may be a better chance for all stakeholders to benefit equitably from AI. However, they also propose certain alternatives, including the reporting of errors by AI systems, offering an opportunity to appeal any decision made by a system, ethical hacking, and regular auditing.

They couple this with what they call Fairness Appraisal, which essentially attempts to address unfair biases within a system. They note that fairness is difficult to define, but that by framing it around existing diversity and equality laws, bias can be tracked by integrating certain tests into the development workflow.

This is coupled with safety considerations. Naturally, this remains a somewhat subjective criterion: depending on the type of AI, how safety can be guaranteed varies significantly. For example, a system that composes music from random notes would have a significantly different safety threshold than an autonomous vehicle. They highlight a number of criteria that need to be considered, including whether the system is performing its objectives correctly, whether training has taken place in the real world, whether the data set training the system is valid, and whether the training is exhaustive. They recommend extensive automated training to help ensure safety.

They also discuss human-AI collaboration, using the “human in the loop” standard, which is quite similar to the Singapore framework on regulating AI. They note that what an AI system and a human can each achieve varies significantly, and they strongly recommend better collaboration between humans and AI, in order to play to the strengths of each, provide flexibility, and ensure that greater opportunities remain for humans. They also highlight certain particularly sensitive areas where human involvement should be considered paramount: decisions that may affect someone’s life in a material manner, affect a pre-existing benefit, can be contested, or impinge on human rights.

Finally, with regard to a liability framework, they open by saying that legal personhood for AI is unnecessary, impractical, and immoral, and allows for potential abuse. It could essentially create a new legal entity that shields a third party from liability, which is not desirable under any circumstance. Although Google does not provide a definitive answer on this point, they do highlight that product liability, defective information, and strict liability can all be considered. However, any framework should not stifle innovation or restrict access to systems that would reasonably benefit society.

On the whole, the recommendations of the white paper are fairly apt. They highlight some of the most contested points of discussion relating to AI and legal systems. Although much work remains to be done beyond these five points, within these considerations Google takes a pragmatic view. They do not excessively skew the discussion in their favor; rather, the paper remains balanced throughout and considers opposing views openly. Time will tell whether these questions will have a bearing on the future of policy and artificial intelligence.

