OECD and a Recommendation for Trustworthy AI

May 26, 2019 | Srivats Shankar

With 42 countries coming together to agree on a single set of standards for AI, it might just be a landmark moment for international law and AI

The rapid growth of artificial intelligence (AI) has caught many countries off balance. Amid the recent debate over privacy, trust, and accountability, with particular concerns directed at some of the largest technology corporations in the world, questions about the regulation of technology have inevitably spilled into the conversation on AI. Many countries have considered national policies that facilitate the growth of AI while implementing principles that would either guide or bind developers of AI within their jurisdiction. Additionally, legislation focused on specific applications of AI, such as facial recognition, autonomous vehicles, and drones, has quickly gained popularity.

However, May 22, 2019, might prove to be a landmark in the development of AI and international law. At AI Policy, one of the questions we have taken it upon ourselves to address is the intersection of technology, law, and international policy. Without coordinated efforts from multiple states, some of the most far-reaching effects of AI may not be stemmed, to the detriment of humankind. On this date, the OECD adopted the “Recommendation of the Council on Artificial Intelligence”, a series of guidelines broadly divided into two parts: principles for the stewardship of trustworthy AI, and national policies and international cooperation.

The agreement may not have a significant impact in changing uses of AI that could affect livelihoods or human rights, as it is a non-binding recommendation made by virtue of Article 5(b) of the OECD Convention. However, it highlights the potential for discourse on AI and a future rooted in international law.

Section 1 of the Recommendation sets out the five principles identified by the OECD as necessary for the trustworthy development of AI, with states acting as its stewards. This echoes the idea of stewardship discussed in relation to the internet in the 1990s, albeit with a greater focus on states. Broadly, the guidelines call for human-centric AI, inclusive growth, transparency and explainability, security and safety, and, finally, accountability. Discussions along these lines echo similar principles articulated by the EU and Singapore. The idea of explainability is also not new; the recitals of the General Data Protection Regulation refer to it multiple times. It is, of course, met with a certain degree of resistance, since it could impinge on the intellectual property of a system's developer.

Section 2 focuses on how nations and their policies can foster this trustworthiness. Rather than being restrictive, the recommendations highlight investing in AI, facilitating its development, creating an enabling policy environment, training individuals, and encouraging international discourse on AI to build on the platform the recommendations establish.

With high-level discussions on AI under way in multiple countries, the set of recommendations adopted by the OECD is welcome. It reflects a degree of convergence among some of the most prominent discussions on AI in the world. Even though the recommendation has no binding effect, the very act of signing it signals the interest of nations in taking part in this dialogue, with the potential to spearhead and direct its outcomes.
