OECD and a Recommendation for Trustworthy AI

May 26, 2019 | by Srivats Shankar
With 42 countries coming together to agree on a single standard for AI, this may well be a landmark in international law and AI

The rapid growth of AI (Artificial Intelligence) has caught many countries off guard. Amid the recent debate surrounding privacy, trust, and accountability, with specific concerns arising from some of the largest technology corporations in the world, questions about the regulation of technology have inevitably seeped into the debate surrounding AI. Many countries have considered national policies that facilitate the growth of AI while implementing principles that would either guide or bind developers of AI within their jurisdictions. Additionally, legislation focused on specific applications of AI, such as facial recognition, autonomous vehicles, and drones, has quickly gained popularity.

However, May 22, 2019, may prove to be a landmark in the development of AI and international law. At AI Policy, one of the questions we have taken upon ourselves to address is the intersection of technology, law, and international policy. Without coordinated efforts from multiple states, some of the most far-reaching effects of AI may go unchecked, to the detriment of humankind. On this date, the OECD adopted the “Recommendation of the Council on AI”, a series of guidelines broadly divided into two parts: principles on stewardship and trustworthiness, and national policies and international cooperation.

The agreement may not have a significant impact on the use of AI where livelihoods or human rights are at stake, as it is a non-binding recommendation by virtue of Article 5(b) of the OECD Convention. It nonetheless highlights the potential for discourse on AI and a future rooted in international law.

Section 1 of the Recommendation sets out the five principles identified by the OECD as necessary for the trustworthy development of AI, with states acting as its stewards. This echoes the idea of stewardship discussed in relation to the internet in the 1990s, albeit with a greater focus on states. Broadly, the guidelines call for human-centric AI, inclusive growth, transparency and explainability, security and safety, and, finally, accountability. These principles echo similar ones discussed by the EU and Singapore. The idea of explainability is likewise not new: the recitals of the General Data Protection Regulation refer to it multiple times. It has, of course, met with a certain degree of resistance, since it could impinge on the intellectual property of a system's developer.

Section 2 focuses on how nations and their policies can foster this sense of trustworthiness. Rather than being restrictive, the guidelines encourage states to invest in AI, enable an ecosystem for its development, train individuals for the transition, and facilitate international discourse on AI to build upon the Recommendation.

With high-level discussions on AI underway in multiple countries, the set of recommendations proposed by the OECD is welcome. It reflects a degree of convergence among some of the most prominent AI policy discussions in the world. Even though the Recommendation has no binding effect, the very act of signing it signals nations' interest in taking part in this dialogue, with the potential to spearhead and direct its outcomes.