Opinion: Corporate AI Policies and Principles – Actually Making a Difference

May 22, 2019 | Srivats Shankar

Developing responsible AI is difficult – perhaps it is time big tech got serious about its own principles

The future of AI holds unparalleled opportunity. From a technological standpoint it marks the culmination of years of development. At the same time, like all great inventions, the potential for misuse remains high. It is unclear how, if at all, AI will be regulated, and an aggressive discussion is taking place about the morality, ethics, and legality of using AI in any given context. Individuals like Elon Musk and Bill Gates have argued for legislation that specifically pertains to the regulation of AI. Rahul Matthan, an advocate, argues that AI poses unique problems that existing laws cannot handle.

In the midst of all this, the corporations pioneering the technology and pushing it beyond its limits have recently come under significant scrutiny. Existing practices relating to privacy and the unethical propagation of information have led to a trust deficit. Although not all tech giants are built equal, organizations that operate in social media and exclusively on the internet have been the target of particular discussion.

With AI poised to revolutionize so many fields, there have been calls for holding technology accountable and maintaining transparency. For example, Google recently chose not to move ahead with Project Dragonfly, a search engine designed exclusively for China. Its initial goal was to meet the requirements of the Chinese government, which leaned heavily towards censorship and opacity. Due to criticism from civil rights groups and employees, however, the project was canceled. This represents a case where people took an interest in calling out a potentially unethical practice. The problem is that waiting for reactive action might not always work, and at times such practices may be obfuscated from public view. Recent opposition to companies developing AI for warfare has likewise reinvigorated the dialogue.

With the line between the benefits and social costs of AI narrowing, companies have taken an active role in defining their responsibilities with reference to AI and how they will take part in the technology's future. Usually expressed as sets of principles or through organizations dedicated to researching the benefits of AI, the dialogue is changing rapidly. Google, IBM, SAP, Facebook, Microsoft, and AT&T, to name a few, have developed their own principles, and organizations such as the Partnership on AI and FATE follow similar lines. These are by no means exhaustive; hundreds of other companies have their own AI policies.

The problem is: what do these principles actually mean?

Research is good, and it lays down aspirations for what the technology could mean. The problem is that the technology is almost here, and there are no mechanisms specifically designed to hold AI-oriented technology accountable. Existing legislation may meet certain requirements, but only in limited cases.

Without any legislation governing AI, the scope for applying the principles that each of these companies lays down remains limited. To that end, there might be a solution in self-regulation: a Memorandum for AI. Similar to a company's memorandum of association, this would lay down how a company plans to use AI. Companies would define the scope of their work and operate within that ambit. Like an objects clause, the Memorandum for AI could specify the scope of operation of the AI system; information on what type of AI would be developed, where it would be deployed, and for what purposes could be clearly enumerated. This could be coupled with information about the liability of the different stakeholders associated with the entity developing the AI, ensuring that in the event of any damage the affected parties have suitable recourse.

Going further, the principles these companies have consistently enumerated could be highlighted within the Memorandum itself, creating a basis in law to challenge the actions of any entity. Under specified conditions, states could even limit particular activities. Insight into the activities of different entities operating AI would allow a regulatory body to scrutinize the sensitive areas in which certain companies operate. This could, of course, be supplemented with an exhaustive set of articles. Together, these elements would create a holistic basis for identifying whether AI is being implemented honestly and consistently.
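To make the idea concrete, such a Memorandum could even be drafted in a machine-readable form that a regulator could parse and verify automatically. The sketch below is purely illustrative: the `MemorandumForAI` structure and all of its field names are hypothetical assumptions mirroring the elements discussed above (scope, type of AI, deployment, purposes, stakeholder liability), not an existing standard or filing format.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of a machine-readable "Memorandum for AI".
# Field names are assumptions for illustration only.

@dataclass
class Stakeholder:
    name: str
    role: str        # e.g. "developer", "operator", "distributor"
    liability: str   # e.g. "strict", "fault-based", "capped"

@dataclass
class MemorandumForAI:
    company: str
    objects_clause: str              # scope of operation, like an objects clause
    ai_types: List[str]              # e.g. ["computer vision", "NLP"]
    deployment_regions: List[str]    # where the system would be deployed
    purposes: List[str]              # enumerated permitted uses
    stakeholders: List[Stakeholder]  # who bears liability, and to what degree

    def permits(self, purpose: str) -> bool:
        """Check whether a proposed use falls within the declared ambit."""
        return purpose in self.purposes

# Example filing: a company whose declared scope is limited to road safety.
memo = MemorandumForAI(
    company="ExampleCo",
    objects_clause="Development of AI systems for road safety only",
    ai_types=["computer vision"],
    deployment_regions=["EU"],
    purposes=["road safety"],
    stakeholders=[Stakeholder("ExampleCo", "developer", "fault-based")],
)

print(memo.permits("road safety"))  # True: within the declared scope
print(memo.permits("warfare"))      # False: outside the declared scope
```

A structured filing like this would let a regulator, or indeed the public, check a company's activities against its own declared scope, which is precisely the kind of transparency the Memorandum is meant to provide.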

Naturally, as a caveat, any such regulation would require a procedural framework and specific criteria for determining which categories of developers the requirement applies to. Differentiation based on sector, financial capability, and market access could serve as guidelines. The success of any such endeavor, however, would largely depend on how far it permits or limits innovation.

The difference between this and a contractual set of clauses is that a contract operates only between the parties to it. The Memorandum would create a concrete basis for identifying and holding specific people accountable, and would not be limited to two parties: it would allow for class-action lawsuits in cases of breach. Naturally, the standing to bring a matter before a court would depend on the development of a jurisprudential line and existing common law. But it offers an interesting arrangement: give control of AI to the people developing it, while ensuring that the legal system works for all. A company that engages in technology that could be applied to warfare might need to enhance its liability and practical safeguards, while another entity developing AI to enhance road safety might be bound by a completely different threshold. The different levels of operation can thus find a basis in law. Right now, there is no clear understanding of where and to what AI might be applied. Access to that information through a Memorandum offers a unique opportunity for a dialogue grounded in publicly available information.
