Jun 05, 2019 | Srivats Shankar
In what may appear to be little more than an eyewash, especially from a country known for mass surveillance through AI, now comes a set of AI principles
The Beijing Academy of Artificial Intelligence has released its set of AI Principles. For a governmental organization, the development of a set of AI principles seems unusual, to say the least. The Principles were formulated under the "Beijing Zhiyuan Action Plan", which was officially launched in November 2018. The goal of the action plan is to encourage and establish a culture of innovation in the field of AI within China. As part of this mandate, measures concerning policy, safety, and legal regulation are being contemplated. The initiative aims to pursue these measures alongside the development of AI for economic and social development.
The action plan was established by the Ministry of Science and Technology along with the Beijing Municipal Government. The Academy itself is a developmental organization that shares ties with many universities in China and with corporations like Baidu and Xiaomi. Based on the information provided on its website (as translated), its goal is to promote innovation and disruptive breakthroughs, with the aim of becoming a leader in the field of AI by cultivating talent and innovation. It sets out four key tasks: to promote innovation and establish high-level joint initiatives, to foster the development of disruptive achievements, to gather high-level talent, and to create an innovative ecosystem for AI.
Like most such principles, it is unlikely that these would be binding in any capacity. They are nevertheless interesting, given that they were developed, and must be interpreted, in the context of China and its pursuit of dominance in the field of AI.
The Principles, known as the "Beijing AI Principles", open by stating that the development of AI concerns the future of society and that the principles are intended to support the building of a shared human community and to provide benefit to humankind.
The principles are divided into three sections.
The Principles recommend that any R&D involving AI should be carried out for the purpose of doing good, that is, to advance human civilization and sustainable development. Research should serve humanity and the human values of dignity, freedom, autonomy, and human rights. The use of AI should be responsible, with safeguards in place to control risks. This is accompanied by a requirement to be ethical, diverse, and inclusive, and to share the findings of research.
What the ambit of these rights and ethics is remains questionable. For example, the recent Uighur crisis in China, involving mass surveillance and the suppression of indigenous culture and literature, is generally understood by the international community to go against these freedoms, and arguably against the requirements of the UDHR as well. Yet the activities of the state continue to remain largely unchecked. As a result, such a broad reading of these terms may ultimately impose no significant constraint on the state.
With regard to the use of AI, the Principles make three recommendations: AI must be used wisely and properly, with informed consent, and with education and training. They essentially adopt the idea that users need knowledge and understanding of the technology in order to avoid its misuse and abuse. However, the vagueness of terms like "wisely" and "properly" creates significant uncertainty as to how the technology would actually be used.
With regard to governing AI, the Principles make five suggestions for authorities regulating the operation of AI. These include the optimization of employment, whereby the use of AI should be inclusive and should complement human activity. They further recommend that cooperation be developed in an interdisciplinary manner, that governance measures adapt to the requirements of different fields, and that society prepare for a long-term relationship with AI. Interestingly, they recommend that, as part of long-term planning, there should be a risk analysis of technologies such as augmented intelligence, artificial general intelligence, and superintelligence.
Some of these points, including the risk assessment and the considerations for future work, echo risks highlighted by multiple organizations.
Currently, the use of AI within China is increasing rapidly. The international community has taken note of the use of surveillance technology, including facial recognition, individual tracking, and universal identification, for the purpose of maintaining control over individual activity. The growth of the surveillance state enabled by AI cannot be overstated. Many believe that the current cultural purge and linguistic unification across China are largely enabled by the technological superiority the state has been developing over the last several decades.
The question now arises as to what these AI principles mean in this context. The "social credit" system adopted in China regulates the conduct of individuals, encouraging behavior such as exercise while lowering the scores of those who suffer from addiction or "excessive" videogame engagement. This fundamentally goes against individual autonomy: even a small decrease in score can make it virtually impossible to purchase airplane tickets, affects credit ratings, and renders everyday utilities increasingly difficult to access. Unlike the AI principles recently adopted by the OECD, there is a palpable tension over how, if at all, these principles would affect the daily lives of people. There is no doubt that China's use of AI needs to be subject to greater international scrutiny if individual rights are to be protected.