Singapore recently became one of the first countries to propose a Model AI (Artificial Intelligence) Governance Framework. The framework was originally presented during the World Economic Forum Annual Meeting and was subsequently opened for comment by the Personal Data Protection Commission of Singapore. It offers broad points on what governing AI could look like and stands as a stepping stone to the larger discourse on regulating AI. However, a few points lack suitable clarity and contour and could be elaborated upon.
The framework opens by laying down its objective as a voluntary framework that can be adopted by entities working on developing AI. It accepts that the nature of AI can vary significantly based on its use and its impact on people. Therefore, it states that whoever develops AI must decide whether, if at all, the AI would have a significant impact on the daily lives of people. Accordingly, a decision should be made on regulating the technology to whatever extent is necessary to ensure that its usage is responsible and accountable. The framework recommends a concerted effort to work towards guidelines like the OECD Privacy Principles.
The framework offers an understanding of why it has come into existence and argues that there should be explainable, transparent and fair usage of AI. This stems from the mainstream discourse around technology that has dominated headlines for the last several months. Whether the technology should be explainable and transparent is largely in conflict with the idea of protecting the intellectual developments of a particular entity. There was a similar discourse on this point during the development of the General Data Protection Regulation (GDPR), with the introduction of the "right to explanation". There was an initial discussion to include this right as an enforceable principle within the regulation. That would mean that where the technology produces an outcome that infringes on the rights of an individual, that individual would have a right to an explanation of why the technology produced that outcome. Today this is largely embodied in Recital 71. The provision is not per se enforceable, though each member state can implement its own policy on this point. This presented a major problem for technology-driven corporations and entities: although in some cases it would be possible to provide an explanation, most situations would require forensics spanning several months. An enormous amount of resources would be spent on providing explanations rather than on developing and improving existing technology. This remains a problem. Although the Singapore framework does not subject corporations to such a high degree of scrutiny, it does recommend that an internal governance framework be established to monitor potential infringement of rights. Additionally, there are mentions of fairness within these systems. Fairness can be established from a number of angles, including fairness in how the system conducts decision-making and whether it is equitable across different measures of diversity.
Fairness can also be established in terms of equality of access to a system, so that it is not relegated only to those who have the means of accessing it. This is coupled with a discussion of how solutions should be "human-centric". In our opinion, the idea of being human-centric should be at the very core of all technology-related policies, placing people at the center of the discourse. However, this is only half of the solution. The principles should be not only human-centric but also individual-centric, meaning that the technology would be used in a manner beneficial to each individual. Reference has already been made to the OECD Privacy Principles, which were originally formulated in 1980 and subsequently revised in 2013. A particularly important facet of the principles is the requirement that the individual be the center of the data discourse: they should have control over how their data is used and over whether they are subject to the technology at all. In particular, the seventh principle (Individual Participation) specifically sets out the rights an individual should have in the use of their data.
With these as the basis for the substantive guidelines, the framework proceeds to define a number of terms for the policy. The mission-critical definition, in our opinion, is that of AI, defined as "a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem-solving, perception, learning and planning. AI relies on AI algorithms to generate models. The most appropriate model(s) is/are selected and deployed in a production system." Defining AI as simulating human traits is highly problematic. Although there is no clear-cut definition of AI, we have on occasion discussed the difficulties of defining it. We regularly return to this term and attempt to understand what needs to be included, excluded, and improved upon. We do not have the space to elaborate on the entirety of defining AI, but you can take a look at what we have said on the subject. To summarize the point: a definition relying on human traits limits the ambit of what AI really is. There are technologies that go far beyond, or fall short of, what can be considered human levels of intelligence; yet they can undoubtedly be defined as AI. There are primarily three approaches that can be taken to understanding AI: a theoretical understanding, a subject-oriented understanding, and an application-oriented understanding. By basing it on theory, as the current definition does, we can establish a basis for what AI means. Individuals like John McCarthy believe that any intelligent functioning of computing systems can constitute AI; it need only involve the ability to provide intelligent solutions beyond simple computational functioning. Alternatively, a subject orientation of AI would mean orienting the definition to a given context.
For example, if a legislation or framework is developed for regulating autonomous vehicles, identifying AI in that context would be with reference to autonomous vehicles and how technology can be used to autonomously control a vehicle. Finally, an application-oriented understanding would look at how AI can be used as a means to achieve a particular end and how, based on certain learnt knowledge, it can succeed at that end.
This is followed by the actual model of governance, which addresses four points:
- internal governance structure and measures
- determining AI decision-making model
- operations management
- customer relationship management
The first point offers a solution on how entities can ensure, within their internal administration, that they effectively implement AI that is accountable and transparent. Most importantly, it highlights, as part of risk management, certain areas that need focus: the management of data, proper monitoring, and the transfer of knowledge wherever necessary. It also establishes a system for maintenance. One reading of this framework sees it as integrated within the existing employment hierarchy: it would simply imply an additional set of tasks within the workflow, which may actually benefit corporations in the long run. However, if it requires a separate governance system to be developed internally, it may prove highly burdensome. This would especially affect smaller entities that may not be able to carry out these due diligence mechanisms effectively. None of this excuses anyone from ensuring that the AI they develop is accountable and fair in its operation, but it does raise the question of feasibility and long-term efficiency, and whether it would be possible to implement these requirements across the board.
Undoubtedly, however, one of the most significant sections of the framework is its discussion of a decision-making model. It highlights a number of important points that need to be verified alongside risk assessment. When developing models for AI, wherein data is processed to form the basis on which an application conducts its operation, the framework highlights how humans can be kept in the picture at all stages of development. It even offers a testing framework through which the best model, with the least inherent bias and inefficiency, can be chosen.
This is taken further by a method of operations management, wherein a specific method of evaluating decision-making is proposed. The framework highlights the importance of cleaning data, developing a model based on algorithms, and finally choosing a model that brings the best data to the fore. It explains how data with minimal bias can be maintained, based on an understanding of how the data was produced and on keeping its quality consistent. It identifies two types of bias that arise within a system: selection bias and measurement bias. Selection bias arises from using data of a particular group to the exclusion of others, thereby weighting the system in favor of that group. This can arise from omission, wherein a certain group is excluded from the data set, whether on the basis of race, gender, ethnicity, class and so on. Stereotypes can also set in within the data, wherein a group represented by a particular set of data is overwhelmingly associated with a particular trait, skewing the system's perception in that direction. Measurement bias, by contrast, occurs when there is an intentional manipulation of data such that the outcome of the system is contaminated. The framework recommends constant review and the use of multiple data sets. This is no doubt one of the most important points for any development of AI and should become a necessary part of these systems.
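To make the idea of selection bias by omission concrete, the following is a minimal sketch, not drawn from the framework itself, of how a developer might check whether groups in a training data set are represented in line with their share of the relevant population. The data, the group labels, and the reference shares are all hypothetical.

```python
from collections import Counter

def representation_gaps(records, group_key, reference_shares):
    """Compare each group's share of the data set against its share of a
    reference population; a large negative gap suggests that group has
    been omitted or under-sampled (selection bias)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        gaps[group] = round(observed - expected, 4)
    return gaps

# Hypothetical data set: group "B" makes up half the population
# but only a fifth of the records collected.
data = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
gaps = representation_gaps(data, "group", {"A": 0.5, "B": 0.5})
print(gaps)  # group B sits roughly 30 points below its population share
```

A check of this kind is cheap to run whenever a data set is refreshed, which fits the framework's recommendation of constant review.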
This is coupled with a discussion of whether the outcomes of these systems can be explainable. The framework recommends that a system provide and produce consistent results in a given scenario, to ensure confidence in the system. It provides a number of tests, including whether results can be produced consistently, whether they are fair, how exceptions are handled, and whether changes are monitored over time. However, whether this accountability and explanation are to be monitored by a regulatory authority is still not clear. This somewhat conflicts with existing privacy discourse, since people's data would need to be analyzed and could potentially be exposed if a standard of protection is not maintained during monitoring. The framework even recommends traceability, which would imply consistent maintenance of logs. How data should be made accessible remains a serious question and would conflict with existing notions under the OECD Principles. The tests that have been mentioned are undoubtedly valuable, but greater clarity is needed on how systems are to be maintained. Additionally, since many of these systems are based on statistical models that may be retrained from time to time, whether their results can be replicated consistently is a difficult question; there would be regular changes and refinement. What matters more is the fairness of the system, rather than strict consistency. Each system has a margin of error, which may vary depending on how the system operates. For example, an algorithm that applies filters to an image would not need as high a standard of consistency as, say, an autonomous vehicle. The recommendation of maintaining an audit trail might also require clarity on whether such a process should happen internally or whether independent third-party reviewers need to be employed.
Additionally, clear conditions should be established as to when auditing needs to take place, as entities would be skeptical of entrusting their intellectual labor to a third-party corporation. Naturally, the recommendation of model tuning is a valuable one, and companies should strive to improve their usage of data.
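The twin ideas of repeatability and traceability discussed above can be sketched in a few lines of code. The toy "model" below is our own illustration, not anything prescribed by the framework: fixing a random seed makes a run reproducible, and hashing the inputs and outputs gives a minimal audit-trail entry that a later reviewer could verify without seeing the raw data.

```python
import hashlib
import json
import random

def train_and_predict(data, seed):
    """Toy stand-in for model training: a seeded shuffle followed by a
    threshold rule. Fixing the seed makes the run reproducible."""
    rng = random.Random(seed)
    sample = sorted(data)          # deterministic preprocessing
    rng.shuffle(sample)
    threshold = sum(sample) / len(sample)
    return [x >= threshold for x in sample]

def audit_entry(inputs, outputs, seed):
    """Minimal traceability record: hash the run so a later audit can
    confirm that a logged result is reproducible from the same inputs."""
    payload = json.dumps({"inputs": sorted(inputs),
                          "outputs": outputs,
                          "seed": seed})
    return hashlib.sha256(payload.encode()).hexdigest()

data = [3, 1, 4, 1, 5, 9, 2, 6]
run1 = train_and_predict(data, seed=42)
run2 = train_and_predict(data, seed=42)
assert run1 == run2                # same seed, same inputs, same outcome
log = audit_entry(data, run1, seed=42)
```

Real statistical models are rarely this deterministic, which is exactly why the section above argues that fairness, rather than strict consistency, is the more meaningful standard.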
Finally, the framework recommends better customer relationship management. Stronger systems of disclosure and transparency are recommended. However, the recommendations are highly general and cannot easily be replicated across the board, since the examples provided are highly contextual. Each entity would need to consider and evaluate how it can improve transparency; it is a largely subjective exercise, which would need to be handled by the relevant PR department of each entity. Some notable mentions include the review of decisions made by AI upon the production of additional data, which is essentially equivalent to a right of hearing. The recommendation against using AI for sensitive matters, where the interaction could potentially be frustrating, is also valuable. Coupled with the OECD Principles' ability to opt out, these establish a valuable framework that resembles a business model more than a legal framework. There is no doubt that this has great value and should be considered by entities developing AI, largely due to the benefits it offers. Together with a system of feedback and review, it is a fairly holistic customer-oriented framework.
Overall, the framework presents an important first step for any AI-oriented system. Naturally, there is plenty of room for improvement. The recommendations made are broad and do not take into account the specific nuances of particularly sensitive forms of AI, although it must be noted that the ambit of the framework does not allow for such specificity. Certain improvements, particularly to the definitions and some degree of clarity on internal governance structures and operations management, can go a long way in developing a framework that can easily be integrated within existing development work cycles. It establishes a promising future for AI, people, and society.