Facial Recognition Technology: Assessing the Spectrum of Regulatory Solutions – Part I

May 08, 2019 | Maathangi Hariharan

Assessing various solutions to ensure fairness in facial recognition

Recently, Amazon’s Rekognition, a facial recognition software, came under severe criticism from civil rights organizations such as the American Civil Liberties Union [‘ACLU’] for displaying gender and racial bias in recognizing human images. Researchers have found that the algorithm is biased and struggles to recognize women and dark-skinned faces. Simply put, the algorithm’s error rates in recognizing female faces and faces of colour are much higher than in recognizing male and white faces. MIT’s latest study on Rekognition revealed some alarming numbers – the algorithm has a 31.4% error rate on the darker female subgroup, a 0% error rate on the white male subgroup, and an 18.73% error rate on females overall. Compared to peers such as IBM and Face++, these are substantially high values. IBM, for instance, according to the same MIT study, shows a 16.97% error rate in detecting dark-skinned females, a 0.26% error rate in detecting white-skinned males, and a 9.36% error rate on females overall. The concerns surrounding Rekognition do not end here – the purposes for which Rekognition is being used are raising more alarm bells than before. As per recent media reports, it is being sold to law enforcement agencies such as Immigration and Customs Enforcement and police departments, which deploy it in their surveillance activities. Several members of Congress in the United States have also written to Jeff Bezos, expressing their concern over the sale of biased facial recognition software. Bezos has not only failed to respond sufficiently to previous letters from Congress, but has also pushed back against the studies finding gender and racial bias in Rekognition, arguing that detecting attributes such as facial expression and gender is distinct from recognizing faces, and that the software’s terms and conditions are sufficient to address these concerns.
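The figures cited above come from measuring error rates disaggregated by demographic subgroup rather than in aggregate. A minimal sketch of how such a disaggregated audit might be computed is below; the records, subgroup labels, and numbers are purely illustrative assumptions, not data from the MIT study itself.

```python
# Illustrative only: disaggregating classification error rates by subgroup,
# the kind of audit behind the figures cited above. All data are made up.
from collections import defaultdict

# Each record: (subgroup label, ground-truth gender, predicted gender)
predictions = [
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("lighter_female", "female", "female"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for subgroup, truth, predicted in predictions:
    totals[subgroup] += 1
    if predicted != truth:
        errors[subgroup] += 1

for subgroup in sorted(totals):
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: {rate:.1%} error rate over {totals[subgroup]} images")
```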

This is not the first time Rekognition has been criticised for displaying gender and racial bias in recognizing faces, nor the first time civil rights activists have raised the alarm. Prior to the MIT study, in 2018, the ACLU tested Rekognition by running photographs of every member of Congress against 28,000 publicly available mugshots. Rekognition produced 28 false matches, disproportionately of members of colour, revealing lower confidence rates than expected for the algorithm.

Racist and biased algorithms are not unique to Rekognition. Google, for instance, was forced to publicly apologize in 2015 after its facial recognition technology [‘FRT’] automatically identified and tagged African Americans as gorillas. Unfortunately, Google’s fix was simply to make Google Photos blind to primates entirely, rather than ensuring the error could not occur again.

The debate around gender and racial bias in FRT surfaces at a time when there is a global movement towards addressing concerns of algorithmic bias and fairness in machine learning. The increasing proliferation of intelligent machines across industries such as banking, transportation and healthcare, coupled with governments’ increasing use of intelligent machines primarily for surveillance, has raised numerous red flags. In regulating FRT, policymakers and technologists are thus left with two problems: the increased use of FRT for governmental surveillance, and race and gender bias in FRT that produces erroneous and inaccurate results.

Regulatory measures: Exploring solutions

There are two alternative solutions that one can take as the starting point for regulating FRT. These solutions unsurprisingly sit at opposite ends of the spectrum – technologists suggest intervention and regulation of FRT, while government actors have gone as far as a complete ban on its use. Microsoft strongly affirms the need for legislative intervention and regulation, while the city government of San Francisco has opted to ban FRT altogether, save for extremely limited purposes. Along this spectrum of solutions sits a third option, proposed by IBM: releasing curated, racially diverse datasets into the public domain as a solution to the ethical and racial biases inherent in FRT. In Part I, I shall look at the solutions presented by IBM and Microsoft respectively. In Part II, I shall look at the government response, particularly in the United States, to the increasing use of FRT.

IBM’s curated datasets

A different solution from the technologists’ end worth noting is IBM’s decision (one can also access the dataset here) to release specially curated, racially and ethnically diverse datasets into the public domain to address algorithmic bias. Diversity in facial image datasets is not limited simply to variation in skin tone, age, gender and ethnicity. It extends to more nuanced features such as facial symmetry and the ratios between facial features – eyes to nose, eyes to mouth, and so on.

One of the most important tasks presented to deep learning researchers is curating diverse datasets that are void of personal biases and influences. Such diverse datasets go a long way towards ensuring accuracy of prediction, reducing error rates and ensuring fairness. IBM has released two datasets in this regard. The first dataset, Diversity in Faces [DiF], contains a million images and is a ‘dataset of annotations’ created by weighing sample images from the publicly available Yahoo Flickr CC-100M dataset. According to IBM, the DiF dataset “provides a comprehensive set of annotations of intrinsic facial features that includes cranio-facial distances, areas and ratios, facial symmetry and contrast, skin color, age and gender predictions, subjective annotations, and pose and resolution,” thus focusing on diversity within intrinsic facial features.
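To make the idea of “intrinsic facial feature” annotations more concrete, the sketch below computes one hypothetical craniofacial ratio from 2D landmark coordinates. The landmark names, coordinates, and the specific ratio are illustrative assumptions, not IBM’s actual annotation scheme.

```python
# Hypothetical sketch of one kind of intrinsic facial feature annotation:
# a craniofacial ratio computed from 2D landmark coordinates.
# Landmark names and the chosen ratio are illustrative, not IBM's scheme.
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def interocular_to_face_height_ratio(landmarks):
    """Ratio of the distance between eye centres to chin-to-forehead height."""
    interocular = distance(landmarks["left_eye"], landmarks["right_eye"])
    face_height = distance(landmarks["chin"], landmarks["forehead"])
    return interocular / face_height

# Example landmark coordinates in pixels (made up).
example = {
    "left_eye": (120, 140),
    "right_eye": (180, 140),
    "chin": (150, 260),
    "forehead": (150, 80),
}
print(round(interocular_to_face_height_ratio(example), 3))
```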

A quick look at the history of facial datasets shows that DiF is the first dataset to contain 1 million images – facial datasets created until now have not had more than 700,000 images of faces [Labeled Faces in the Wild had 13,233 images; CelebA had 202,599 images]. The second dataset will contain 36,000 images, carefully curated so that it is equally diverse across factors such as race, colour, gender and age. The purpose of creating this dataset is to help researchers and engineers in the industry create bias-free algorithms at the design stage itself.
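One way to read “equally diverse across factors” is that the counts of images across combinations of attributes should be roughly uniform. The sketch below shows a rough balance check of that kind; the metadata fields and records are invented for illustration and do not describe IBM’s actual dataset format.

```python
# Rough sketch of checking whether a dataset is balanced across attributes.
# Metadata fields and records are invented for illustration.
from collections import Counter
from itertools import product

records = [
    {"gender": "female", "age_group": "18-30"},
    {"gender": "male", "age_group": "18-30"},
    {"gender": "female", "age_group": "31-50"},
    {"gender": "male", "age_group": "31-50"},
    {"gender": "female", "age_group": "51+"},
]

counts = Counter((r["gender"], r["age_group"]) for r in records)
genders = sorted({r["gender"] for r in records})
age_groups = sorted({r["age_group"] for r in records})

expected = len(records) / (len(genders) * len(age_groups))
for cell in product(genders, age_groups):
    observed = counts.get(cell, 0)
    print(f"{cell}: {observed} images (expected ~{expected:.1f} if perfectly balanced)")
```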

From the perspective of the intelligent machines market, intellectual property concerns and maintaining a competitive edge, the decision to introduce such datasets into the public domain is a clear one-off. IBM’s decision is a classic instance of self-regulation introduced from within the industry. This is undoubtedly beneficial, for it does not hamper the ability of the technology to grow, but merely ensures its responsible growth. Self-regulation, however, may prove insufficient simply because of the knowledge asymmetry between the consumer/data subject and the data aggregator, viz. the technology company.

Microsoft’s principles

Microsoft has, in a bid to remain ethically competitive, proposed the regulation of FRT, since mere self-regulation may be insufficient to address the growing ethical concerns. It is of the belief that ‘a commercial race to the bottom’ will not benefit society; rather, there is a need to build ‘a floor of responsibility’ to support healthy competition. Nadella and team have proposed a set of six principles that should govern how FRT is developed and used by technologists: in brief, fairness, accountability, transparency, non-discrimination, notice and consent, and lastly, lawful surveillance. Microsoft is hopeful that initial legislation incorporating these principles can be implemented and will serve to balance the commercial uses of FRT with social responsibility.

Microsoft has put forth certain legislative suggestions that merit attention. The first is to allow third-party testing of the software to determine racial and gender bias. Industry practice, according to Microsoft, does not at present permit this, leaving customers unaware of algorithmic bias. Secondly, it suggests human review of the algorithm’s findings. This has a twofold implication that must be borne in mind while deliberating upon legislation. The first is that while computers are often more accurate than humans, key decisions cannot blindly be left to automated decision-making; discretion and reconsideration are immensely important in such situations. Second, and more importantly, retaining qualified personnel as the ultimate reviewers of automated decisions reinforces the fact that humans will continue to retain supremacy. Retaining human supremacy in the development and use of AI systems has been recognized by researchers, academics and industry practitioners in deliberating upon, and endorsing, the Asilomar AI Principles. This argument thus gains importance in a much broader context when assessing the capabilities and potential of intelligent machines themselves.
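In practice, the human-review suggestion is often implemented as routing: automated matches are only acted upon above some confidence level, and everything else, or anything consequential, goes to a qualified reviewer. The sketch below illustrates that idea; the threshold value and the interface are assumptions made for illustration, not any vendor’s actual API or Microsoft’s specific proposal.

```python
# Hypothetical sketch of routing automated matches to human review.
# The threshold and the interface are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Match:
    subject_id: str
    confidence: float  # between 0.0 and 1.0

REVIEW_THRESHOLD = 0.99  # deliberately strict for consequential decisions

def route(match: Match, consequential: bool) -> str:
    """Send low-confidence or consequential matches to a human reviewer."""
    if consequential or match.confidence < REVIEW_THRESHOLD:
        return "send to human reviewer"
    return "accept automated result"

print(route(Match("subject-42", 0.87), consequential=True))
print(route(Match("subject-43", 0.995), consequential=False))
```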

The third suggestion concerns user privacy. Microsoft suggests that user consent and prior notice be enforced to ensure there is no breach of privacy. By prior notice, it means that the law enforcement agency or private entity, as the case may be, should convey clearly to customers by way of a visible notice that FRT is in use. The shortcoming of this suggestion is that it is not entirely foolproof: mere notice does not specify the purposes for which an individual’s facial image might be used, or whether it will automatically be stored in a database for future reference. These are valid concerns, as an individual’s privacy may still be compromised even if a conspicuous notice is provided. In terms of user consent, Microsoft suggests that users specifically agree to the use of FRT when it is used for online services or at department stores. The shortcomings of this proposal are, firstly, that on technical grounds the collection of residual noise is yet to be satisfactorily addressed by technology giants, and secondly, that the suggestion has no validity when FRT is used by law enforcement agencies. The inherent unequal bargaining power between law enforcement agencies and citizens obviates any chance of arguing in favour of individual privacy.

Drawing from the above criticism, Microsoft also suggests limiting government surveillance by requiring law enforcement agencies to obtain a court order before using FRT. However, Microsoft has failed to clarify, first, whether a court order must be obtained for every instance in which an agency wishes to use FRT, or whether, once obtained, it permits use indefinitely with no further regulation; in proposing this solution, Microsoft has not provided an unequivocal, practical mechanism. Second, whether the applicant for such a court order must specify the modalities and purposes for which the technology is to be used also remains unclarified. In sum, Microsoft suggests applying the Constitution to facial recognition, akin to the application of the Fourth Amendment to mobile location data as laid down in Carpenter v. US.
