Google Accused of Discriminatory Use of Machine Learning to Target Users on YouTube

Jun 23, 2020 | Srivats Shankar

Google has been accused of using artificial intelligence and machine learning to target content creators based on their race, gender, and identity

A complaint has been filed in the US District Court for the Northern District of California alleging the discriminatory use of machine learning to target content creators based on their race, gender, and identity. The case, Newman v. Google LLC, accuses Google of abusing artificial intelligence programs and algorithms to “digitally profile” creators who hold certain viewpoints. The Plaintiffs argue that, in order to limit their reach, YouTube has demonetized, restricted, blocked, suspended, and removed their videos from the platform. They further allege that creators are left to either accept these restrictions or refrain from posting videos on certain issues. In particular, they argue that videos containing references or abbreviations such as “BLM”, “KKK”, racial terms, and the names of individuals associated with law enforcement have been heavily limited.

The Plaintiffs argue that their content does not contain hate speech or any other material prohibited by the YouTube terms of service. They contend that the primary mechanism Google uses to achieve these goals is artificial intelligence that filters content, which the Plaintiffs characterize as “anticompetitive” and “unlawfully discriminatory”, limiting the number of views the content could otherwise receive. Google counters these claims, maintaining that its algorithms are identity neutral and apply the same standards to everyone. The Plaintiffs respond that Google supplements the algorithmic filtering with manual review, and that the resulting content filtering is purely subjective.

The Plaintiffs also argue that Google has a “dysfunctional work environment” and that its “restricted mode” suppresses viewpoints based on bias, animus, and discrimination towards the viewpoints expressed by the Plaintiffs. They argue that harassment, threats, blacklisting, and discipline are meted out to any employee who does not comply. They also argue that this activity is carried out to promote content that advertisers prefer, particularly by targeting LGBTQ videos to reach specific demographics.

As of June 17, summonses have been issued to Google and its parent company Alphabet, requiring them to respond to the complaint.
