FTC Discusses Consumer Protection, AI and Algorithms

Apr 09, 2020 | Srivats Shankar

FTC Director of the Bureau of Consumer Protection discusses the implications of AI and algorithms

In a blog post, FTC Bureau of Consumer Protection Director Andrew Smith discusses the implications of artificial intelligence (AI) and algorithms for consumer protection. In the post, he noted the “potential for unfair or discriminatory outcomes” and the risk of perpetuating socioeconomic disparities. Citing the example of AI in health care, he pointed to a study of an AI system that funneled resources to healthier white patients to the detriment of sicker black patients. Acknowledging that past concerns over automated decision-making led to legislation such as the Fair Credit Reporting Act of 1970 and the Equal Credit Opportunity Act of 1974, he argued that it is now time to discuss how to improve AI. He focuses on five goals that he believes should guide the development of AI – that it be transparent, explainable, fair, empirically sound, and accountable.

His discussion is organized around these five goals, focusing on how product and service providers can put them into practice. First, ensuring transparency when collecting sensitive data, particularly visual and audio data. This includes explaining how AI is being applied to decision-making and whether data is being shared with third-party vendors. Second, explaining decisions to consumers. Although multiple factors may be at play, developers should strive to disclose what data was used and how a decision was reached, rather than dismissing user requests without clarification. Additionally, if scores are assigned based on certain factors, consumers should be given notice that those factors are being used to determine eligibility. Third, ensuring that decisions are fair: the use of AI should not implicitly discriminate against protected classes, and developers should consider the implications and outcomes of automated decisions. Fourth, ensuring empirical soundness, so that data is applied and models are developed without bias, with the ability to correct errors and with periodic review. Finally, ensuring accountability in decision-making by cross-verifying the applicability of the data set – including how representative the data set is, whether it accounts for biases, how accurate its predictions are, and whether reliance on it raises ethical concerns.
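As a concrete illustration of the kind of fairness audit the post gestures at, one common screen for disparate impact is the four-fifths (80%) rule: a group's rate of favorable outcomes should not fall below 80% of the most-favored group's rate. The sketch below is a minimal, hypothetical example of such a check; the function names, groups, and data are illustrative assumptions, not anything prescribed by the FTC post.

```python
# Hypothetical sketch of a disparate-impact screen (the four-fifths rule).
# Groups and decisions below are made-up illustrative data.

def selection_rates(outcomes):
    """Rate of favorable outcomes per group.

    outcomes: dict mapping group name -> list of 0/1 decisions
    (1 = favorable, e.g. credit approved).
    """
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag whether each group's selection rate is at least `threshold`
    times the highest group's rate (True = passes the screen)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best >= threshold) for g, r in rates.items()}

decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 = 0.75 approval rate
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],  # 3/8 = 0.375 approval rate
}
print(four_fifths_check(decisions))
# group_b's rate (0.375) is half of group_a's (0.75), so it fails the screen
```

A failed screen like this is not proof of unlawful discrimination, but it is the kind of periodic, empirical review of outcomes that the post's fairness and accountability goals contemplate.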

This discussion follows the FTC's 2016 report on “Big Data” and its 2018 hearing on AI, algorithms, and predictive analytics.
