Apr 09, 2020 | Srivats Shankar
FTC Director of the Bureau of Consumer Protection discusses the implications of AI and algorithms
In a blog post, FTC Bureau of Consumer Protection Director Andrew Smith discusses the implications of artificial intelligence (AI) and algorithms for consumer protection. In the post, he noted the “potential for unfair or discriminatory outcomes” that could perpetuate socioeconomic disparities. Citing the example of AI in health care, he pointed to a study indicating that an AI system funneled resources to healthier white patients to the detriment of sicker black patients. Acknowledging past concerns about automated decision-making, which led to the enactment of legislation like the Fair Credit Reporting Act of 1970 and the Equal Credit Opportunity Act of 1974, he argued it was now time to discuss how to improve AI. He focuses primarily on five goals that he believes should be achieved in the development of AI: it should be transparent, explainable, fair, empirically sound, and accountable.
His discussion is divided along these five goals, focusing on how product and service providers can put them into practice; they serve as guidelines on how to develop responsible AI. First, ensuring transparency in collecting sensitive data, particularly visual and audio data. This includes explaining how AI is being applied to decision-making and whether the data is being shared with third-party vendors. Second, explaining decisions to customers. Although at times there may be multiple factors at play, developers should strive to disclose what data was used and how the decision was made rather than simply dismissing user requests without providing any clarification. Additionally, if scores are assigned based on certain factors, customers should be given notice that those factors are being used to determine eligibility. Third, ensuring that decisions are fair: the use of AI should ensure that discrimination against protected classes is not implicitly taking place. Developers should consider the implications automated decisions have and what their outcomes are. Fourth, empirical soundness should be used to ensure that the application of data and the development of data models take place without bias, with the ability to correct errors and ensure that models are reviewed periodically. Finally, there should be accountability in decision-making, cross-verifying the applicability of the data sets: how representative the data set is, whether it accounts for biases, how accurate its predictions are, and whether reliance on it raises ethical concerns.
This discussion follows the FTC's 2016 report on “Big Data” and its 2018 hearing on AI, algorithms, and predictive analytics.