Bias and Discrimination in ML-Based Systems of Administrative Decision-Making and Support
In 2020, four social workers were heavily criticised for alleged wilful and gross negligence: back in 2013, they had failed to notice and report the risks to an eight-year-old boy's life posed by the violent abuse of his mother and her boyfriend, abuse that ultimately led to his death.[1] The 2020 documentary The Trials of Gabriel Fernandez[2] discussed the Allegheny Family Screening Tool (AFST), implemented by Allegheny County, US since 2016 to predict future involvement with the social services system. Rhema Vaithianathan, co-director of the Centre for Social Data Analytics, together with Emily Putnam-Hornstein and members of the Children's Data Network, built the screening tool, which integrates and analyses enormous amounts of data, housed in the DHS Data Warehouse, about the persons allegedly involved in harm to children. They considered it a possible remedy for the failures of overwhelmed manual administrative systems. However, like other applications of AI in the modern world, algorithmic decision-making and support systems in the public sector are also denounced for data and algorithmic bias.[3] The issue has been weighed up for the last few years but has not been put to rest yet.

This research is therefore a survey of the problem: bias and discrimination in AI-based administrative decision-making and support systems. First, I defined bias and discrimination and the blurred boundary between the two notions from a legal perspective, then detailed the causes of bias at each stage of AI system development, mainly the product of biased data sources and past human decisions, of social and political contexts, and of developers' ethics. In the same chapter, I presented the non-discrimination legal framework, including its application to and convergence with administrative law as regards automated decision-making and support systems, as well as the role of ethics and of regulations on personal data protection. In the next chapter, I outlined new proposals for potential solutions from both technical and legal perspectives. Regarding the former, my focus was fairness definitions and the options currently available to developers, such as toolkits, benchmark datasets, and debiased data. For the latter, I reported on the strategies and new proposals governing datasets and the development and implementation of AI systems in the near future.

*Trang Anh MAC, LLM in Digital Law, University of Paris XII Est-Créteil, reporter at AstraIA Gear. This paper is the English version of her master's thesis, written under the supervision of Dr. Laurie MARGUET and Prof. Florent MADELAINE.

[1] A. Reyes-Velarde, "Charges dismissed against social workers linked to Gabriel Fernandez's killing", Los Angeles Times, 16 July 2020, available online at https://www.latimes.com/california/story/2020-07-15/charges-against-the-social-workers-linked-to-gabriel-fernandez-killing-will-be-dropped

[2] The Trials of Gabriel Fernandez (2020), https://www.imdb.com/title/tt11822998/

[3] N. LaGrone, "Can AI Reduce Harm to Children? Gabriel Fernandez and the Case for Machine Learning", 9 April 2020, available online at https://www.azavea.com/blog/2020/04/09/can-ai-reduce-harm-to-children
| Year of publication: | [2023] |
|---|---|
| Authors: | MAC, Trang Anh |
| Publisher: | [S.l.] : SSRN |
| Subject: | Management information system; Decision; Discrimination; Bias |
freely available
Similar items by subject
- Ahsen, Mehmet Eren, (2019)
- Debiasing investors with decision support systems : an experimental investigation
  Bhandari, Gokul, (2008)
- Chen, Zhe, (2015)
- More ...
Similar items by person
- Regionalism : Is it Essential to the Advancement of International Arbitration in ASEAN?
  MAC, Trang Anh, (2020)
- More ...