Anti-discrimination Laws, AI, and Gender Bias: A Case Study in Non-mortgage Fintech Lending
Problem definition: We use a realistically large, publicly available dataset from a global fintech lender to simulate the impact of different anti-discrimination laws, and their corresponding data management and model-building regimes, on gender-based discrimination in the non-mortgage fintech lending setting.

Academic/Practical Relevance: Our paper extends the conceptual understanding of model-based discrimination from computer science to a realistic context that simulates the situations faced by fintech lenders in practice, where advanced machine learning (ML) techniques are applied to high-dimensional, feature-rich, highly multicollinear data. We provide technically and legally permissible approaches for firms to reduce discrimination under different anti-discrimination regimes while managing profitability.

Methodology: We train statistical and ML models on a large, realistically rich, publicly available dataset to simulate different anti-discrimination regimes and measure their impact on model discrimination, predictive quality, and firm profitability. We use ML explainability techniques to understand the drivers of ML discrimination.

Results: We find that regimes that prohibit the use of gender (like those in the United States) substantially increase discrimination and slightly decrease firm profitability. ML models are less discriminatory, of better predictive quality, and more profitable than traditional statistical models such as logistic regression. Unlike the omitted-variable bias that drives discrimination in statistical models, ML discrimination is driven by changes in the model-training procedure, including feature engineering and feature selection, when gender is excluded. Down-sampling the training data to rebalance gender, gender-aware hyperparameter tuning, and up-sampling the training data to rebalance gender all reduce discrimination, with varying trade-offs in predictive quality and firm profitability. Probabilistic gender proxy modeling (imputing applicant gender) further reduces discrimination, with negligible impact on predictive quality and a slight increase in firm profitability.

Managerial Implications: A rethink of anti-discrimination laws is required, specifically with respect to the collection and use of protected attributes for machine learning models. Firms should be able to collect protected attributes to, at minimum, measure discrimination and, ideally, take steps to reduce it. Increased data access should come with greater accountability for firms.
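The down-sampling intervention described in the Results can be sketched as follows. This is a minimal, illustrative example, not the authors' implementation: the `gender` column name and the toy records are hypothetical, and the function simply samples every group down to the size of the smallest group so each gender contributes equally many rows to the training data.

```python
import random

def downsample_by_group(rows, group_key, seed=0):
    """Down-sample so every group (e.g. gender) is equally represented.

    rows: list of dicts representing training records.
    group_key: the protected attribute to rebalance on.
    Returns a new shuffled list in which each group has been sampled
    down to the size of the smallest group.
    """
    rng = random.Random(seed)
    groups = {}
    for row in rows:
        groups.setdefault(row[group_key], []).append(row)
    n_min = min(len(g) for g in groups.values())
    balanced = []
    for group_rows in groups.values():
        balanced.extend(rng.sample(group_rows, n_min))
    rng.shuffle(balanced)
    return balanced

# Hypothetical toy data: 3 female and 5 male applicants.
data = ([{"gender": "F", "x": i} for i in range(3)]
        + [{"gender": "M", "x": i} for i in range(5)])
balanced = downsample_by_group(data, "gender")
# Each gender now contributes 3 rows, 6 in total.
```

Up-sampling to rebalance gender, also examined in the paper, would be the mirror image: sampling the smaller group with replacement up to the size of the largest group instead of discarding rows from the larger one.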
Year of publication: 2021
Authors: Kelley, Stephanie; Ovchinnikov, Anton
Publisher: [S.l.]: SSRN
freely available
Extent: 1 online resource (44 p.)
Type of publication: Book / Working Paper
Language: English
Notes: According to information from SSRN, the original version of the document was created September 27, 2021
Other identifiers: 10.2139/ssrn.3719577 [DOI]
Source: ECONIS - Online Catalogue of the ZBW
Persistent link: https://www.econbiz.de/10013224067
Similar items by person
- Kelley, Stephanie (2022)
- Jenkin, Tracy (2023)
- Kelley, Stephanie (2023)
- More ...