Proceedings of the Conference on Fairness, Accountability, and Transparency 2019
DOI: 10.1145/3287560.3287589

A comparative study of fairness-enhancing interventions in machine learning

Abstract: Computers are increasingly used to make decisions that have significant impact in people's lives. Often, these predictions can affect different population subgroups disproportionately. As a result, the issue of fairness has received much recent interest, and a number of fairness-enhanced classifiers and predictors have appeared in the literature. This paper seeks to study the following questions: how do these different techniques fundamentally compare to one another, and what accounts for the differences? Spec…

Cited by 404 publications (270 citation statements). References 19 publications.
“…Although being aware of the drawbacks of fairness metrics like statistical parity, as these have been widely discussed in the literature [10], [23], we wanted to perform a not so common analysis of fairness that allowed us to compare the unfairness found in the predictions made by an ML model to that found in the data used to train that model. Nevertheless, our experiments have shown the brittleness of these metrics, as even those which were expected to show similar behaviours, such as CVS and DI, sometimes presented contradictory results [16]. This gives further confirmation that there is still room for improvement and progress when it comes to defining new fairness metrics.…”
Section: Discussion and Recommendations (supporting)
confidence: 66%
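
The contradictory behaviour of CVS and DI noted in this quote is easy to reproduce: the two metrics compare the same group-wise positive-prediction rates, once as a difference and once as a ratio, so at low base rates a small absolute gap can coexist with a severe ratio. Below is a minimal Python sketch of that effect; the cv_score and disparate_impact helpers and the 5%/2% rates are illustrative assumptions, not code from the cited papers.

```python
import numpy as np

def cv_score(y_pred, group):
    """Calders-Verwer score: difference in positive-prediction rates between
    the privileged (group == 1) and unprivileged (group == 0) groups."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def disparate_impact(y_pred, group):
    """Disparate impact: ratio of unprivileged to privileged positive-prediction
    rates; the 'four-fifths rule' flags values below 0.8."""
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

# Toy predictions with low base rates: 5% positive predictions for the
# privileged group, 2% for the unprivileged group.
rng = np.random.default_rng(0)
group = np.repeat([1, 0], 10_000)
y_pred = np.concatenate([
    rng.random(10_000) < 0.05,  # privileged
    rng.random(10_000) < 0.02,  # unprivileged
]).astype(int)

print(f"CVS = {cv_score(y_pred, group):.3f}")          # ~0.03: looks mild
print(f"DI  = {disparate_impact(y_pred, group):.2f}")  # ~0.40: looks severe
```

The two readings pull in opposite directions on the same predictions, which is exactly the kind of disagreement the quote describes.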
“…This gives further confirmation that there is still room for improvement and progress when it comes to defining new fairness metrics. Especially in imbalanced scenarios, we believe that adopting more recently proposed fairness metrics based on group-conditioned performance [16] might be a small but crucial step towards achieving this goal. There is also room for improvement when it comes to the definition of individual fairness metrics, which seek to treat similar individuals in a similar way [27].…”
Section: Discussion and Recommendations (mentioning)
confidence: 99%
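
As a rough illustration of the group-conditioned performance metrics the quote refers to, one can condition an ordinary performance measure such as the true-positive rate on group membership and report the spread. This is only a sketch under assumed names (group_conditioned_tpr, tpr_gap), not the definitions from [16].

```python
import numpy as np

def group_conditioned_tpr(y_true, y_pred, group):
    """True-positive rate computed separately for each group; assumes every
    group contains at least one true positive."""
    return {g: y_pred[(group == g) & (y_true == 1)].mean()
            for g in np.unique(group)}

def tpr_gap(y_true, y_pred, group):
    """Worst-case spread in group-conditioned TPR (0 means parity)."""
    tprs = group_conditioned_tpr(y_true, y_pred, group)
    return max(tprs.values()) - min(tprs.values())
```

Unlike statistical parity, such metrics stay informative when the positive class is rare, which is why they are attractive in the imbalanced scenarios the quote mentions.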
“…Automatically detecting whether an ML model is biased against certain subgroups often involves segmenting a test set into subgroups and measuring model performance on each to identify the subgroups on which the model underperforms [14]. A prerequisite for conducting this bias analysis is having the group membership of the input data.…”
Section: Availability of Group Membership of Input Data (mentioning)
confidence: 99%
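
A minimal version of that segment-and-measure audit might look as follows; the subgroup_accuracy helper is hypothetical rather than an API from [14], and it assumes group membership is recorded for every test example, which is exactly the prerequisite the quote points out.

```python
import numpy as np

def subgroup_accuracy(y_true, y_pred, group):
    """Segment a test set by group membership, measure accuracy on each
    subgroup, and flag the subgroup the model underperforms on."""
    accs = {g: (y_pred[group == g] == y_true[group == g]).mean()
            for g in np.unique(group)}
    worst = min(accs, key=accs.get)
    return accs, worst
```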
“…Similar to prior work on disparate impact [Feldman et al., 2015; Chouldechova, 2017], there is a need to re-balance the distribution of features conditioned on sensitive attributes. Our kernel matching technique deals with covariate shift [Gretton et al., 2009; Cortes et al., 2008], and has been used in domain adaptation (see e.g. [Mansour et al., 2009]) and counterfactual analysis (e.g.…”
Section: Introduction (mentioning)
confidence: 99%
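
Kernel mean matching itself requires solving a quadratic program; a simpler, widely used stand-in for the same covariate-shift correction estimates the density ratio p_target(x) / p_source(x) with a probabilistic classifier and reweights the source sample. The sketch below is that classifier-based substitute, not the authors' kernel matching technique.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_source, X_target):
    """Estimate density-ratio weights w(x) ~ p_target(x) / p_source(x) by
    training a classifier to separate source (label 0) from target (label 1)
    examples, then converting its probabilities to odds."""
    X = np.vstack([X_source, X_target])
    z = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    clf = LogisticRegression(max_iter=1000).fit(X, z)
    p = clf.predict_proba(X_source)[:, 1]
    # p / (1 - p) approximates p_target / p_source up to the sample-size ratio.
    return (p / (1 - p)) * (len(X_source) / len(X_target))
```

Weighting training losses by these estimates re-balances the feature distribution conditioned on sensitive attributes, in the spirit the quote describes.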