2022
DOI: 10.48550/arxiv.2202.01034
Preprint

Maintaining fairness across distribution shift: do we have viable solutions for real-world applications?

Abstract: Fairness and robustness are often considered orthogonal dimensions when evaluating machine learning models. However, recent work has revealed interactions between fairness and robustness, showing that fairness properties are not necessarily maintained under distribution shift. In healthcare settings, this can result in, for example, a model that performs fairly according to a selected metric in "hospital A" showing unfairness when deployed in "hospital B". While a nascent field has emerged to develop provable fair a…
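To make the abstract's "hospital A / hospital B" scenario concrete, here is a minimal, hypothetical sketch (not code from the paper): a fixed decision threshold on a risk score satisfies demographic parity in a source hospital but violates it in a target hospital where one group's score distribution has shifted. The score distributions, threshold, and group setup are all illustrative assumptions.

```python
# Hypothetical illustration (not from the paper): a fixed risk-score threshold
# that satisfies demographic parity in "hospital A" but violates it in
# "hospital B" after one group's score distribution shifts.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
threshold = 0.5  # fixed decision rule shared by both hospitals

def positive_rates(scores_g0, scores_g1):
    """Fraction of positive decisions per group under the shared threshold."""
    return (scores_g0 > threshold).mean(), (scores_g1 > threshold).mean()

# Hospital A (source): both groups draw scores from the same distribution.
a_g0 = rng.normal(0.5, 0.1, n)
a_g1 = rng.normal(0.5, 0.1, n)

# Hospital B (target): group 1's score distribution has shifted downward.
b_g0 = rng.normal(0.5, 0.1, n)
b_g1 = rng.normal(0.4, 0.1, n)

for name, (g0, g1) in {"A": (a_g0, a_g1), "B": (b_g0, b_g1)}.items():
    r0, r1 = positive_rates(g0, g1)
    print(f"Hospital {name}: demographic parity gap = {abs(r0 - r1):.3f}")
# Hospital A: gap ~ 0.00 (fair by this metric); Hospital B: gap ~ 0.34.
```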

Cited by 8 publications (8 citation statements) | References 41 publications
“…Gianfrancesco et al (2018) performed a similar analysis for models operating on electronic health records. When evaluating machine learning systems in terms of certain fairness criteria, it is important to keep in mind that ensuring fairness in a source domain does not guarantee fairness in a different target domain under significant distribution shifts (Schrouff et al, 2022). Last but not least, there are multiple definitions of fairness in the recent literature, and different fairness metrics are often at odds with each other, as noted by Ricci Lara et al (2022).…”
Section: Histopathology
Citation type: mentioning (confidence: 99%)
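As a concrete illustration of the point attributed to Ricci Lara et al (2022), the toy example below (entirely hypothetical, not from any of the cited papers) shows predictions that satisfy one fairness definition while violating another:

```python
# Hypothetical toy example: the same predictions satisfy demographic parity
# exactly while violating equal opportunity.
import numpy as np

group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])

for g in (0, 1):
    m = group == g
    rate = y_pred[m].mean()                 # selection rate
    tpr = y_pred[m & (y_true == 1)].mean()  # true positive rate
    print(f"group {g}: selection rate = {rate:.2f}, TPR = {tpr:.2f}")
# group 0: selection rate = 0.50, TPR = 0.50
# group 1: selection rate = 0.50, TPR = 1.00
# Demographic parity holds (equal selection rates) while equal opportunity
# fails (TPR gap of 0.5), so the two criteria disagree on the same model.
```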
“…The degraded performance of fair models under distribution shift can trigger new bias and discrimination issues. Several works [Rezaei et al, 2021; Schrouff et al, 2022; Singh et al, 2021; Giguere et al, 2022] aim to preserve fairness under various distribution shifts. For example, Rezaei et al [2021] explore fairness under covariate shift, where the input distribution changes while the conditional label distribution remains the same.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…Schrouff et al [163] evaluate how realistic the assumptions made by Singh et al [162] and other works [80, 164] are. They group distribution shifts into four categories: demographic shift, covariate shift, label shift, and compound shift, independently of the causal graph considered.…”
Section: Fairness Under Distribution Shifts
Citation type: mentioning (confidence: 99%)
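For reference, these categories are commonly formalized as follows; covariate and label shift use the standard textbook definitions, while the demographic and compound cases are plausible readings inferred from their names rather than definitions quoted from Schrouff et al [163]:

```latex
% Source domain $s$, target domain $t$; inputs $X$, label $Y$, sensitive attribute $A$.
\begin{align*}
\text{Demographic shift:} \quad & P_s(A) \neq P_t(A), \quad P_s(X, Y \mid A) = P_t(X, Y \mid A) \\
\text{Covariate shift:}   \quad & P_s(X) \neq P_t(X), \quad P_s(Y \mid X) = P_t(Y \mid X) \\
\text{Label shift:}       \quad & P_s(Y) \neq P_t(Y), \quad P_s(X \mid Y) = P_t(X \mid Y) \\
\text{Compound shift:}    \quad & \text{two or more of the above occur simultaneously.}
\end{align*}
```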