2020
DOI: 10.1007/978-3-030-61166-8_20

Risk of Training Diagnostic Algorithms on Data with Demographic Bias

Cited by 26 publications (20 citation statements)
References 25 publications
“…Indeed, the first issue one encounters is that a large number of candidate measures exist. One can for instance evaluate fairness by comparing standard ML performance metrics across different sub-groups, such as accuracy 10,[12][13][14][15][16] , or AUC ROC (the area under the receiver operating characteristic curve) [8][9][10][14][15][16][17][18][19][20][21][22] , among others. Alternatively, one can choose to employ one of the (no less than ten) different fairness-specific criteria formulated by the community 23 in order to audit the presence of bias in a given model 16,18 .…”
Section: What Does It Mean For An Algorithm To Be Fair? (mentioning)
confidence: 99%
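The quoted passage describes auditing fairness by comparing standard performance metrics, such as accuracy or AUC ROC, across demographic sub-groups. A minimal sketch of such a per-group audit is given below; the synthetic data, the binary group attribute, and the 0.5 decision threshold are illustrative assumptions, not taken from the cited works.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Illustrative per-group audit: compare accuracy and AUC ROC across
# sub-groups defined by a (hypothetical) binary demographic attribute.
rng = np.random.default_rng(0)
n = 1_000
group = rng.integers(0, 2, size=n)              # hypothetical demographic label
y_true = rng.integers(0, 2, size=n)             # ground-truth diagnosis
noise = np.where(group == 0, 0.2, 0.45)         # model scores are noisier on group 1
y_score = np.clip(y_true + rng.normal(0.0, noise), 0.0, 1.0)
y_pred = (y_score >= 0.5).astype(int)           # illustrative 0.5 threshold

for g in (0, 1):
    m = group == g
    print(f"group {g}: "
          f"accuracy={accuracy_score(y_true[m], y_pred[m]):.3f}, "
          f"AUC={roc_auc_score(y_true[m], y_score[m]):.3f}")
```

A large gap between the groups on either metric is the kind of demographic disparity that the fairness-specific criteria surveyed in the quote are designed to flag.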
“…Several studies in recent years have proposed solutions to mitigate bias and develop fairer algorithms 10,11,[14][15][16][17]19,20,24 . There are three main stages at which bias mitigation strategies can be adopted 11 : before, during and after training.…”
Section: Bias Mitigation Strategies (mentioning)
confidence: 99%
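The statement distinguishes mitigation before, during and after training. As a hedged illustration of the first (pre-processing) stage only, the sketch below reweights training samples so that every (group, label) cell carries equal total weight before fitting an ordinary classifier; the data, the `reweigh` helper and the logistic-regression model are placeholders and need not match any of the cited mitigation methods.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweigh(group, y):
    """Pre-processing mitigation: equalise the total weight of each (group, label) cell."""
    w = np.ones(len(y), dtype=float)
    for g in np.unique(group):
        for c in np.unique(y):
            cell = (group == g) & (y == c)
            if cell.any():
                w[cell] = 1.0 / cell.sum()      # rarer cells get larger per-sample weight
    return w * len(y) / w.sum()                 # renormalise so the mean weight is 1

# Hypothetical training data with an under-represented group.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
group = (rng.random(500) < 0.1).astype(int)     # group 1 is roughly 10% of the data
y = (X[:, 0] + 0.5 * group + rng.normal(size=500) > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
clf.fit(X, y, sample_weight=reweigh(group, y))  # standard classifier, reweighted data
```

In-processing alternatives instead add a fairness term to the training objective, and post-processing methods adjust predictions or decision thresholds per group after training.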
“…Indeed, a good overall prediction performance can be achieved despite a poor performance on a minority group. Ensuring that a predictor performs well for all subpopulations reduces sensitivity to potential shifts in demographics and is essential to ensure fairness [35]. For instance, there is a risk that machine-learning analysis of dermoscopic images under-diagnoses malignant moles on skin tones that are typically under-represented in the training set [56].…”
Section: Other Approaches To Dataset Shift (mentioning)
confidence: 99%
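The quoted point is that a strong aggregate metric can coexist with poor minority-group performance, which also makes the aggregate sensitive to demographic shift. A back-of-the-envelope illustration, with invented per-group accuracies:

```python
# Invented per-group accuracies for illustration only.
acc = {"majority": 0.95, "minority": 0.70}

def overall_accuracy(minority_share):
    return (1 - minority_share) * acc["majority"] + minority_share * acc["minority"]

print(overall_accuracy(0.05))   # ~0.94: the aggregate looks fine at a 5% minority share
print(overall_accuracy(0.40))   # ~0.85: the same model degrades if demographics shift
```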
“…[Oakden-Rayner et al., 2020; Gianfrancesco et al., 2018; Barocas et al., 2019; Abbasi-Sureshjani et al., 2020; Cirillo et al., 2020].…”
Section: Preferential Sample Selection: A Common Source Of Shift (mentioning)
confidence: 99%
“…Indeed, a good overall prediction performance can be achieved despite a poor performance on a minority group. Ensuring that a predictor performs well for all subpopulations reduces sensitivity to potential shifts in demographics and is essential to ensure fairness [Abbasi-Sureshjani et al., 2020]. For instance, there is a risk that machine-learning analysis of dermoscopic images under-diagnoses malignant moles on skin tones that are typically under-represented in the training set [Adamson and Smith, 2018].…”
Section: Performance Heterogeneity and Fairness (mentioning)
confidence: 99%