2021
DOI: 10.48550/arxiv.2103.04243
Preprint

Estimating and Improving Fairness with Adversarial Learning

Xiaoxiao Li,
Ziteng Cui,
Yifan Wu
et al.

Abstract: Fairness and accountability are two essential pillars for trustworthy Artificial Intelligence (AI) in healthcare. However, existing AI models may be biased in their decision making. To tackle this issue, we propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system. Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.…
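The abstract describes a three-part design: a base classifier, a discrimination module that works against bias, and a critical module that predicts unfairness. Below is a minimal PyTorch sketch of how such an adversarial multi-task model could be wired; the layer sizes, module names, and loss weighting are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F  # used in the commented objective below

class AdversarialMultiTaskModel(nn.Module):
    """Illustrative sketch (assumed details): a shared encoder feeds
    (a) the base classifier, (b) a discrimination head that tries to
    predict the protected attribute, and (c) a critic head that
    predicts an unfairness score for the input."""

    def __init__(self, in_dim=2048, feat_dim=256, n_classes=2, n_groups=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)    # base diagnosis task
        self.discriminator = nn.Linear(feat_dim, n_groups)  # bias mitigation head
        self.critic = nn.Linear(feat_dim, 1)                # bias detection head

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.discriminator(z), self.critic(z)

# Adversarial multi-task objective (illustrative): the encoder minimises
# the task loss while maximising the discriminator's loss, so the learned
# features carry little information about the protected group.
#   logits, grp_logits, unfair = model(features)
#   loss = F.cross_entropy(logits, labels) \
#          - lambda_adv * F.cross_entropy(grp_logits, groups) \
#          + F.mse_loss(unfair.squeeze(1), unfairness_targets)
```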

Cited by 10 publications (24 citation statements) | References 25 publications
“…8. Application of counter-measures: If biases are identified during any stage of development or testing of AI, mitigation measures should be investigated and evaluated, including (1) pre-processing approaches to improve the training dataset through re-sampling (under- or over-sampling), data augmentation (image synthesis using adversarial learning) or sample weighting to neutralise discriminatory effects; (2) in-processing approaches that modify the learning algorithm in order to remove discrimination during the model training process, such as by adding explicit constraints in the loss functions to minimise the performance difference between subgroups of individuals (e.g., learning bias-free representations via adversarial loss [96]); and (3) post-processing approaches to correct the outputs of the AI algorithm depending on the individual's group, such as by using the equalised odds post-processing technique [130]. All these techniques should be thoroughly evaluated to ensure their positive impact on fairness.…”
Section: Transparency of Fairness (citation type: mentioning)
confidence: 99%
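The "sample weighting" option under (1) in the statement above has a standard concrete instance: Kamiran-Calders reweighing, which gives each sample the weight w(g, y) = P(g)P(y) / P(g, y) so that group membership and label become statistically independent in the weighted training set. A minimal sketch, with an interface of my own choosing:

```python
import numpy as np

def reweighing_weights(groups, labels):
    """Kamiran-Calders reweighing: w(g, y) = P(g) * P(y) / P(g, y).
    Over-weights (group, label) cells that are rarer than independence
    would predict, neutralising the group-label correlation."""
    groups, labels = np.asarray(groups), np.asarray(labels)
    w = np.ones(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            cell = (groups == g) & (labels == y)
            if cell.any():
                # expected cell mass under independence / observed cell mass
                w[cell] = (groups == g).mean() * (labels == y).mean() / cell.mean()
    return w

# Example: reweighing_weights([0, 0, 0, 1, 1, 1], [1, 1, 1, 0, 0, 1])
# gives the lone (group=1, label=1) sample weight 2.0, since that cell
# holds 1/6 of the data but independence predicts 1/2 * 2/3 = 1/3.
```

The resulting weights can be passed as `sample_weight` to most scikit-learn estimators, or used to drive a weighted sampler during training.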
“…In addition to overall model performance for different patient groups, we also assess the fairness of different models by reporting the equal opportunity difference (EOD) which measures the difference in TPR (i.e., Sensitivity) for the privileged and under-privileged groups following the evaluation protocol in [19,46,37]. We choose to use the TPR gap as our fairness metric based on the needs of the clinical diagnostic setting: high TPR disparity indicates that sick members from one demographic group would not be given correct diagnoses at the same rate as the general population, which can be dangerous for clinical deployment.…”
Section: Fairness Evaluation (citation type: mentioning)
confidence: 99%
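The EOD described in this statement is just the sensitivity (TPR) gap between the two groups. A small sketch of the computation, assuming binary labels and a per-sample privileged-group indicator (which group counts as privileged is defined per protected attribute in the cited works):

```python
import numpy as np

def equal_opportunity_difference(y_true, y_pred, privileged):
    """EOD = TPR(privileged) - TPR(under-privileged): the gap in the
    rate at which truly positive (sick) cases are correctly flagged."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    privileged = np.asarray(privileged, dtype=bool)

    def tpr(mask):
        positives = (y_true == 1) & mask
        return (y_pred[positives] == 1).mean() if positives.any() else np.nan

    return tpr(privileged) - tpr(~privileged)

# An EOD of 0 means sick patients in both groups are diagnosed at the
# same rate; a large positive value flags under-diagnosis of the
# under-privileged group, the risk described above.
```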
“…Aside from imbalance of labels, more insidious forms of imbalance such as that of race [82] or gender [83] of patients are easily omitted in studies. This leads to fairness problems in real world applications as underrepresenting such categories in the training set will hurt performance on these categories in the real world (population shift) [84]. Because of their potential to generate synthetic data, GANs are a promising solution to the aforementioned problems and have already been thoroughly explored in Computer Vision [85,86].…”
Section: Imbalanced Data and Fairness (citation type: mentioning)
confidence: 99%
“…Towards the goal of a more diverse distribution of data with respect to gender and race, similar principles can be applied. For instance, Li et al [84] proposed an adversarial training scheme to improve fairness in classification of skin lesions for underrepresented groups (age, sex, skin tone) by learning a neutral representation using an adversarial bias discrimination loss. Fairness imposing GANs can also generate synthetic data with a preference for underrepresented groups, so that models may ingest a more balanced dataset, improving demographic parity without excluding data from the training pipeline.…”
Section: Imbalanced Data and Fairness (citation type: mentioning)
confidence: 99%
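A common way to realise the "adversarial bias discrimination loss" this statement refers to is a gradient-reversal layer (Ganin and Lempitsky, 2015): the protected-attribute discriminator trains normally, but its gradient is negated before reaching the shared encoder, pushing the representation toward group neutrality. Whether Li et al. use exactly this mechanism is not stated here, so treat the sketch below as one plausible realisation.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient-reversal layer: identity on the forward pass, gradient
    multiplied by -lamb on the backward pass."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

# Usage sketch: features reach the task head directly but reach the
# protected-attribute head through the reversal layer, so training the
# attribute head also trains the encoder to strip group information,
# yielding the "neutral representation" described above.
#   z = encoder(images)
#   loss = ce(task_head(z), labels) + ce(attr_head(grad_reverse(z)), skin_tone)
#   loss.backward()
```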