2021
DOI: 10.48550/arxiv.2108.02943
Preprint

Unsupervised Learning of Debiased Representations with Pseudo-Attributes

Abstract: Dataset bias is a critical challenge in machine learning, and its negative impact is aggravated when models capture unintended decision rules with spurious correlations. Although existing works often handle this issue using human supervision, the availability of the proper annotations is impractical and even unrealistic. To better tackle this challenge, we propose a simple but effective debiasing technique in an unsupervised manner. Specifically, we perform clustering on the feature embedding space and identif…
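The abstract outlines the core idea: clustering the feature embedding space yields pseudo-attributes that can stand in for missing bias annotations. Below is a minimal sketch of that idea, assuming a pretrained encoder; the use of k-means and the inverse-cluster-size reweighting are illustrative choices, not the paper's exact procedure.

```python
# Sketch: cluster feature embeddings without attribute labels, then treat cluster
# membership as a pseudo-attribute and reweight samples so small (presumably
# bias-conflicting) clusters are emphasized. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def pseudo_attribute_weights(embeddings: np.ndarray, num_clusters: int = 8) -> np.ndarray:
    """Return per-sample weights inversely proportional to pseudo-attribute cluster size."""
    kmeans = KMeans(n_clusters=num_clusters, n_init=10, random_state=0)
    cluster_ids = kmeans.fit_predict(embeddings)           # pseudo-attribute per sample
    counts = np.bincount(cluster_ids, minlength=num_clusters)
    weights = 1.0 / counts[cluster_ids]                    # up-weight rare clusters
    return weights / weights.mean()                        # normalize to mean 1

# Usage (hypothetical names):
#   feats = encoder(images)                  # (N, D) embeddings from any pretrained model
#   w = pseudo_attribute_weights(feats)
#   loss = (w * per_example_loss).mean()     # reweighted training objective
```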

Cited by 1 publication (1 citation statement)
References 28 publications (36 reference statements)
“…To address this issue, numerous debiasing frameworks have been proposed for identifying and mitigating the potential risks posed by dataset or algorithmic biases. These frameworks can be categorized into pre-processing (Li and Vasconcelos 2019; Sagawa et al. 2020b; Kamiran and Calders 2012), in-processing (Sagawa et al. 2020a; Sohoni et al. 2020; Wang et al. 2019; Zhang, Lemoine, and Mitchell 2018; Gong, Liu, and Jain 2020; Seo, Lee, and Han 2021; Ragonesi et al. 2021; Wang et al. 2020; Guo et al. 2020), and post-processing (Hardt, Price, and Srebro 2016; Zhao et al. 2017) approaches. In-processing methods act on the training procedure itself (Chu, Kim, and Han 2021; Gong, Liu, and Jain 2020; Ragonesi et al. 2021), or use robust optimization (Sagawa et al. 2020a; Sohoni et al. 2020; Seo, Lee, and Han 2021). Post-processing methods modify the predicted outputs to meet fairness criteria, mainly by calibrating the outputs (Hardt, Price, and Srebro 2016; Zhao et al. 2017).…”
Section: Related Work (mentioning)
Confidence: 99%
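The excerpt's last sentence describes post-processing debiasing as calibrating a trained model's outputs per group. The sketch below illustrates that category with simple group-wise threshold calibration; it is a simplified stand-in with an assumed rate-matching rule, not the exact method of Hardt, Price, and Srebro (2016).

```python
# Sketch of the post-processing category: leave the trained model untouched and
# pick one decision threshold per group so each group ends up with roughly the
# same positive-prediction rate. Illustrative simplification only.
import numpy as np

def groupwise_thresholds(scores: np.ndarray, groups: np.ndarray, target_rate: float = 0.5) -> dict:
    """Return a per-group score threshold matching a target positive-prediction rate."""
    thresholds = {}
    for g in np.unique(groups):
        g_scores = scores[groups == g]
        # Threshold at the (1 - target_rate) quantile of this group's scores.
        thresholds[g] = np.quantile(g_scores, 1.0 - target_rate)
    return thresholds

# Usage (hypothetical names):
#   thr = groupwise_thresholds(model_scores, group_ids, target_rate=0.3)
#   preds = np.array([s >= thr[g] for s, g in zip(model_scores, group_ids)])
```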