2020
DOI: 10.48550/arxiv.2007.02561
Preprint

Learning from Failure: Training Debiased Classifier from Biased Classifier

Abstract: Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased. While previous work tackles this issue with domain-specific knowledge or explicit supervision on the spuriously correlated attributes, we instead tackle a more challenging setting where such information is unavailable. To this end, we first observe that neural networks learn to rely on the spurious correlation only when it is "easier" to learn than the desired knowledge […]
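For context, the paper's scheme trains a bias-amplified model with the generalized cross-entropy (GCE) loss and upweights the samples that model fails on when training a second, debiased model. The sketch below is a minimal illustration of that reweighting idea, not the authors' reference implementation; the function names and the `q` value are assumptions.

```python
import torch
import torch.nn.functional as F

def gce_loss(logits, targets, q=0.7):
    """Generalized cross-entropy: emphasizes easy samples, so a model
    trained with it amplifies the dataset bias (q value assumed)."""
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.clamp(min=1e-8) ** q) / q).mean()

def lff_step(biased_model, debiased_model, x, y):
    """One Learning-from-Failure-style update: weight each sample by its
    relative difficulty for the biased vs. the debiased model."""
    logits_b = biased_model(x)
    logits_d = debiased_model(x)
    ce_b = F.cross_entropy(logits_b, y, reduction="none").detach()
    ce_d = F.cross_entropy(logits_d, y, reduction="none").detach()
    # Bias-conflicting samples (hard for the biased model) get large weights.
    w = ce_b / (ce_b + ce_d + 1e-8)
    loss_biased = gce_loss(logits_b, y)
    loss_debiased = (w * F.cross_entropy(logits_d, y, reduction="none")).mean()
    return loss_biased, loss_debiased
```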

Cited by 21 publications (38 citation statements)
References 19 publications
“…Learning to reweight training samples is widely used in curriculum learning (Zhou, Wang, and Bilmes 2020), hard-sample mining (Lin et al. 2017), domain generalization (Sagawa et al. 2019; Arjovsky et al. 2019; Krueger et al. 2021), debiasing (Nam et al. 2020), model calibration (Mukhoti et al. 2020), adversarial defense (Zhang et al. 2020), etc. Our method is closely related to Focal Loss (Lin et al. 2017) and worst-case optimization (Sagawa et al. 2019).…”
Section: Our Methods
Citation type: mentioning
confidence: 99%
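Since the statement above singles out Focal Loss as a related reweighting scheme, here is a minimal sketch of it; the `gamma` value is the common default from Lin et al. (2017), and the function name is ours:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Focal Loss (Lin et al. 2017): down-weights well-classified samples
    by (1 - p_t)^gamma, focusing training on hard examples."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)  # probability assigned to the true class
    return ((1.0 - p_t) ** gamma * ce).mean()
```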
“…Recently, several works focus on learning with weak or even no bias supervision [27, 35, 51]. Learning from Failure (LfF) puts more weight on failure samples [35].…”
Section: Debiasing and Fairness
Citation type: mentioning
confidence: 99%
“…Several methods have been proposed to learn to remove the dataset bias [25, 1, 41, 56, 35, 5]. Among them, some methods regularize the model to not learn bias with additional regularization terms [35, 5], and others learn to eliminate the learned bias information by adversarial learning [25, 41, 56].…”
Section: Introduction
Citation type: mentioning
confidence: 99%
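As an illustration of the adversarial route mentioned in that statement, the following is a minimal gradient-reversal sketch in the style of Ganin and Lempitsky (2015); it is one common instance of adversarial debiasing, not necessarily what the cited works implement, and all names here are hypothetical:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in backward,
    so the feature extractor is trained to fool the bias predictor."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def adversarial_debias_loss(features, bias_head, bias_labels, lam=1.0):
    # The bias head tries to predict the bias attribute; the reversed
    # gradients push the features toward being uninformative about it.
    logits = bias_head(GradReverse.apply(features, lam))
    return nn.functional.cross_entropy(logits, bias_labels)
```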
“…Interpretability is essential in many fields to figure out if there are any biases from the DN. For example, in [25], the authors performed experiments on an action recognition dataset where the test accuracy is far from stellar due to the action recognition biases. The Soft VQ is denoted as a sigmoid function applied to the layer.…”
Section: Introduction
Citation type: mentioning
confidence: 99%