2019
DOI: 10.1007/978-3-030-11009-3_34
Turning a Blind Eye: Explicit Removal of Biases and Variation from Deep Neural Network Embeddings

Abstract: Neural networks achieve the state-of-the-art in image classification tasks. However, they can encode spurious variations or biases that may be present in the training data. For example, training an age predictor on a dataset that is not balanced for gender can lead to gender-biased predictions (e.g. wrongly predicting that males are older if only elderly males are in the training set). We present two distinct contributions: 1) An algorithm that can remove multiple sources of variation from the feature represen…
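To make the setup described in the abstract concrete, here is a minimal sketch of the kind of architecture involved: a shared feature extractor whose embedding should become blind to the bias, with a primary task head (e.g. age) and a bias head (e.g. gender) attached to the same embedding. This is an illustrative PyTorch-style sketch; the encoder, layer sizes and task names are assumptions, not the authors' implementation.

```python
# Illustrative sketch only (PyTorch assumed); encoder, layer sizes and task
# names are placeholders, not the architecture from the paper.
import torch.nn as nn

class BiasBlindNet(nn.Module):
    def __init__(self, in_dim=512, feat_dim=128, n_primary=8, n_bias=2):
        super().__init__()
        # Shared embedding that should become blind to the bias variable.
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.primary_head = nn.Linear(feat_dim, n_primary)  # e.g. age bins
        self.bias_head = nn.Linear(feat_dim, n_bias)        # e.g. gender

    def forward(self, x):
        z = self.encoder(x)
        # The bias head only measures how much bias information z still carries.
        return self.primary_head(z), self.bias_head(z)
```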

Cited by 175 publications (232 citation statements) · References 26 publications
“…These approaches rely on an oracle for a subset of test queries. Rather than relying on an oracle, Alvi et al. [1] proposed a joint learning and unlearning method to remove bias from neural network embeddings. To unlearn the bias, the authors applied a confusion loss, computed as the cross-entropy between the classifier output and a uniform distribution.…”
Section: Related Work
confidence: 99%
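The confusion loss described in this statement can be sketched in a few lines. The following is a hedged illustration assuming a PyTorch-style API; `bias_logits` is a hypothetical name for the output of a bias classifier head attached to the embedding.

```python
import torch
import torch.nn.functional as F

def confusion_loss(bias_logits: torch.Tensor) -> torch.Tensor:
    # Cross-entropy between the bias classifier's softmax output and a
    # uniform distribution over the C bias classes:
    #   H(uniform, p) = -(1/C) * sum_c log p_c, averaged over the batch.
    # Taking the mean over all elements of log_softmax computes exactly this.
    log_probs = F.log_softmax(bias_logits, dim=1)
    return -log_probs.mean()
```

During unlearning, this loss is back-propagated into the feature extractor, while the bias classifier itself continues to be trained with ordinary cross-entropy on the bias labels, so the two objectives alternate.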
“…As mentioned by Alvi et al. [1], the unsupervised domain adaptation (UDA) problem is closely related to the biased-data problem. The UDA problem involves generalizing the network embedding over different domains [8,23,21].…”
Section: Related Work
confidence: 99%
“…A disentanglement loss (DL) is used to encourage explicit separation of the representations; for this we use the confusion loss implemented by [29] (inspired by [30]). This loss is used to assess the amount of spurious-variation information left in either feature representation and then remove it (for the identity representation, content information is a spurious variation, and vice versa).…”
Section: Loss Functions
confidence: 99%
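As a rough sketch of how such a disentanglement objective could be assembled, the confusion loss is applied to each branch's auxiliary classifier and added to the primary task losses. Function and tensor names below are hypothetical, and the actual weighting and update schedule follow the works cited as [29] and [30] in the quoted paper.

```python
import torch.nn.functional as F

def confusion_loss(logits):
    # Cross-entropy of the softmax output against a uniform target (as above).
    return -F.log_softmax(logits, dim=1).mean()

def disentanglement_objective(id_logits, content_logits,
                              content_from_id_branch, id_from_content_branch,
                              id_labels, content_labels, alpha=1.0):
    # Primary supervised losses on each branch ...
    task = (F.cross_entropy(id_logits, id_labels)
            + F.cross_entropy(content_logits, content_labels))
    # ... plus confusion terms that penalise spurious-variation information
    # leaking into the opposite branch (content info in the identity branch
    # and identity info in the content branch).
    dl = (confusion_loss(content_from_id_branch)
          + confusion_loss(id_from_content_branch))
    return task + alpha * dl
```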
“…But how can age prediction performance be enhanced in this case? With this objective in mind, the analysis of bias in age perception has recently emerged [2], [9]. Can we better understand age perception and its biases so that the findings can be used to regress a better real-age estimate?…”
Section: Analysis Of Bias In Age Estimation
confidence: 99%
“…However, an end-to-end approach for bias removal was not considered. According to Alvi et al. [9], training an age predictor on a dataset that is not balanced for gender can lead to gender-biased predictions. They presented an algorithm that removes biases from the feature representation and ensures that the network is blind to a known bias in the dataset.…”
Section: Analysis Of Bias In Age Estimation
confidence: 99%