2018
DOI: 10.48550/arxiv.1809.02169
Preprint

Turning a Blind Eye: Explicit Removal of Biases and Variation from Deep Neural Network Embeddings

Cited by 3 publications (19 citation statements: 0 supporting, 19 mentioning, 0 contrasting).
References 0 publications.
“…In many computer vision applications, some works seek to introduce fairness into networks and mitigate data bias. These are respectively classified as unbalanced training [40,10,35,52], attribute suppression [5,32,36,31] and domain adaptation [24,23,49,43]. By learning the underlying latent variables in an entirely unsupervised manner, the Debiasing Variational Autoencoder (DB-VAE) [6] re-weighted the importance of certain data points during training.…”
Section: Debiased Algorithms (mentioning; confidence: 99%)
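The re-weighting idea quoted above can be made concrete. The sketch below is a minimal illustration under stated assumptions, not the DB-VAE authors' implementation: it approximates the density of each sample's latent code with per-dimension histograms and draws training batches with probability inversely proportional to that density, so under-represented latent regions are sampled more often. All names here (`debias_sampling_weights`, `n_bins`, `alpha`) are illustrative.

```python
import numpy as np

def debias_sampling_weights(latents, n_bins=10, alpha=0.01):
    """Sampling weights inversely proportional to an estimated latent density.

    latents: (n, d) array of latent codes (e.g. encoder outputs); the joint
    density is approximated as a product of per-dimension histogram densities.
    alpha smooths the weights so no single rare sample dominates.
    """
    n, d = latents.shape
    density = np.ones(n)
    for j in range(d):
        hist, edges = np.histogram(latents[:, j], bins=n_bins, density=True)
        # map each sample to its histogram bin and accumulate the density
        idx = np.clip(np.digitize(latents[:, j], edges[:-1]) - 1, 0, n_bins - 1)
        density *= hist[idx] + 1e-12          # avoid exact zeros
    weights = 1.0 / (density + alpha)         # rare latent regions get larger weight
    return weights / weights.sum()            # normalized sampling probabilities

# Usage: draw a training batch with the debiased probabilities.
latents = np.random.randn(1000, 8)            # stand-in for learned latent codes
p = debias_sampling_weights(latents)
batch_idx = np.random.choice(len(latents), size=64, p=p)
```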
“…Recently, with the emergence of deep convolutional neural networks (CNNs) [26,42,45,20,21], the performance of face recognition (FR) [48,44,41] has been dramatically boosted. However, as it is applied more and more widely, its potential for unfairness is raising alarm [9,5,1,2]. For instance, Amazon's Rekognition tool incorrectly matched the photos of 28 U.S. congressmen with the faces of criminals, and the error rate was up to 39% for non-Caucasian people; according to [15], a year-long research investigation across […]. A major driver of bias in face recognition, as well as other AI tasks, is the training data.…”
Section: Introduction (mentioning; confidence: 99%)
“…This real-world gender distribution skew becomes part of the data that trains models to recognize or reason about these activities. Naturally, these models then learn discriminative cues which include the gender of the actors. In fact, the gender correlation may even become amplified in the model, as Zhao et al. [46] demonstrate.…”
Section: Introduction (mentioning; confidence: 99%)
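The amplification effect mentioned above can be quantified. The sketch below is a simplified stand-in for the bias-amplification score of Zhao et al., not their exact protocol, and all names in it are assumptions: for each activity it compares the fraction of male-labeled examples in the training set with the fraction in the model's predictions, averaged over activities that already skew male in training.

```python
from collections import Counter

def male_ratio(pairs):
    # pairs: iterable of (activity, gender) tuples with gender in {"man", "woman"}
    counts = Counter(pairs)
    activities = {a for a, _ in counts}
    return {a: counts[(a, "man")]
               / max(counts[(a, "man")] + counts[(a, "woman")], 1)
            for a in activities}

def mean_bias_amplification(train_pairs, predicted_pairs):
    """Average increase in the male ratio from training data to predictions,
    over activities that skew male in training (simplified amplification score)."""
    b_train = male_ratio(train_pairs)
    b_pred = male_ratio(predicted_pairs)
    skewed = [a for a, r in b_train.items() if r > 0.5 and a in b_pred]
    return sum(b_pred[a] - b_train[a] for a in skewed) / max(len(skewed), 1)

# Example: "driving" skews male 60/40 in training; an amplified model
# predicts it 80/20, giving an amplification of 0.2 for that activity.
train = [("cooking", "woman")] * 7 + [("cooking", "man")] * 3 \
      + [("driving", "man")] * 6 + [("driving", "woman")] * 4
preds = [("cooking", "woman")] * 9 + [("cooking", "man")] * 1 \
      + [("driving", "man")] * 8 + [("driving", "woman")] * 2
print(mean_bias_amplification(train, preds))  # 0.8 - 0.6 = 0.2
```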
“…In this work, we set out to provide an in-depth look at this problem of training visual classifiers in the presence of spurious correlations. We are inspired by prior work on machine learning fairness [45,46,36,1] and aim to build a unified understanding of the proposed techniques. Code is available at https://github.com/princetonvisualai/DomainBiasMitigation.…”
Section: Introduction (mentioning; confidence: 99%)