2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00842

Discovering Fair Representations in the Data Domain

Abstract: Interpretability and fairness are critical in computer vision and machine learning applications, in particular when dealing with human outcomes, e.g. inviting or not inviting for a job interview based on application materials that may include photographs. One promising direction to achieve fairness is by learning data representations that remove the semantics of protected characteristics, and are therefore able to mitigate unfair outcomes. All available models however learn latent embeddings which comes at the…

Cited by 99 publications (90 citation statements); references 24 publications.

“…In this work, we benchmark our model on widely adopted datasets [e.g., Ricanek and Tesafaye (2006)] that are annotated with binary gender labels. In particular, the datasets are annotated with the sex labels 'Male' and 'Female', and therefore it is common practice in both the face analysis (Ng et al 2012; Dantcheva et al 2015) and fairness literature (Buolamwini and Gebru 2018; Quadrianto et al 2019; Zhao et al 2017; Hendricks et al 2018) to use these available labels under this categorization. As such, we can only address gender bias within this imposed binary classification paradigm.…”
Section: Methods
confidence: 99%
“…These methods employ techniques from domain adaptation to learn a representation that minimizes classification loss while being invariant to the sensitive attribute. In the latter category, Sattigeri et al (2018) extend AC-GAN (Odena et al 2017) to generate a fair dataset, while Quadrianto et al (2018) use an autoencoder to remove sensitive information from images. In this work, we introduce an image-to-image translation model to augment the training set, and thus, our framework is most closely related to Quadrianto et al (2018).…”
Section: Fairness-aware Learning and Face Analysis
confidence: 99%
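
The first family of approaches mentioned in the statement above (representations made invariant to the sensitive attribute using domain-adaptation machinery) can be illustrated with a minimal sketch. The code below is an assumption for exposition only, not any cited author's model: an autoencoder whose latent code is trained against an adversarial predictor of the sensitive attribute via gradient reversal; architecture, layer sizes, and the loss weight lam are illustrative choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; flips the gradient sign on the backward pass.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class FairAutoencoder(nn.Module):
    def __init__(self, dim_in=784, dim_z=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_z))
        self.decoder = nn.Sequential(nn.Linear(dim_z, 256), nn.ReLU(), nn.Linear(256, dim_in))
        # The adversary tries to recover the sensitive attribute from the latent code.
        self.adversary = nn.Sequential(nn.Linear(dim_z, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        z = self.encoder(x)
        x_hat = self.decoder(z)
        # Gradient reversal: the adversary learns to predict s, while the encoder
        # receives the opposite gradient and learns to discard s-related information.
        s_logit = self.adversary(GradReverse.apply(z))
        return x_hat, s_logit

def fair_ae_loss(model, x, s, lam=1.0):
    # x: flattened inputs, s: binary sensitive attribute as float 0/1.
    x_hat, s_logit = model(x)
    recon = F.mse_loss(x_hat, x)
    adv = F.binary_cross_entropy_with_logits(s_logit.squeeze(1), s)
    return recon + lam * adv

This contrasts with the data-domain approaches in the same statement (fair dataset generation or image-to-image translation), which modify the inputs themselves rather than a latent embedding.
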
“…There are several ways to enforce fairness in machine learning models: as a pre-processing step (Kamiran and Calders, 2012; Zemel et al, 2013; Louizos et al, 2016; Lum and Johndrow, 2016; Chiappa, 2019; Quadrianto et al, 2019), as a post-processing step (Feldman et al, 2015; Hardt et al, 2016), or as a constraint during the learning phase (Calders et al, 2009; Zafar et al, 2017a, b; Donini et al, 2018; Dimitrakakis et al, 2019). Our method enforces fairness during the learning phase (an in-processing approach) but, unlike other approaches, we do not cast fair-learning as a constrained optimization problem.…”
Section: Related Work
confidence: 99%
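
To make the in-processing idea in the statement above concrete, here is a minimal sketch, again an assumption for illustration rather than the method of any cited work: a soft demographic-parity penalty is simply added to the task loss during training, instead of being imposed as a hard constraint in a constrained optimization problem. The model, data shapes, and weight lam are placeholder choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fair_training_loss(model, x, y, s, lam=0.5):
    # x: features, y: binary task labels, s: binary sensitive attribute (float 0/1).
    logits = model(x).squeeze(1)
    task_loss = F.binary_cross_entropy_with_logits(logits, y)
    scores = torch.sigmoid(logits)
    # Soft demographic-parity penalty: gap between the average predicted scores
    # of the two sensitive groups (assumes each mini-batch contains both groups).
    gap = scores[s == 1].mean() - scores[s == 0].mean()
    return task_loss + lam * gap.abs()

# Toy usage with synthetic data.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,)).float()
s = torch.randint(0, 2, (64,)).float()
loss = fair_training_loss(model, x, y, s)
opt.zero_grad()
loss.backward()
opt.step()

The design choice is that the fairness term is a differentiable regularizer traded off against accuracy by lam, which is what distinguishes this unconstrained in-processing style from the constrained formulations cited in the same passage.
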