2018 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata.2018.8622525

FairGAN: Fairness-aware Generative Adversarial Networks

Abstract: Fairness-aware learning is increasingly important in data mining. Discrimination prevention aims to remove discrimination from the training data before it is used for predictive analysis. In this paper, we focus on fair data generation, which ensures that the generated data is discrimination free. Inspired by generative adversarial networks (GAN), we present fairness-aware generative adversarial networks (FairGAN), which learn a generator that produces fair data while preserving good data utility.
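The abstract describes a GAN augmented with a fairness adversary. As a rough illustration only (not the authors' implementation, which conditions generation on the protected attribute and decision label and uses its own objective), the minimal PyTorch sketch below pairs a standard real/fake critic with a second adversary that tries to recover the protected attribute s from generated samples; the network sizes, optimizer settings, tradeoff weight `lam`, and the label-flip trick in the generator loss are all assumptions.

```python
# Hedged sketch of a two-discriminator fair GAN: G generates synthetic
# features, D1 scores real vs. fake, and D2 tries to recover the
# protected attribute s from the generated data. G is trained to fool
# both, so the synthetic data looks realistic while carrying little
# information about s.
import torch
import torch.nn as nn

DIM_Z, DIM_X = 32, 16   # noise and feature dimensions (assumed)
lam = 1.0               # fairness/utility tradeoff weight (assumed)

G  = nn.Sequential(nn.Linear(DIM_Z + 1, 64), nn.ReLU(), nn.Linear(64, DIM_X))
D1 = nn.Sequential(nn.Linear(DIM_X, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
D2 = nn.Sequential(nn.Linear(DIM_X, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

bce   = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(list(D1.parameters()) + list(D2.parameters()), lr=2e-4)

def train_step(x_real, s):
    """One adversarial step on a batch of real features x_real with a
    binary protected attribute s of shape [batch, 1]."""
    z = torch.randn(x_real.size(0), DIM_Z)
    x_fake = G(torch.cat([z, s], dim=1))       # generator conditions on s

    # Critics: D1 separates real from fake; D2 predicts s from fake data.
    opt_d.zero_grad()
    d_loss = (bce(D1(x_real), torch.ones_like(s))
              + bce(D1(x_fake.detach()), torch.zeros_like(s))
              + bce(D2(x_fake.detach()), s))
    d_loss.backward()
    opt_d.step()

    # Generator: fool D1 (realism) and fool D2 (hide s -> fairness).
    # Flipping the target to 1 - s is one simple adversarial objective.
    opt_g.zero_grad()
    g_loss = (bce(D1(x_fake), torch.ones_like(s))
              + lam * bce(D2(x_fake), 1 - s))
    g_loss.backward()
    opt_g.step()
```

The intuition: if a well-trained adversary cannot recover s from the synthetic data, downstream models fitted on that data have little signal with which to discriminate on s.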

Cited by 271 publications (189 citation statements) · References 17 publications
“…Generating data with unseen class, domain combinations. The second family generates data samples associated with unseen class, domain combinations, such as ELEGANT [35], DNA-GAN [34], Multi-Level Variational Autoencoder (ML-VAE) [5], CausalGAN [14], Res-GAN [27], SaGAN [40], among others. FML methods Fairness GAN [26] and FairGAN [36] also fall into this category. These methods generate synthetic data, then ordinary models can be trained on both the real and the generated data.…”
Section: Related Work
confidence: 99%
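As a concrete illustration of the train-on-real-plus-generated pattern these methods share, the sketch below assumes a pre-trained fair generator exposed as a `sample_fair(n)` function; the name and API are hypothetical placeholders, not part of any cited method.

```python
# Hedged illustration: augment the real training set with synthetic
# samples from a (pre-trained) fair generator, then fit an ordinary
# downstream classifier on the union of the two.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_on_augmented(X_real, y_real, sample_fair, n_synth=1000):
    X_synth, y_synth = sample_fair(n_synth)    # hypothetical generator API
    X = np.vstack([X_real, X_synth])
    y = np.concatenate([y_real, y_synth])
    return LogisticRegression(max_iter=1000).fit(X, y)
```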
“…Disparate treatment, a direct form of discrimination, results from a deliberate use of the sensitive attribute and can be avoided by removing it from the data prior to training the model [12]. Even when trained without the sensitive attribute, the predictions may still be discriminatory, leading to an unfair treatment of protected groups [12], [13]. This red-lining effect is due to the presence of features highly associated with the sensitive attribute [12], [13] and is linked to disparate impact. This indirect form of discrimination is not illegal in itself, as long as objective and reasonable justifications for it can be given [14], [15].…”
Section: B. Fairness Concepts
confidence: 99%
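For reference, the disparate impact mentioned above is commonly quantified as the ratio of positive-outcome rates between the unprivileged and privileged groups; a minimal sketch, with assumed variable names and the common "80% rule" threshold for illustration:

```python
# Disparate impact ratio: P(y_hat = 1 | s = 0) / P(y_hat = 1 | s = 1).
# A ratio near 1 indicates parity; the 80% rule flags ratios below 0.8.
import numpy as np

def disparate_impact(y_pred, s):
    """y_pred, s: binary arrays of predictions and protected attribute."""
    rate_unpriv = y_pred[s == 0].mean()   # positive rate, unprivileged group
    rate_priv   = y_pred[s == 1].mean()   # positive rate, privileged group
    return rate_unpriv / rate_priv

y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(disparate_impact(y_pred, s))        # 0.25 / 0.75 ≈ 0.33, below 0.8
```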