2020
DOI: 10.48550/arxiv.2003.06430
Preprint

Learning Unbiased Representations via Mutual Information Backpropagation

Cited by 1 publication (4 citation statements) · References 0 publications
“…[Table: accuracy vs. training noise σ]

Training          σ=0.020        σ=0.025        σ=0.030        σ=0.035        σ=0.040        σ=0.045        σ=0.050
ERM (λ = 0.0)     0.476 ± 0.005  0.542 ± 0.004  0.664 ± 0.001  0.720 ± 0.010  0.785 ± 0.003  0.838 ± 0.002  0.870 ± 0.001
Ragonesi [38]     0.592 ± 0.018  0.678 ± 0.015  0.737 ± 0.028  0.795 ± 0.012  0.814 ± 0.019  0.837 ± 0.004  0.877 ± 0.010
Zhang et al [50]  0.584 ± 0.034  0.625 ± 0.033  0.709 ± 0.027  0.733 ± 0.020  0.807 ± 0.013  0.803 ± 0.027  0.831 ± 0.027
Kim et …

Our experiments on real-world data are performed on five data sets. In three data sets, both the sensitive attribute and the true outcome value are continuous: the US Census data set [44], the Motor data set [42] and the Crime data set [15].…”
Section: Color Variance
confidence: 99%
“…Adel et al [1] learn a fair representation by inputting it to an adversary network, which is prevented from predicting the sensitive attribute (upper left in Figure 1 in appendix). Other papers minimize the mutual information between the representation and the sensitive attribute: Kim et al [26] rely on adversarial training with a discriminator detecting the bias, while Ragonesi et al [38] rely on an estimation by neural network of mutual information [6] (lower left in Figure 1 in appendix).…”
Section: Related Work
confidence: 99%
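The neural estimation of mutual information referenced above [6] rests on the Donsker–Varadhan representation: for any statistic T(x, z), I(X; Z) ≥ E_joint[T] − log E_marginals[exp(T)], and MINE trains a network to tighten this bound. As a minimal sketch of the bound itself (not the full neural estimator), the following NumPy snippet uses a hand-picked family of scalar statistics T(x, z) = t·x·z on a correlated Gaussian pair; the data distribution and statistic family here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Donsker-Varadhan lower bound on mutual information, the representation
# behind MINE [6]:
#   I(X; Z) >= E_joint[T(x, z)] - log E_marginals[exp(T(x, z))]
# Here T(x, z) = t * x * z is a fixed scalar family instead of a trained
# network, so this only illustrates the bound, not the neural estimator.

rng = np.random.default_rng(0)
n = 50_000
rho = 0.8  # correlation between x and z

# Correlated Gaussian pair: true MI is -0.5 * log(1 - rho^2).
x = rng.standard_normal(n)
z = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
z_shuffled = rng.permutation(z)  # samples from the product of marginals

def dv_bound(t):
    """Donsker-Varadhan value for the statistic T(x, z) = t * x * z."""
    joint = np.mean(t * x * z)                          # E_joint[T]
    marg = np.log(np.mean(np.exp(t * x * z_shuffled)))  # log E_marg[exp(T)]
    return joint - marg

true_mi = -0.5 * np.log(1 - rho**2)
# Sweep the scalar parameter; every value stays below the true MI,
# and a trained network would push the bound tighter.
best = max(dv_bound(t) for t in np.linspace(0.1, 2.0, 20))
print(f"true MI = {true_mi:.3f}, DV lower bound = {best:.3f}")
```

In the debiasing setting of the cited works, x would be the learned representation and z the sensitive attribute, and the encoder is trained to *minimize* this estimated bound.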