2021
DOI: 10.48550/arxiv.2111.13650
Preprint
Latent Space Smoothing for Individually Fair Representations

Abstract: Fair representation learning encodes user data to ensure fairness and utility, regardless of the downstream application. However, learning individually fair representations, i.e., guaranteeing that similar individuals are treated similarly, remains challenging in high-dimensional settings such as computer vision. In this work, we introduce LASSI, the first representation learning method for certifying individual fairness of high-dimensional data. Our key insight is to leverage recent advances in generative mod…
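The truncated abstract describes certifying individual fairness by operating in the latent space of a generative model. As an illustrative sketch only (the notation below is assumed, not taken from the paper): with a generative decoder $G$, individuals similar to $x = G(z)$ can be modeled as latent perturbations along a sensitive direction, and a representation $f$ is individually fair on that set if its outputs stay close:

```latex
% Illustrative formalization; G, a, T, d, and epsilon are assumed
% notation, not quoted from the paper.
% Similar individuals: latent perturbations of x = G(z) along a
% sensitive direction a, with magnitude at most T:
%   S(x) = { G(z + t a) : |t| <= T }
% Individual-fairness condition for a representation f, with output
% metric d and tolerance epsilon:
\[
  S(x) \;=\; \{\, G(z + t\,a) \;:\; |t| \le T \,\},
  \qquad
  \sup_{x' \in S(x)} d\big(f(x), f(x')\big) \;\le\; \epsilon .
\]
```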

Cited by 2 publications (5 citation statements).
References 32 publications (40 reference statements).
“…Next, the generation results were verified through a user study rating the realism of each image. This experiment was designed because ID classifiers for ID-shifting [56], [57] are not normally available. In Table 1, EIF improved the realism score by 77% and 83% over ELIM and CAF, respectively.…”
Section: RMSE-V (↓) RMSE-A (↓) CCC-V (↑) CCC-A (↑)
confidence: 99%
“…Remark. In the literature, fairness is usually defined directly at the level of model predictions, where the criterion is whether a prediction is fair with respect to individual attribute changes [36, 33, 47] or fair at the population level [51]. In this work, to certify the fairness of model predictions, we define a fairness-constrained distribution over which we certify the model prediction (e.g., bound the prediction error), rather than relying on empirical fairness evaluation.…”
Section: Certified Fairness Based On Fairness-Constrained Distribution
confidence: 99%
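The two fairness notions contrasted in the remark have standard formalizations; the following is general background, not quoted from the excerpt, with $h$, $d_{\mathrm{in}}$, $d_{\mathrm{out}}$, $L$, and $s$ as assumed notation:

```latex
% Individual fairness (Lipschitz-style): similar inputs receive
% similar predictions, for input metric d_in and output metric d_out:
%   d_out( h(x), h(x') ) <= L * d_in(x, x')
% Population-level (group) fairness, e.g. demographic parity with
% respect to a sensitive attribute s:
\[
  d_{\mathrm{out}}\big(h(x), h(x')\big) \;\le\; L \cdot d_{\mathrm{in}}(x, x'),
  \qquad
  \Pr\big[h(X) = 1 \mid s = 0\big] \;=\; \Pr\big[h(X) = 1 \mid s = 1\big].
\]
```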
“…Thus, when $H(P_{s,y}, Q_{s,y}) \le \gamma_{s,y}$ and $\sup_{(X,Y) \in \mathcal{X} \times \mathcal{Y}} \ell(h_\theta(X), Y) \le M$, given that $\ell$ is a non-negative loss by Section 2, we can see that Equation (33), i.e., the expression in Thm. 3's statement, upper bounds Problem 1, i.e., provides a fairness certificate for Problem 1.…”
Section: $\sum_{s=1}^{S}$
confidence: 99%
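For reference, $H$ in the excerpt is presumably the Hellinger distance between the per-group class-conditional distributions; this is an assumption based on the bounded-constraint form $H(\cdot,\cdot) \le \gamma_{s,y}$, since the excerpt itself does not define it:

```latex
% Assumption: H denotes the Hellinger distance (not defined in the
% excerpt). It is symmetric and bounded, which makes per-group
% constraints of the form H(P,Q) <= gamma well-posed:
\[
  H(P, Q) \;=\; \sqrt{\tfrac{1}{2} \int \Big( \sqrt{\mathrm{d}P} - \sqrt{\mathrm{d}Q} \Big)^{2}},
  \qquad
  0 \;\le\; H(P, Q) \;\le\; 1 .
\]
```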