2020
DOI: 10.1007/978-3-030-58526-6_44

Fairness by Learning Orthogonal Disentangled Representations

Cited by 41 publications (16 citation statements)
References 9 publications
“…In application to face identification, disentanglement-like methods have been proposed for clustering human faces without latent-code information that contains dominant features such as skin and hair color 233 . Though applicable in settings with unobserved protected attributes, disentanglement can also be used in conjunction with adversarial learning to enforce orthogonality constraints that force independence of the protected and non-protected latent codes, similarly applied to faces 234 . In combination with federated learning, frameworks such as FedDis have been demonstrated to isolate sensitive attributes in non-i.i.d.…”
Section: Fair Representation Learning Via Disentanglement
confidence: 99%
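The orthogonality constraint mentioned in the citation above can be sketched as a penalty that drives paired latent codes toward zero cosine similarity. This is a minimal illustrative sketch, not the cited paper's actual formulation: the function name, the argument names, and the use of a squared-cosine penalty are all assumptions for demonstration.

```python
import math

def orthogonality_loss(z_target, z_sensitive, eps=1e-8):
    """Mean squared cosine similarity between paired latent codes.

    Driving this toward zero encourages each target code to be
    orthogonal to its sensitive counterpart. Illustrative sketch only;
    names and form are assumptions, not the paper's API.
    """
    total = 0.0
    for zt, zs in zip(z_target, z_sensitive):
        norm_t = math.sqrt(sum(x * x for x in zt)) + eps
        norm_s = math.sqrt(sum(x * x for x in zs)) + eps
        cos = sum(a * b for a, b in zip(zt, zs)) / (norm_t * norm_s)
        total += cos ** 2
    return total / len(z_target)

# Orthogonal codes incur (near) zero penalty; aligned codes incur ~1.
print(orthogonality_loss([[1.0, 0.0]], [[0.0, 1.0]]))  # ~0.0
print(orthogonality_loss([[1.0, 0.0]], [[2.0, 0.0]]))  # ~1.0
```

In an adversarial setup this penalty would be added to the encoder's objective alongside the adversary's loss on the sensitive attribute, so the two latent codes carry disjoint information.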
“…In this work, we propose to tackle domain shifts based on the assumption that the anatomical structure in brain MRI images is similar across institutions, whereas the intensity distribution differs. Inspired by recent works on disentangled representations [6,19,22,25,28], we propose a novel federated method, denoted Federated Disentanglement (FedDis), and argue its suitability for medical imaging. FedDis disentangles the parameter space into shape and appearance, and only shares the shape parameters with the other distributed clients to train a domain-agnostic global model (cf.…”
Section: Introduction
confidence: 99%
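The parameter-sharing idea behind FedDis, as described in the citation above, can be illustrated with a minimal federated-averaging-style sketch in which only the "shape" parameters are aggregated across clients while "appearance" parameters stay local. The helper name, dictionary layout, and plain-averaging rule are hypothetical illustrations, not FedDis's real interface.

```python
def fed_avg_shared(client_params, shared_keys):
    """Average only the shared ('shape') parameter vectors across clients;
    each client's private ('appearance') parameters stay local.

    Hypothetical sketch of selective parameter aggregation, not the
    actual FedDis implementation.
    """
    n = len(client_params)
    # Coordinate-wise mean of each shared parameter vector.
    avg = {
        k: [sum(p[k][i] for p in client_params) / n
            for i in range(len(client_params[0][k]))]
        for k in shared_keys
    }
    # Shared keys are overwritten by the global average; private keys are untouched.
    return [{**p, **avg} for p in client_params]

clients = [{"shape": [1.0, 2.0], "appearance": [9.0]},
           {"shape": [3.0, 4.0], "appearance": [7.0]}]
print(fed_avg_shared(clients, ["shape"]))
```

Keeping the appearance parameters client-local is what lets each institution fit its own intensity distribution while the shared shape parameters converge to a domain-agnostic model.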
“…In another work, (Nam et al 2020) show that sample performance-based methods can be used to avoid bias in the model. A disentanglement approach to obtaining a bias-invariant representation has been presented in (Sarhan et al 2020). In contrast to these techniques, our work focuses on solving the drawback that we identify in the adversarial learning framework for debiasing.…”
Section: Related Work
confidence: 99%