Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021
DOI: 10.1145/3447548.3467281
Federated Adversarial Debiasing for Fair and Transferable Representations

Abstract: Federated learning is a distributed learning framework that is communication efficient and provides protection over participating users' raw training data. One outstanding challenge of federated learning comes from the users' heterogeneity, and learning from such data may yield biased and unfair models for minority groups. While adversarial learning is commonly used in centralized learning for mitigating bias, there are significant barriers when extending it to the federated framework. In this work, we study th…
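The abstract's core mechanism — an adversary that tries to predict a sensitive attribute from the learned representation, trained against the encoder via gradient reversal — can be sketched in a few lines. The code below is a generic, centralized toy illustration of adversarial debiasing (DANN-style gradient reversal) with made-up synthetic data; it is not the paper's actual federated algorithm, and all variable names are illustrative assumptions:

```python
import numpy as np

# Toy sketch of adversarial debiasing with gradient reversal.
# All names and data here are illustrative, not from the paper.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, t):
    # Binary cross-entropy with a small epsilon for stability.
    return -np.mean(t * np.log(p + 1e-9) + (1.0 - t) * np.log(1.0 - p + 1e-9))

# Synthetic data: task label y depends on feature 0,
# sensitive attribute a correlates with feature 1.
n, d, h = 64, 5, 3
x = rng.normal(size=(n, d))
y = (x[:, 0] > 0).astype(float)
a = (x[:, 1] > 0).astype(float)

W_enc = rng.normal(scale=0.1, size=(d, h))  # linear encoder
w_task = rng.normal(scale=0.1, size=h)      # task head (predicts y)
w_adv = rng.normal(scale=0.1, size=h)       # adversary head (predicts a)
lam, lr = 1.0, 0.1

task_loss_start = bce(sigmoid((x @ W_enc) @ w_task), y)

for _ in range(200):
    z = x @ W_enc                 # shared representation
    p_y = sigmoid(z @ w_task)     # task prediction
    p_a = sigmoid(z @ w_adv)      # adversary's group prediction

    err_y = (p_y - y) / n         # dL_task / d(task logits)
    err_a = (p_a - a) / n         # dL_adv / d(adversary logits)

    # Both heads descend their own loss.
    w_task -= lr * (z.T @ err_y)
    w_adv -= lr * (z.T @ err_a)

    # Gradient reversal: the encoder descends the task loss but
    # ASCENDS the adversary loss (note the minus sign), which
    # pushes group information out of the representation.
    g_z = err_y[:, None] * w_task - lam * (err_a[:, None] * w_adv)
    W_enc -= lr * (x.T @ g_z)

task_loss_end = bce(sigmoid((x @ W_enc) @ w_task), y)
```

In a federated setting the same reversed-gradient update would have to be computed from decentralized data, which is the barrier the paper addresses; this sketch only shows the centralized objective being traded off via `lam`.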

Cited by 24 publications (15 citation statements)
References 23 publications (29 reference statements)
“…Real-world FL settings present many challenges due to non-iid data distributions, client heterogeneity, and limited compute, memory, and bandwidth at the edge (Kairouz et al. 2019; Hong et al. 2021). Heterogeneity in the compute capabilities of clients causes the so-called straggler problem, in which certain clients take "too long" to form model updates and the server must proceed without them.…”
Section: Related Work, Federated Learning
confidence: 99%
“…2) Heterogeneous data distributions, e.g., non-i.i.d. features with D_k ≁ D_i for ordered domain i, induce additional challenges with a skewed budget distribution, since training a single model on multiple domains (Li et al., 2020b) or models in a single domain (due to budget constraints) (Hong et al., 2021c; Dong et al., 2021) are known to be suboptimal on transfer performance.…”
Section: Problem Setting
confidence: 99%
“…The degradation could be worsened as facing data heterogeneity: The training datasets from participants are not independent and identically distributed (non-i.i.d.) (Li et al., 2020b; Fallah et al., 2020; Hong et al., 2021c; Zhu et al., 2021). When one device with a unique data distribution cannot afford training a large model, the global large model may not transfer to the unseen distribution (Pan & Yang, 2010).…”
Section: Introduction
confidence: 99%