2021
DOI: 10.48550/arxiv.2109.02351
Preprint

F3: Fair and Federated Face Attribute Classification with Heterogeneous Data

Abstract: We consider the problem of achieving fair classification in Federated Learning (FL) under data heterogeneity. Most of the approaches proposed for fair classification require diverse data that represent the different demographic groups involved. In contrast, it is common for each client to own data that represents only a single demographic group. Hence the existing approaches cannot be adopted for fair classification models at the client level. To resolve this challenge, we propose several aggregation technique…
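The abstract is cut off before it names the proposed aggregation techniques, so the sketch below is only a generic illustration of fairness-aware aggregation, not F3's actual method. All names in it (fairness_weighted_aggregate, group_scores, beta) are hypothetical: the server reweights each client's update by a per-client score, upweighting clients whose demographic group the current global model serves poorly.

```python
import numpy as np

# Illustrative sketch only: NOT F3's actual method (the abstract is
# truncated before it names the techniques). One generic family of
# fairness-aware aggregation: reweight each client's update by a
# hypothetical per-client score so poorly-served groups get more influence.

def fairness_weighted_aggregate(client_updates, group_scores, beta=1.0):
    """client_updates: list of 1-D parameter vectors (one per client).
    group_scores: per-client score in [0, 1]; lower = worse-served group.
    beta: temperature controlling how aggressively to upweight."""
    scores = np.asarray(group_scores, dtype=float)
    weights = np.exp(-beta * scores)        # lower score -> larger weight
    weights /= weights.sum()
    stacked = np.stack(client_updates)      # (num_clients, num_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Toy usage: three clients, each holding data for a single demographic group.
updates = [np.random.randn(5) for _ in range(3)]
accuracy_per_group = [0.9, 0.6, 0.8]        # hypothetical per-group evaluations
global_update = fairness_weighted_aggregate(updates, accuracy_per_group)
print(global_update)
```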

Cited by 2 publications (3 citation statements); references 13 publications.

Citation statements (ordered by relevance):
“…Abay et al [1] analyzes multiple causes of unfairness in federated learning and potential solutions. Other papers that consider notions related to egalitarian fairness include [20,21,15,9].…”
Section: Fairness Definitions and Ethical Considerations
confidence: 99%
“…each client represents a bank with data about female as well as male customers) Mohri et al [2019], Abay et al [2020], Du et al [2021], Cui et al [2021] and/or by sacrificing some privacy for fairness. Recently proposed methods (Kanaparthy et al [2021], Papadaki et al [2021], Yue et al [2021]) send unprotected model parameters and/or other unprotected information such as fairness metrics to a central aggregator, thereby leaking information about clients' data Boenisch et al [2021]. FairFed Ezzeldin et al [2021] employs techniques to protect the individual updates but the aggregations at the central aggregator are not protected and can reveal information about the clients' data when under attack.…”
Section: Introduction
confidence: 99%
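To make the contrast in the quote above concrete (unprotected individual updates versus a protected aggregate), here is a minimal sketch of the pairwise-masking idea behind secure aggregation protocols such as Bonawitz et al. (2017). It is a deliberate simplification: it assumes honest clients, no dropouts, and directly shared random masks, whereas real protocols derive masks via key agreement and tolerate client failures.

```python
import numpy as np

# Minimal sketch of pairwise-masking secure aggregation: each pair of
# clients shares one random mask; one adds it, the other subtracts it.
# The masks cancel in the sum, so the server sees only the aggregate
# update, never an individual client's update.

rng = np.random.default_rng(0)
num_clients, dim = 3, 4
updates = [rng.normal(size=dim) for _ in range(num_clients)]

# One shared mask per ordered pair (i < j): i adds it, j subtracts it.
pair_masks = {(i, j): rng.normal(size=dim)
              for i in range(num_clients) for j in range(i + 1, num_clients)}

masked = []
for i in range(num_clients):
    m = updates[i].copy()
    for (a, b), mask in pair_masks.items():
        if a == i:
            m += mask
        elif b == i:
            m -= mask
    masked.append(m)

# Each mask is added exactly once and subtracted exactly once,
# so the server recovers only the aggregate.
assert np.allclose(sum(masked), sum(updates))
print(sum(masked))
```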
“…For example, during FL training, assuming that an adversary A has the model from the previous round and the gradient updates from the current round, A can infer a private training example Kairouz et al [2021]. Current works do not take into account such information leaks in FL Papadaki et al [2021], Yue et al [2021], Kanaparthy et al [2021]. A can also analyze the aggregated outputs to infer knowledge about a particular client.…”
confidence: 99%
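The leakage claim in the quote above can be made concrete with a standard textbook case: for a linear layer trained on a single example, the input is exactly recoverable from the shared gradients, since dL/dW = g xᵀ and dL/db = g (with g = dL/dy), so any row of dL/dW divided by the matching entry of dL/db yields x. This toy sketch is not the attack surveyed in Kairouz et al [2021], only the simplest instance of the phenomenon.

```python
import numpy as np

# Tiny demonstration of gradient leakage: for y = W x + b trained on a
# single example, dL/dW = outer(g, x) and dL/db = g, so dividing a row
# of dL/dW by the matching entry of dL/db recovers the private input x.

rng = np.random.default_rng(1)
x = rng.normal(size=5)                 # private training example
W, b = rng.normal(size=(3, 5)), rng.normal(size=3)

y = W @ x + b
g = 2 * (y - 1.0)                      # dL/dy for squared-error loss vs target 1
grad_W = np.outer(g, x)                # dL/dW = g x^T
grad_b = g                             # dL/db = g

# Adversary reconstructs x from the shared gradients alone.
i = int(np.argmax(np.abs(grad_b)))     # pick a row with nonzero gradient
x_reconstructed = grad_W[i] / grad_b[i]
assert np.allclose(x_reconstructed, x)
print(x_reconstructed)
```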