2023
DOI: 10.48550/arxiv.2301.09357
Preprint

Accelerating Fair Federated Learning: Adaptive Federated Adam

Abstract: Federated learning is a distributed and privacy-preserving approach to train a statistical model collaboratively from decentralized data of different parties. However, when datasets of participants are not independent and identically distributed (non-IID), models trained by naive federated algorithms may be biased towards certain participants, and model performance across participants is non-uniform. This is known as the fairness problem in federated learning. In this paper, we formulate fairness-controlled federated learning…
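The abstract describes the setting at a high level. As a rough illustration of the server-side mechanics involved, here is a minimal sketch of a FedAdam-style aggregation round in Python/NumPy, following the adaptive federated optimization recipe of Reddi et al. (2021) rather than this paper's exact fairness-controlled algorithm; the function name, hyperparameter values, and uniform client weighting are assumptions for illustration only.

    import numpy as np

    def fedadam_server_step(global_weights, client_updates, state,
                            lr=0.01, beta1=0.9, beta2=0.99, eps=1e-3):
        """One illustrative server-side Adam update in federated learning.

        A minimal sketch, not this paper's method: clients return their
        locally trained weights; the server forms pseudo-gradients
        (local - global), averages them, and applies an Adam-style
        adaptive step, as in FedAdam (Reddi et al., 2021).
        """
        # Average the client pseudo-gradients. Uniform weights are used
        # here for simplicity; fairness-aware schemes typically reweight
        # clients to counter non-IID bias.
        delta = np.mean([u - global_weights for u in client_updates], axis=0)

        # Adam moment estimates, maintained on the server across rounds.
        state["m"] = beta1 * state["m"] + (1 - beta1) * delta
        state["v"] = beta2 * state["v"] + (1 - beta2) * delta**2

        # Adaptive step; delta already points in the descent direction.
        return global_weights + lr * state["m"] / (np.sqrt(state["v"]) + eps)

    # Example: one round with 3 clients and a 4-parameter model.
    w = np.zeros(4)
    state = {"m": np.zeros(4), "v": np.zeros(4)}
    clients = [w + 0.1 * np.random.randn(4) for _ in range(3)]
    w = fedadam_server_step(w, clients, state)

Keeping the Adam state on the server is the key design choice: clients stay stateless, which matters when participants drop in and out between rounds.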

Cited by 2 publications (2 citation statements). References 16 publications (28 reference statements).
“…A common concern in federated learning is that statistical heterogeneity across local data could lead to performance issues, including convergence problems and fairness issues [60, 61, 62]. However, we argue that federated learning remains advantageous for MoA predictions for two key reasons.…”
Section: Discussion (mentioning, confidence: 94%)
“…FL for Large Language Models (LLMs): Integrating LLMs within the FL framework represents an enticing research area that has recently gained significant attention (Ezzeldin et al. 2022; Yu, Muñoz, and Jannesari 2023; Ju et al. 2023; Fan et al. 2023). The FL approach to LLM training proves advantageous by leveraging diverse data sources (Yu, Muñoz, and Jannesari 2023), supporting optimization tasks like fine-tuning, prompt tuning, and pre-training.…”
Section: Future Opportunities (mentioning, confidence: 99%)