2022
DOI: 10.48550/arxiv.2202.09848
Preprint

Personalized Federated Learning with Exact Stochastic Gradient Descent

Abstract: In Federated Learning (FL), datasets across clients tend to be heterogeneous or personalized, and this poses challenges to the convergence of standard FL schemes that do not account for personalization. To address this, we present a new approach for personalized FL that achieves exact stochastic gradient descent (SGD) minimization. We start from the FedPer (Arivazhagan et al., 2019) neural network (NN) architecture for personalization, whereby the NN has two types of layers: the first ones are the common layer…
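The FedPer-style split the abstract describes (common layers shared across clients plus personalized layers kept local) can be sketched as follows. This is a minimal, hypothetical PyTorch sketch, not the paper's implementation: the class name, layer sizes, and the aggregate_common helper are illustrative assumptions, and the aggregation shown is plain FedAvg-style averaging restricted to the common layers.

```python
import torch
import torch.nn as nn

class FedPerClientModel(nn.Module):
    """Hypothetical FedPer-style client model: shared base + local head."""

    def __init__(self, in_dim=784, hidden_dim=128, num_classes=10):
        super().__init__()
        # Common layers: their weights are sent to the server and aggregated.
        self.common = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # Personalized layers: trained locally and never shared.
        self.personal = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        return self.personal(self.common(x))

def aggregate_common(models):
    """Average only the common-layer parameters across clients (FedAvg-style)."""
    avg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in models[0].common.state_dict().items()}
    for m in models:
        for k, v in m.common.state_dict().items():
            avg[k] += v.float() / len(models)
    for m in models:
        m.common.load_state_dict(avg)
```

Under this assumed setup, each client runs ordinary SGD on both parts of its model locally, while only the common layers participate in the server aggregation round; the personalized head is what absorbs client heterogeneity.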

Cited by 1 publication (1 citation statement)
References: 13 publications (20 reference statements)
“…The datasets commonly used in personalized federated learning experiments include the benchmark datasets MNIST [20,25,38,39] and Fashion-MNIST [20,24,40,41].…”
Section: Datasets (citation type: mentioning)
Confidence: 99%