2022
DOI: 10.48550/arxiv.2201.11380
Preprint

Achieving Personalized Federated Learning with Sparse Local Models

Abstract: Federated learning (FL) is vulnerable to heterogeneously distributed data, since a common global model in FL may not adapt to the heterogeneous data distribution of each user. To counter this issue, personalized FL (PFL) was proposed to produce a dedicated local model for each individual user. However, PFL is far from mature: existing PFL solutions either generalize poorly across different model architectures or incur enormous extra computation and memory costs. In this work, we …
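The abstract above is truncated, so the paper's actual algorithm is not reproduced here. As a rough, hypothetical sketch of the general idea behind personalized sparse local models, the snippet below keeps a dense global parameter vector on the server while each client trains only the coordinates selected by its own binary mask; every function name and the random mask-selection rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mask(dim: int, sparsity: float) -> np.ndarray:
    """Random binary mask keeping a (1 - sparsity) fraction of entries (assumption, for illustration)."""
    keep = int(dim * (1.0 - sparsity))
    idx = rng.choice(dim, size=keep, replace=False)
    mask = np.zeros(dim)
    mask[idx] = 1.0
    return mask

def local_step(global_w: np.ndarray, mask: np.ndarray,
               grad_fn, lr: float = 0.1, steps: int = 5) -> np.ndarray:
    """Train a sparse personalized local model: only masked entries are kept and updated."""
    w = global_w * mask
    for _ in range(steps):
        w -= lr * grad_fn(w) * mask          # gradients applied only on the client's mask
    return w

def aggregate(local_models, masks) -> np.ndarray:
    """Average each coordinate over the clients whose masks include it."""
    num = np.sum(local_models, axis=0)
    den = np.sum(masks, axis=0)
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

# Toy quadratic objectives standing in for heterogeneous per-client losses.
dim, clients, sparsity = 20, 4, 0.5
targets = [rng.normal(size=dim) for _ in range(clients)]
grads = [lambda w, t=t: (w - t) for t in targets]    # gradient of 0.5 * ||w - t||^2

global_w = np.zeros(dim)
masks = [make_mask(dim, sparsity) for _ in range(clients)]
for _ in range(10):                                  # communication rounds
    local_models = [local_step(global_w, m, g) for m, g in zip(masks, grads)]
    global_w = aggregate(local_models, masks)
```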

Cited by 2 publications (2 citation statements)
References 22 publications (40 reference statements)
“…Moreover, in Li et al (2020a), each client trains a personalized mask to maximize performance only on its local data. Several recent works (Bibikar et al 2022; Huang et al 2022; Qiu et al 2022; Li et al 2020a) have also attempted to leverage sparse training within the FL setting. In particular, Li et al (2020a) use a randomly initialized sparse mask; FedDST (Bibikar et al 2022) builds on the idea of RigL (Evci et al 2020) and focuses mostly on server-side magnitude pruning, resulting in similar constraints; and Ohib et al (2023) use sparse gradients to train efficiently in a federated setting.…”
Section: Efficiency in Federated Learning
confidence: 99%
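The quoted statement mentions randomly initialized sparse masks, RigL-style mask updates, and server-side magnitude pruning. As a small, generic illustration of the magnitude-pruning step (not FedDST's full dynamic sparse training procedure), the sketch below averages client updates on the server and then zeroes out the smallest-magnitude weights before redistributing the sparse model; the target sparsity and toy tensors are assumptions.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries, keeping roughly a (1 - sparsity) fraction."""
    flat = np.abs(weights).ravel()
    keep = max(1, int(flat.size * (1.0 - sparsity)))
    threshold = np.partition(flat, -keep)[-keep]     # keep-th largest magnitude
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask

# Illustrative server round: average client updates, prune to 80% sparsity,
# then broadcast the sparse global model back to the clients.
client_updates = [np.random.randn(8, 8) for _ in range(3)]
aggregated = np.mean(client_updates, axis=0)
sparse_global = magnitude_prune(aggregated, sparsity=0.8)
print(f"nonzero fraction: {np.count_nonzero(sparse_global) / sparse_global.size:.2f}")
```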
“…To deal with heterogeneity in FL, numerous solutions have been proposed, such as those that constrain the direction of the local model update to align the local and global optimization objectives (Li et al 2020; Karimireddy et al 2020; Acar et al 2021; Zhang et al 2022; Liu et al 2022). Personalized federated learning (PFL) (Smith et al 2017; Huang et al 2022b; Dai et al 2022) is a promising solution that addresses this challenge by jointly learning multiple personalized models (PMs), one for each client. For instance, references (Collins et al 2021; Liang et al 2020; Sun et al 2021)…”
Section: Introduction
confidence: 99%
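The first sentence above refers to methods that constrain the local update so that the local and global objectives stay aligned. A well-known instance is the proximal term of FedProx (Li et al 2020), which adds (mu/2) * ||w - w_global||^2 to each client's local loss; the sketch below applies that regularizer to a toy least-squares objective (the data, learning rate, and mu value are illustrative assumptions).

```python
import numpy as np

def fedprox_local_update(w_global: np.ndarray, X: np.ndarray, y: np.ndarray,
                         mu: float = 0.1, lr: float = 0.01, steps: int = 50) -> np.ndarray:
    """Local least-squares training with a FedProx-style proximal term.

    Local objective: 0.5 * ||X w - y||^2 + (mu / 2) * ||w - w_global||^2,
    so a larger mu pulls the local solution toward the current global model.
    """
    w = w_global.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) + mu * (w - w_global)
        w -= lr * grad
    return w

# Toy usage: one client's heterogeneous local data.
rng = np.random.default_rng(1)
X = rng.normal(size=(32, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=32)
w_global = np.zeros(5)
w_local = fedprox_local_update(w_global, X, y, mu=0.5)
```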