Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems 2021
DOI: 10.1145/3485730.3485929
FedMask

Cited by 47 publications (10 citation statements) · References: 16 publications
“…Federated Mask (FedMask) is an efficient framework in terms of processing and communication. When the FedMask algorithm is implemented, each node can learn a sparse binary mask that is heterogeneous and structured (Li et al., 2021). IoT devices can create a sparse model using this approach, resulting in lower computational costs and a smaller memory footprint.…”
Section: Results
confidence: 99%
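The excerpt above describes the core FedMask mechanism: weights stay frozen and only a binary mask is learned and exchanged. Below is a minimal sketch of that idea in NumPy, assuming nothing beyond the excerpt; the names (`frozen_weights`, `mask_scores`) are illustrative, not the paper's code.

```python
import numpy as np

# Minimal sketch of the FedMask idea (illustrative, not the authors' code):
# each client keeps randomly initialized weights frozen and learns only a
# binary mask; the effective personalized model is weights * mask.

rng = np.random.default_rng(0)

frozen_weights = rng.standard_normal((256, 128))  # shared, never updated
mask_scores = rng.standard_normal((256, 128))     # real-valued scores, trained locally

# Binarize the scores: keep a weight wherever its score clears the threshold.
binary_mask = (mask_scores > 0).astype(np.float32)

# The sparse personalized model the client actually uses for inference.
effective_weights = frozen_weights * binary_mask

sparsity = 1.0 - binary_mask.mean()
print(f"sparsity: {sparsity:.2%}")  # roughly half the weights pruned here

# Only the 1-bit mask is communicated, not 32-bit weights, which is where
# the communication and memory savings described above come from.
```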
“…Thus, the communication cost is reduced from $s_a M_p$ to $s_a K$, the number of communication trips is reduced from $M_p$ to $K$, and the server only needs to perform sum operations $K-1$ times. However, some algorithms (Li et al., 2021a; Collins et al., 2021; Lin et al., 2020) need to communicate parameters that should be collected at the server but not averaged (special parameters). For these parameters, of size $s_e$, devices wrap them into a message and send it to the server.…”
Section: Hierarchical Aggregation
confidence: 99%
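To make the cost claim concrete, here is a toy calculation with made-up values for $s_a$ (averaged-parameter payload size), $M_p$ (number of devices), and $K$ (number of hierarchical aggregation groups); all numbers are hypothetical.

```python
# Toy illustration of the communication-cost reduction quoted above.
# All values are made up for illustration.

s_a = 1_000_000   # size of the averaged-parameter payload
M_p = 100         # number of participating devices
K = 8             # number of aggregation groups in the hierarchy

flat_cost = s_a * M_p          # every device sends its payload to the server
hierarchical_cost = s_a * K    # only K group aggregates reach the server

print(f"flat: {flat_cost:,} vs hierarchical: {hierarchical_cost:,}")
print(f"server sum operations: {K - 1}")  # summing K aggregates takes K-1 additions
```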
“…Many FL variant algorithms (Li et al., 2020; Karimireddy et al., 2020b; Wang et al., 2020; Acar et al., 2021; Luo et al., 2021; Li et al., 2021c; Chen & Chao, 2021; Collins et al., 2021) have been developed to tackle the data heterogeneity problem, where clients typically have different data distributions and/or different data sizes, making simple FL algorithms such as FedAvg difficult to converge and leading to poor generalization performance (Woodworth et al., 2020; Acar et al., 2021). These algorithms are not necessarily limited to exchanging model parameters during training, but may also exchange other quantities such as intermediate features (Collins et al., 2021), model masks (Li et al., 2021a), auxiliary gradient corrections (Karimireddy et al., 2020b), third-party datasets (Lin et al., 2020; Tang et al., 2022), etc. Moreover, many FL algorithms require stateful clients that store some client state, such as control variates (Karimireddy et al., 2020b), old gradients (Acar et al., 2021), personalized models or layers (Liang et al., 2020; Chen & Chao, 2021), or model masks (Li et al., 2021a), etc.…”
Section: Introduction
confidence: 99%
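The notion of a "stateful client" in the excerpt is easy to picture in code. The sketch below shows a client that carries state (a control variate and a personal mask) across rounds; the class, field names, and the update rule are simplified stand-ins for illustration, not SCAFFOLD's or FedMask's exact procedures.

```python
import numpy as np

# Sketch of a "stateful" FL client as described above: state such as a
# control variate or a personal mask persists across training rounds.
# The update rule here is a simplified stand-in, not any paper's exact rule.

class StatefulClient:
    def __init__(self, dim: int):
        self.control_variate = np.zeros(dim)  # kept between rounds
        self.personal_mask = np.ones(dim)     # e.g., a FedMask-style mask

    def local_update(self, global_weights, grad, lr=0.1):
        # Correct the local gradient with the stored state, apply the
        # personal mask, then refresh the state for the next round.
        corrected = grad - self.control_variate
        new_weights = global_weights - lr * corrected * self.personal_mask
        self.control_variate = 0.9 * self.control_variate + 0.1 * grad
        return new_weights

client = StatefulClient(dim=4)
w = client.local_update(np.zeros(4), np.array([0.5, -0.2, 0.1, 0.0]))
print(w)
```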
“…This discovery is particularly interesting for FL training, owing to the lower communication overhead of exchanging binary masks on the uplink (UL) and downlink (DL) instead of float-bit representations of the weight updates. In [8], the authors introduce FedMask, a personalized federated learning (FL) algorithm based on pruning overparameterized random networks. FedMask is a deterministic algorithm that prunes a random network by optimizing personalized binary masks with stochastic gradient descent (SGD), aiming to approximate the personalized target networks that fit the heterogeneous datasets found at the devices.…”
Section: Introduction
confidence: 99%
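Optimizing a binary mask with SGD, as the excerpt describes, requires a way to pass gradients through the non-differentiable binarization. The sketch below uses a straight-through estimator on a one-neuron toy problem; this is a common technique for this setting but is an assumption here, not necessarily the paper's exact procedure, and every value is made up.

```python
import numpy as np

# Sketch of training a binary mask with SGD, in the spirit of the FedMask
# description above (simplified illustration, not the paper's procedure).
# Real-valued scores are trained; the forward pass binarizes them, and the
# gradient is passed "straight through" the binarization step.

rng = np.random.default_rng(1)
w = rng.standard_normal(16)        # frozen random weights
scores = rng.standard_normal(16)   # trainable mask scores
x = rng.standard_normal(16)        # one toy input
y_target = 1.0
lr = 0.1

for step in range(300):
    mask = (scores > 0).astype(float)  # binarize for the forward pass
    y = np.dot(w * mask, x)            # masked linear model
    err = y - y_target                 # gradient of 0.5 * err**2 w.r.t. y
    # Straight-through estimator: treat d(mask)/d(scores) as 1, so the
    # loss gradient flows directly into the real-valued scores.
    grad_scores = err * w * x
    scores -= lr * grad_scores

final = np.dot(w * (scores > 0).astype(float), x)
print(f"final error: {abs(final - y_target):.4f}")  # shrinks as masks settle
```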