2021 IEEE 22nd International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)
DOI: 10.1109/spawc51858.2021.9593126

Communication-Efficient and Personalized Federated Lottery Ticket Learning

Cited by 6 publications (7 citation statements)
References 7 publications

“…Some researchers use the category information of the client's private dataset to transfer knowledge. For example, the FedDistill+ method [18] requires each client to calculate the average logit information of each category; the server side fuses this information to form the global logit information for each category and finally guides the client model training through this information. Chan et al. proposed the FedHe method, which likewise aggregates the logit information of private data to achieve knowledge aggregation and then uses this knowledge to guide the model training of each client.…”
Section: Federated Knowledge Distillation
Citation type: mentioning (confidence: 99%)
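The per-class logit aggregation this excerpt describes can be pictured with a minimal NumPy sketch. This is not code from FedDistill+ or FedHe; the function names (client_class_logits, fuse_global_logits) and the count-weighted fusion rule are illustrative assumptions.

```python
# Minimal sketch of per-class logit aggregation in a FedDistill-style scheme.
# Names and the weighting rule are illustrative, not taken from the cited papers.
import numpy as np

def client_class_logits(logits, labels, num_classes):
    """Client side: average the logits of each class over the private data."""
    avg = np.zeros((num_classes, logits.shape[1]))
    counts = np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            avg[c] = logits[mask].mean(axis=0)
            counts[c] = mask.sum()
    return avg, counts

def fuse_global_logits(per_client_avgs, per_client_counts):
    """Server side: fuse per-class averages across clients, weighted by sample counts."""
    total = np.maximum(sum(per_client_counts), 1e-12)[:, None]
    fused = sum(a * c[:, None] for a, c in zip(per_client_avgs, per_client_counts))
    return fused / total
```

The fused per-class logits would then be broadcast back to clients and used as soft targets during local training.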
“…Quantization [142–145, 148, 152, 156, 158, 168, 169, 174, 176, 183, 184, 188, 191–193, 198, 199] Sparsification [140, 141, 151, 153, 155, 165, 174, 186, 200, 202, 204] Client Selection [147, 166, 172, 185, 191, 198, 207] Asynchronous [146, 171, 190, 203, 211] Two-Level Aggregation [164, 175, 180, 182, 185] Select Model Updates [149, 157, …”
Section: Techniques Studies Referenced
Citation type: mentioning (confidence: 99%)
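As an illustration of the sparsification family surveyed in this excerpt (and in the spirit of the pruning-based communication savings the lottery-ticket paper targets), here is a generic top-k sparsification of a model update. It is a sketch of the general idea, not the compressor of any particular referenced work; the k_ratio default is an arbitrary assumption.

```python
# Generic top-k sparsification of a model update for communication efficiency.
# Illustrative only; not the specific method of any referenced study.
import numpy as np

def topk_sparsify(update, k_ratio=0.01):
    """Keep only the largest-magnitude fraction of entries; send values plus indices."""
    flat = update.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest magnitudes
    return idx, flat[idx], update.shape            # compressed payload

def densify(idx, values, shape):
    """Receiver (server) side: rebuild the sparse update as a dense tensor."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)
```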
“…Data-based optimization methods can leverage data generation techniques to make the data of different participants closer to identically distributed, thus alleviating the performance loss of the global model caused by data heterogeneity between clients. FedDistill [35] is a data-free knowledge distillation method in which the participants share the average of label-based logit vectors. FedGAN [36] trains a generative adversarial network (GAN) to handle non-IID data in a communication-efficient manner, but inevitably introduces bias.…”
Section: Federated Learning
Citation type: mentioning (confidence: 99%)
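To show how shared average logits could guide local training, here is one plausible distillation-style loss in which the fused per-class logits act as soft targets for each sample's class. The temperature value and the direction of the KL divergence are assumptions made for illustration, not details taken from FedDistill.

```python
# Sketch of a distillation regularizer driven by fused per-class logits.
# Temperature and KL direction are illustrative assumptions.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, labels, global_logits, T=2.0):
    """KL(teacher || student), where the teacher is the global logit vector of each label."""
    teacher = softmax(global_logits[labels], T)   # per-sample soft target, looked up by class
    student = softmax(student_logits, T)
    return np.mean(np.sum(teacher * (np.log(teacher + 1e-12) - np.log(student + 1e-12)), axis=-1))
```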
“…We combine the federated learning framework with the classical RGCN method and apply the knowledge distillation method to optimize the classical federated learning algorithm to address the problem of data heterogeneity among clients. Therefore, the baselines we apply include a detection model trained only locally (named Local), and federated learning methods FedAvg [25], FedProx [32], and FedDistill [35] that address the data heterogeneity problem. We evaluate the performance of these methods under different degrees of data heterogeneity.…”
Section: Datasets
Citation type: mentioning (confidence: 99%)