2023
DOI: 10.1109/tmc.2021.3070013

Abstract: This study develops a federated learning (FL) framework that overcomes the large incremental communication costs incurred by typical frameworks as model sizes grow, without compromising model performance. To this end, based on the idea of leveraging an unlabeled open dataset, we propose a distillation-based semi-supervised FL (DS-FL) algorithm that exchanges the outputs of local models among mobile devices, instead of the model-parameter exchange employed by typical frameworks. In DS-FL, the communication cost depends only on …
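As a rough illustration of the communication-cost claim, exchanging per-sample outputs on an open dataset scales with the number of open-dataset samples times the output dimension, whereas parameter exchange scales with the model size. The numbers below are assumptions for illustration, not figures reported in the paper:

```python
# Back-of-the-envelope comparison of per-round, per-client upload size
# (float32 payloads). Model size and open-dataset size are illustrative
# assumptions, not figures from the paper.
BYTES_PER_FLOAT = 4

num_parameters = 11_000_000   # parameter-exchange FL: a ResNet-18-sized model (assumed)
num_open_samples = 10_000     # DS-FL: size of the shared unlabeled open dataset (assumed)
num_classes = 10              # output dimension of the classifier

param_exchange_mb = num_parameters * BYTES_PER_FLOAT / 1e6
output_exchange_mb = num_open_samples * num_classes * BYTES_PER_FLOAT / 1e6

print(f"parameter exchange: {param_exchange_mb:.1f} MB per round")        # ~44.0 MB
print(f"output exchange (DS-FL): {output_exchange_mb:.1f} MB per round")  # ~0.4 MB
```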

Citations: cited by 86 publications (53 citation statements)
References: 19 publications
“…The recently proposed Federated Distillation [7], [12], [11] (Figure 1, illustration on the right) takes an entirely different approach to communicating the knowledge obtained during the local training. Instead of communicating the parameterization of the locally trained model θ_i to the server, in Federated Distillation the knowledge is communicated in the form of soft-label predictions on records of a public distillation data set X_pub according to…”
Section: B. Federated Distillation
Citation type: mentioning (confidence: 99%)
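A minimal sketch of the communication step described in this citation statement, assuming a PyTorch classifier; the function names and the simple averaging rule on the server are illustrative assumptions, not the cited papers' exact method:

```python
# Sketch of the Federated Distillation communication step quoted above:
# each client uploads soft-label predictions on the public distillation set
# X_pub instead of its parameters theta_i. Simple averaging is used here as
# an illustrative server-side aggregation rule.
import torch
import torch.nn.functional as F

def client_soft_labels(model: torch.nn.Module, x_pub: torch.Tensor) -> torch.Tensor:
    """Soft-label predictions of one client's local model on X_pub."""
    model.eval()
    with torch.no_grad():
        return F.softmax(model(x_pub), dim=1)   # shape: [|X_pub|, num_classes]

def aggregate_soft_labels(client_outputs: list[torch.Tensor]) -> torch.Tensor:
    """Server-side aggregation of the uploaded soft labels (here: mean)."""
    return torch.stack(client_outputs).mean(dim=0)
```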
“…As demonstrated in recent studies [7], [12], [11], Federated Distillation has several advantages over Federated Averaging: First, as model information is aggregated by means of distillation, Federated Distillation allows the participating clients to train different model architectures. This gives additional flexibility in settings where clients have heterogeneous hardware constraints.…”
Section: B. Federated Distillation
Citation type: mentioning (confidence: 99%)
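The architecture-flexibility point can be illustrated with a small sketch: because only predictions on the public set are exchanged, clients only need to agree on the output dimension. The specific architectures below are arbitrary assumptions:

```python
# Illustration of the architecture-flexibility point: clients may hold
# different model families, as long as they agree on the output dimension,
# because only predictions on the public set are exchanged. The architectures
# below are arbitrary examples.
import torch.nn as nn

num_classes = 10

client_models = [
    # client A: tiny linear classifier (e.g. for a low-end device)
    nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, num_classes)),
    # client B: small MLP (e.g. for a more capable device)
    nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
                  nn.Linear(256, num_classes)),
]
# Both models emit [batch, num_classes] outputs, so their soft labels on the
# public set can be aggregated even though their parameterizations differ.
```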
“…Recently, some researchers [130,131] have leveraged semi-supervised and unsupervised learning in FL. In semi-supervised learning, it is supposed that very few label data are available, so a model is trained on both (available) labeled data and (public) unlabeled data.…”
Section: Supervised/unsupervised Training
Citation type: mentioning (confidence: 99%)
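A hedged sketch of such a semi-supervised local update: a supervised loss on the client's small labeled set is combined with a distillation loss towards the aggregated global soft labels on the public unlabeled set. The loss weighting and all names are assumptions, not the cited works' exact recipe:

```python
# Sketch of a semi-supervised local update in this spirit: supervised loss on
# the few labeled samples plus a distillation loss on the public unlabeled set.
# The loss weighting and all names are assumptions.
import torch.nn.functional as F

def local_update(model, optimizer, x_labeled, y_labeled,
                 x_pub, global_soft_labels, distill_weight=1.0):
    model.train()
    optimizer.zero_grad()

    # Supervised loss on the few available labeled samples.
    supervised_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Distillation loss on the public unlabeled set: cross-entropy against
    # the aggregated soft labels (equal to KL divergence up to a constant).
    log_probs_pub = F.log_softmax(model(x_pub), dim=1)
    distill_loss = -(global_soft_labels * log_probs_pub).sum(dim=1).mean()

    loss = supervised_loss + distill_weight * distill_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```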