2022
DOI: 10.1007/s40747-022-00894-4

Semi-HFL: semi-supervised federated learning for heterogeneous devices

Abstract: In the vanilla federated learning (FL) framework, the central server distributes a globally unified model to each client, and clients train it on labeled samples. However, in most cases, clients are equipped with different devices and are exposed to a variety of situations. There are great differences between clients in storage, computing, communication, and other resources, which means the unified deep models used in traditional FL cannot fit clients' personalized resource conditions. Furthermore, a great deal of la…
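
The vanilla FL round the abstract refers to can be sketched as follows. This is a minimal FedAvg-style illustration, not code from the paper; the function names (client_update, fedavg) and the learning rate are hypothetical placeholders.

```python
# Minimal sketch of one vanilla FL round: the server broadcasts a single
# unified global model, each client takes local steps on its labeled data,
# and the server averages the returned models weighted by sample count.
from typing import List

import numpy as np


def client_update(global_weights: List[np.ndarray],
                  local_gradients: List[np.ndarray],
                  lr: float = 0.01) -> List[np.ndarray]:
    # One local SGD step on a client's labeled samples; the gradients are
    # assumed to be computed by the client's own training code.
    return [w - lr * g for w, g in zip(global_weights, local_gradients)]


def fedavg(client_weights: List[List[np.ndarray]],
           client_sizes: List[int]) -> List[np.ndarray]:
    # Server-side aggregation: average each layer across clients,
    # weighted by how many samples each client trained on.
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    return [
        sum((n / total) * weights[layer]
            for n, weights in zip(client_sizes, client_weights))
        for layer in range(num_layers)
    ]


# Usage: two clients sharing a global model with one weight matrix.
g = [np.zeros((2, 2))]
c1 = client_update(g, [np.ones((2, 2))])      # client 1's local step
c2 = client_update(g, [2 * np.ones((2, 2))])  # client 2's local step
new_global = fedavg([c1, c2], client_sizes=[10, 30])
```

The paper's point is that this unified-model setup is exactly what breaks down when clients differ in storage, compute, and communication capacity.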

Cited by 7 publications (1 citation statement) | References 35 publications

“…To tackle the problem of the lack of labels on clients, semi-supervised learning is integrated into the FL framework to leverage the large amount of unlabeled data. Zhong et al. proposed a semi-supervised federated learning method for heterogeneous devices, which adopts a pseudo-labeling method as its semi-supervised learning strategy [32]. Specifically, it assumes that the central server has a small amount of labeled data to train an initial global model, which is then used to obtain pseudo-labels for local unlabeled samples that meet certain entropy conditions (i.e., below a selected entropy threshold).…”
Section: Related Work
confidence: 99%
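
The entropy-thresholded pseudo-labeling step described in this citation statement can be sketched as below. This is a hedged illustration of the general technique, not code from [32]; the threshold value 0.5 and the function names are illustrative assumptions.

```python
# Sketch of entropy-based pseudo-label selection: keep only unlabeled
# samples whose prediction entropy falls below a threshold, and use the
# argmax class of the global model's prediction as the pseudo-label.
import numpy as np


def entropy(probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    # Shannon entropy of each row of class probabilities, shape (N, C) -> (N,).
    return -np.sum(probs * np.log(probs + eps), axis=1)


def select_pseudo_labels(probs: np.ndarray, threshold: float = 0.5):
    # Return (indices, pseudo_labels) for samples confident enough to keep.
    h = entropy(probs)
    keep = np.where(h < threshold)[0]        # low entropy = confident prediction
    return keep, probs[keep].argmax(axis=1)  # argmax class as pseudo-label


# Example: three unlabeled samples; only the confident ones survive.
p = np.array([[0.95, 0.03, 0.02],   # low entropy  -> kept
              [0.40, 0.35, 0.25],   # high entropy -> dropped
              [0.05, 0.90, 0.05]])  # low entropy  -> kept
idx, labels = select_pseudo_labels(p)
print(idx, labels)                  # [0 2] [0 1]
```

Filtering on entropy rather than on the raw max probability lets the selection account for the full shape of the predicted distribution, which is presumably why an entropy threshold is used here.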