2022
DOI: 10.1007/s00779-022-01688-8

Semi-supervised and personalized federated activity recognition based on active learning and label propagation

Abstract: One of the major open problems in sensor-based Human Activity Recognition (HAR) is the scarcity of labeled data. Among the many solutions to address this challenge, semi-supervised learning approaches represent a promising direction. However, their centralized architecture incurs the scalability and privacy problems that arise when the process involves a large number of users. Federated learning (FL) is a promising paradigm to address these problems. However, the FL methods that have been proposed for HAR a…
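
The abstract describes a pipeline that combines active learning with label propagation on each client. As a rough illustration only (not the paper's implementation), the sketch below pairs scikit-learn's LabelPropagation with an entropy-based query step; all function and variable names are hypothetical.

```python
# Rough illustration only (not the paper's implementation): on-device
# annotation for one HAR client that combines label propagation with an
# entropy-based active-learning query. All names here are hypothetical.
import numpy as np
from scipy.stats import entropy
from sklearn.semi_supervised import LabelPropagation

def annotate_client_windows(features, labels, query_budget=5):
    """features: (n_windows, n_features) sensor feature vectors.
    labels: (n_windows,) array with -1 marking unlabeled windows."""
    lp = LabelPropagation(kernel="rbf", gamma=20)
    lp.fit(features, labels)

    # Active learning: pick the most uncertain unlabeled windows and ask
    # the user to label them (the feedback step is omitted here).
    probs = lp.label_distributions_            # (n_windows, n_classes)
    uncertainty = entropy(probs.T)             # per-window prediction entropy
    unlabeled = np.where(labels == -1)[0]
    queries = unlabeled[np.argsort(-uncertainty[unlabeled])][:query_budget]

    # The remaining unlabeled windows keep their propagated pseudo-labels.
    pseudo_labels = lp.transduction_
    return pseudo_labels, queries
```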

Cited by 21 publications (25 citation statements)
References 66 publications (94 reference statements)
“…FedSem+, which is based on FedSem [Albaseer et al, 2020], computes all pseudo-labels only once during training, but it additionally uses entropy-based sample weights, as we found this to consistently improve performance. FedAvg+PL(localLP), based on FedHar [Presotto et al, 2022], uses per-client label propagation, otherwise it is identical to Algorithm 3. Additionally, we report results for training in a supervised manner on only the available labeled examples, FedAvg (labeled only).…”
Section: Methods
confidence: 99%
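
To make the entropy-based sample weighting mentioned in the statement above concrete, here is a minimal Python sketch: pseudo-labels are taken once from the model's softmax outputs, and each sample is down-weighted by the normalized entropy of its prediction. The normalization is an assumption made for illustration, not necessarily the scheme used by the cited work.

```python
# Minimal sketch of entropy-based sample weighting for pseudo-labels; the
# exact scheme used by FedSem+ may differ, and the normalization shown here
# is an assumption made for illustration.
import numpy as np

def pseudo_labels_with_entropy_weights(probs):
    """probs: (n_samples, n_classes) softmax outputs on unlabeled client data."""
    eps = 1e-12
    ent = -np.sum(probs * np.log(probs + eps), axis=1)
    max_ent = np.log(probs.shape[1])       # entropy of a uniform prediction
    weights = 1.0 - ent / max_ent          # confident predictions weigh more
    pseudo = probs.argmax(axis=1)
    return pseudo, weights
```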
“…In this setting Zhang et al [2021] combine local consistency with grouping of client updates to reduce gradient diversity while Diao et al [2022] combine consistency, through strong data augmentation, with pseudo-labeling unlabeled client data. Other methods focus exclusively on pseudo-labeling: Albaseer et al [2020] and Lin et al [2021] both use network predictions to assign pseudo labels, while Presotto et al [2022] develop a specialized method for human activity recognition which uses label propagation locally on each client to pseudo label incoming data. Unlike in classical SSL, none of the above methods fully make use of the knowledge gained from estimating the data manifold because they exploit interactions between data points at most locally within each client.…”
Section: Related Work
confidence: 99%
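
The statement above contrasts methods that propagate labels only locally with ones that exploit the global data manifold. A minimal sketch of the per-client (local) variant, assuming scikit-learn's LabelSpreading and purely illustrative names:

```python
# Minimal sketch of per-client (local) label propagation, assuming
# scikit-learn's LabelSpreading; names are illustrative and not taken from
# the cited papers. Each client propagates labels only over its own data,
# so no information about the global data manifold is used.
from sklearn.semi_supervised import LabelSpreading

def pseudo_label_per_client(clients):
    """clients: list of (features, labels) pairs, with -1 marking unlabeled."""
    pseudo = []
    for X, y in clients:
        model = LabelSpreading(kernel="knn", n_neighbors=7)
        model.fit(X, y)                    # propagation restricted to this client
        pseudo.append(model.transduction_)
    return pseudo
```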
“…1. In personalized federated learning, users collaborate to develop personalized models instead of generating a global model [33], [38], [51]-[56]. However, these works primarily focus on image classification benchmarks such as CIFAR10 [57] and CIFAR100 [58], and applications such as multi-sensor and IoT classification are left largely unattended.…”
Section: Central Server
confidence: 99%
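
One common way to obtain personalized models is to fine-tune the aggregated global model on each client's private data; the sketch below shows that baseline only. The cited works use a range of personalization schemes, so this is an illustrative assumption with hypothetical names, not their method.

```python
# Illustrative baseline for personalization (local fine-tuning of the shared
# global model); a sketch under that assumption, not the cited works' method.
import copy
import torch

def personalize(global_model, client_loader, epochs=1, lr=1e-3):
    """Fine-tune a copy of the global model on one client's private data."""
    model = copy.deepcopy(global_model)        # keep the global weights intact
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in client_loader:             # client-local labeled batches
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model                               # personalized model stays on-device
```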
“…In semi-supervised learning, we have a small amount of labeled data and a large amount of unlabeled data. There have been multiple attempts to unify federated learning with semi-supervised learning [12], [33], [38], [9]. Zhao et al. consider a setting in which users have a large amount of unlabeled data, while the server has a set of labeled data [12].…”
Section: Central Server
confidence: 99%
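
For the setting described above (labeled data at the server, unlabeled data on the clients), a minimal PyTorch sketch of a server-side supervised pass followed by sample-size-weighted FedAvg aggregation is shown below; it illustrates the general setup, not the exact procedure of [12], and all names are hypothetical.

```python
# Hedged sketch of the setting described above (labeled data at the server,
# unlabeled data on the clients): a supervised pass on the server followed by
# plain sample-size-weighted FedAvg aggregation. This illustrates the general
# setup, not the exact procedure of [12]; all names are hypothetical.
import torch

@torch.no_grad()
def fedavg(state_dicts, num_samples):
    """Sample-size-weighted average of model state_dicts."""
    total = float(sum(num_samples))
    return {key: sum(sd[key].float() * (n / total)
                     for sd, n in zip(state_dicts, num_samples))
            for key in state_dicts[0]}

def server_supervised_pass(model, labeled_loader, lr=1e-2):
    """One supervised epoch on the server's labeled data before broadcasting."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for x, y in labeled_loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model.state_dict()
```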