2021
DOI: 10.1109/jiot.2020.3009358

Federated Self-Supervised Learning of Multisensor Representations for Embedded Intelligence

Abstract: Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models due to privacy, bandwidth limitations, and the prohibitive cost of annotations. Federated learning provides a compelling framework for learning models from decentralized data, but conventionally, it assumes the availability of labeled samples, whereas on-device data are generally either unlabeled or cannot be annotated readily through user i…

Cited by 73 publications (56 citation statements)
References 46 publications

Citation statements (ordered by relevance):
“…Since annotating large-scale data can be prohibitively expensive and time-consuming, USL and SSL can be useful for continually improving models in AIoT systems by harvesting the large-scale unlabeled data collected by massive numbers of sensors [196]. Besides, the multi-modal data from heterogeneous sensors (e.g., RGB/infrared/depth camera, IMU, Lidar, microphone) can be used to design cross-modal pretext tasks (e.g., by leveraging audio-visual correspondence and ego-motion) and free semantic-label-based pretext tasks (e.g., by leveraging depth estimation and semantic segmentation) for self-supervised learning [197].…”
Section: B. Learning (mentioning)
confidence: 99%
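
The cross-modal correspondence idea mentioned above can be illustrated with a short, self-contained sketch. The pair construction below is an assumption for illustration only (the function name, window length, and synthetic streams are hypothetical and not taken from [196], [197], or the indexed paper): a binary pretext task that asks whether two sensor modalities were recorded over the same time span.

```python
# Minimal sketch (not from the cited papers): building a cross-modal
# correspondence pretext task from two synchronized sensor streams.
# make_correspondence_pairs, the window length, and the synthetic data
# are illustrative assumptions, not the authors' implementation.
import numpy as np

def make_correspondence_pairs(accel, audio, win=200, n_pairs=1024, rng=None):
    """Return (accel window, audio window, label) triples.

    label = 1 if the two windows cover the same time span (temporally
    aligned pair), 0 if the audio window is drawn from a different,
    randomly chosen time span. A small two-branch network trained to
    predict this label learns a joint representation without annotations.
    """
    rng = rng or np.random.default_rng(0)
    max_start = min(len(accel), len(audio)) - win
    xs, ys, labels = [], [], []
    for _ in range(n_pairs):
        t = rng.integers(0, max_start)
        aligned = rng.random() < 0.5
        t_audio = t
        if not aligned:
            while t_audio == t:  # force a genuinely misaligned negative
                t_audio = rng.integers(0, max_start)
        xs.append(accel[t:t + win])
        ys.append(audio[t_audio:t_audio + win])
        labels.append(int(aligned))
    return np.stack(xs), np.stack(ys), np.array(labels)

# Toy usage with synthetic, synchronized streams.
accel = np.random.randn(10_000, 3)   # e.g., 3-axis accelerometer
audio = np.random.randn(10_000, 1)   # e.g., mono microphone envelope
X_a, X_m, y = make_correspondence_pairs(accel, audio)
print(X_a.shape, X_m.shape, y.mean())  # roughly half positive pairs
```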
“…Federated learning allows an AI to be adapted by retraining it at runtime (Ek et al. 2020, 2021; Saeed et al. 2020; Konečný et al. 2016). Existing approaches do not tackle the aspects of AI software component deployment.…”
Section: Related Work (mentioning)
confidence: 99%
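
As a rough illustration of how a model can be retrained at runtime from decentralized updates, the sketch below shows one federated-averaging round in plain NumPy. It is a generic FedAvg sketch, not the implementation of any of the works cited above; the function name and toy clients are assumptions.

```python
# Minimal sketch (assumption, not the cited systems' code): one round of
# federated averaging, the aggregation rule behind most federated
# retraining schemes. Each client trains locally and sends only its
# updated weights; the server averages them, weighted by local data size.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Average per-client weight lists, weighted by number of local samples."""
    total = float(sum(client_sizes))
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Toy usage: three clients, each holding two weight tensors.
clients = [[np.ones((4, 4)) * k, np.ones(4) * k] for k in (1.0, 2.0, 3.0)]
sizes = [100, 50, 50]
global_weights = fedavg(clients, sizes)
print(global_weights[0][0, 0])  # 1*0.5 + 2*0.25 + 3*0.25 = 1.75
```

Because only weights (not raw sensor data) leave the device, the same aggregation step can run repeatedly at runtime to keep the global model current.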
“…In the second part, a transfer learning approach from [13] was used for the supervised classification of affective states using previously created features. Another study based on applying SSL techniques to the WESAD data set is presented in [14]. That paper uses a "pretext task" to train the model without labeled data, in which it must be determined whether the raw data and the wavelet transformations are temporally aligned.…”
Section: Introduction (mentioning)
confidence: 99%
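
The alignment pretext task summarized above can be sketched as follows. This is an illustrative reconstruction, not the code of [14] or of the indexed paper: PyWavelets' continuous wavelet transform stands in for the wavelet representation, and all window sizes and names are assumptions.

```python
# Minimal sketch (assumptions throughout): the wavelet/alignment pretext
# task described above, i.e., predict whether a raw sensor window and a
# scalogram were computed from the same time span. PyWavelets' CWT is a
# stand-in; the transform used in the cited work may differ.
import numpy as np
import pywt

def scalogram(window, scales=np.arange(1, 33), wavelet="morl"):
    """Continuous wavelet transform magnitude of a 1-D window."""
    coeffs, _ = pywt.cwt(window, scales, wavelet)
    return np.abs(coeffs)  # shape: (len(scales), len(window))

def alignment_pairs(signal, win=256, n_pairs=512, rng=None):
    """(raw window, scalogram, label) triples for the binary pretext task."""
    rng = rng or np.random.default_rng(0)
    max_start = len(signal) - win
    raw, spec, labels = [], [], []
    for _ in range(n_pairs):
        t = rng.integers(0, max_start)
        aligned = rng.random() < 0.5
        # misaligned scalograms come from a shifted time span
        t2 = t if aligned else (t + rng.integers(win, max_start)) % max_start
        raw.append(signal[t:t + win])
        spec.append(scalogram(signal[t2:t2 + win]))
        labels.append(int(aligned))
    return np.stack(raw), np.stack(spec), np.array(labels)

# Toy usage on a synthetic 1-D physiological-like signal.
sig = np.sin(np.linspace(0, 200 * np.pi, 20_000)) + 0.1 * np.random.randn(20_000)
X_raw, X_spec, y = alignment_pairs(sig)
print(X_raw.shape, X_spec.shape, y.mean())
```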
“…The proposed model in [14] is evaluated in two ways: using a linear classifier on top of the SSL component and assessing the number of samples used for the supervised learning process. The first evaluation approach includes a direct comparison of the features created by SSL with features designed with expert knowledge, as in [6].…”
Section: Introduction (mentioning)
confidence: 99%
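
The first evaluation protocol (a linear classifier on top of the frozen SSL component) can be sketched as below. `frozen_encoder`, the toy feature extractor, and the synthetic windows are placeholders; the actual encoder and datasets of [14] and the indexed paper are not reproduced here.

```python
# Minimal sketch (assumption, not the paper's evaluation code): linear
# evaluation of self-supervised features. frozen_encoder stands in for
# the pretrained SSL network with fixed weights; only the linear
# classifier on top is trained with labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def linear_evaluation(frozen_encoder, windows, labels, seed=0):
    """Fit a linear classifier on frozen features and report test accuracy."""
    feats = np.stack([frozen_encoder(w) for w in windows])  # encoder never updated
    X_tr, X_te, y_tr, y_te = train_test_split(
        feats, labels, test_size=0.3, random_state=seed, stratify=labels
    )
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# Toy usage with a stand-in "encoder" (summary statistics per window).
def frozen_encoder(window):
    return np.array([window.mean(), window.std(), window.min(), window.max()])

windows = [np.random.randn(256) + c for c in np.repeat([0.0, 1.0], 200)]
labels = np.repeat([0, 1], 200)
print("linear-eval accuracy:", linear_evaluation(frozen_encoder, windows, labels))
```

Comparing this score against a classifier trained on hand-crafted features (as in [6]) is what allows the self-supervised representation to be judged without any extra labeling effort.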