2021
DOI: 10.1016/j.mlwa.2021.100152
Sense and Learn: Self-supervision for omnipresent sensors

Cited by 22 publications (32 citation statements)
References 23 publications
“…The objective is to solve multiple binary classification problems to detect if the sample has been transformed or is a clean example. The sense and learn framework [120] proposed a range of pretext tasks to teach models based on a wide variety of inputs, as in some cases utilising transformations without domain knowledge can lead to unintended biases. Specifically, the tasks of blend detection, odd segment recognition, modality denoising and feature prediction from a masked window provide an effective, scalable and inexpensive way to acquire pseudo labels from the signal itself.…”
Section: Pretext Task
confidence: 99%
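The transformation-detection tasks quoted above reduce to training binary classifiers on pseudo-labelled windows: corrupt a signal, label it 1, and let a shared encoder learn to tell it from the clean original. A minimal PyTorch sketch, assuming an accelerometer-like input shape and a simple blend corruption (the window size, encoder, and transformation are illustrative, not the paper's exact setup):

```python
# Minimal sketch of a transformation-detection pretext task: a binary
# classifier decides whether a signal window is clean or transformed.
# Network, window size, and the "blend" corruption are assumptions.
import torch
import torch.nn as nn

WINDOW, CHANNELS = 128, 3  # assumed accelerometer-like input

def blend(x: torch.Tensor) -> torch.Tensor:
    """Blend each window with another window from the batch (one possible
    corruption; the cited framework proposes several such tasks)."""
    perm = torch.randperm(x.size(0))
    return 0.5 * x + 0.5 * x[perm]

encoder = nn.Sequential(  # shared feature extractor across pretext tasks
    nn.Conv1d(CHANNELS, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
)
head = nn.Linear(32, 1)  # one binary head per pretext task
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    clean = torch.randn(16, CHANNELS, WINDOW)   # stand-in for sensor windows
    corrupt = blend(clean)
    x = torch.cat([clean, corrupt])
    y = torch.cat([torch.zeros(16, 1), torch.ones(16, 1)])  # 0 = clean, 1 = transformed
    loss = bce(head(encoder(x)), y)
    opt.zero_grad(); loss.backward(); opt.step()
```

The pseudo labels come for free from the corruption itself, which is what makes the scheme scalable and inexpensive, as the statement notes.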
“…In the time-series realm, [120] used symmetric triple-based objectives for learning representations from various sensor input modalities. [67] applied contrastive predictive coding [105] for HAR.…”
Section: Multi-task Losses
confidence: 99%
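As a rough illustration of the contrastive predictive coding objective [105] that [67] applied to HAR, the following InfoNCE sketch treats the matching (context, future-window) pair as the positive and every other row in the batch as a negative; the linear predictor and dimensions are assumptions, not details from the cited papers:

```python
# Minimal InfoNCE sketch in the spirit of contrastive predictive coding:
# a context vector predicts the encoding of a future window, contrasted
# against the other windows in the batch.
import torch
import torch.nn.functional as F

def info_nce(context: torch.Tensor, future: torch.Tensor, predictor: torch.nn.Module) -> torch.Tensor:
    """context, future: (batch, dim). The positive pair is the matching
    row; every other row in the batch serves as a negative."""
    pred = predictor(context)               # predicted future encoding
    logits = pred @ future.t()              # (batch, batch) similarity matrix
    targets = torch.arange(context.size(0)) # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

dim = 64
predictor = torch.nn.Linear(dim, dim)
ctx, fut = torch.randn(32, dim), torch.randn(32, dim)  # stand-ins for encoder outputs
loss = info_nce(ctx, fut, predictor)
loss.backward()
```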
“…Contrastive learning methods have mostly been proposed for single-modality data across a variety of applications, including computer vision [10,12,32,37], audio processing [32,62,90], natural language processing [23,28], and sensor data analytics [13,18,22,24,47,64,65,78]. Recently, [19] reviewed existing self-supervised learning approaches for multimodal and temporal data, including contrastive learning models.…”
Section: Multimodal Contrastive Learning
confidence: 99%
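A common cross-modal variant of the contrastive methods surveyed in this passage is an NT-Xent-style loss over paired modality embeddings: views of the same instant from two sensors are pulled together, all other pairings pushed apart. A hedged sketch, assuming two pre-computed embedding matrices and a temperature of 0.1 (both assumptions):

```python
# Sketch of a cross-modal contrastive objective: embeddings of two
# modalities of the same sample (e.g. accelerometer and gyroscope) form
# the positive pair; all other batch pairings are negatives.
import torch
import torch.nn.functional as F

def cross_modal_nt_xent(za: torch.Tensor, zb: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """za, zb: (batch, dim) embeddings of modality A and B; row i of each
    comes from the same underlying sample (the positive pair)."""
    za, zb = F.normalize(za, dim=1), F.normalize(zb, dim=1)
    logits = za @ zb.t() / tau              # temperature-scaled cosine similarities
    targets = torch.arange(za.size(0))
    # symmetric loss: A retrieves B and B retrieves A
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

za = torch.randn(32, 64, requires_grad=True)  # stand-ins for encoder outputs
zb = torch.randn(32, 64, requires_grad=True)
loss = cross_modal_nt_xent(za, zb)
loss.backward()
```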
“…In this regard, several recent self-supervised deep learning models have been proposed to learn from two or three modalities together, such as audio-video [51,53], audio-text [40], vision-text [41,49,58], and audio-vision-text [1,2,34]. Despite the undeniable need to consider data from many different sensor sources [7] and the difficulty of annotating fast-growing time-series data in ubiquitous applications, self-supervised contrastive learning on cross-modal time-series data is still not well studied; existing approaches largely focus on surrogate tasks [65], multimodal learning [22,36], or generative modeling [92].…”
Section: Introduction
confidence: 99%
“…Other self-supervised approaches in activity recognition include signal correspondence learning with the wavelet transform in a federated learning framework [59], utilizing eight auxiliary tasks for learning high-level features [60], and contrastive learning for activity recognition in healthcare-based applications [66]. In most unsupervised and self-supervised approaches, including autoencoders, multi-task self-supervision, masked reconstruction, and CPC, the source and target datasets are typically the same, although the pre-training does not use label information.…”
Section: Self-supervised Learning in Human Activity Recognition
confidence: 99%
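The masked-reconstruction approach named in this passage can be illustrated with a short sketch: zero out a fraction of time steps in a sensor window and train a model to reconstruct them, with the loss computed only on the masked positions so no labels are needed. The 15% mask ratio and the tiny convolutional autoencoder below are assumptions for illustration:

```python
# Sketch of a masked-reconstruction pretext task for sensor signals:
# mask random time steps and reconstruct them from the visible context.
import torch
import torch.nn as nn

model = nn.Sequential(  # stand-in encoder-decoder over 3-channel signals
    nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 3, kernel_size=5, padding=2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(16, 3, 128)                     # stand-in sensor windows
    mask = (torch.rand(16, 1, 128) < 0.15).float()  # mask ~15% of time steps
    x_masked = x * (1 - mask)
    recon = model(x_masked)
    # penalize reconstruction error only at the masked positions
    loss = ((recon - x) ** 2 * mask).sum() / mask.sum().clamp(min=1)
    opt.zero_grad(); loss.backward(); opt.step()
```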