2022
DOI: 10.1145/3550299
Assessing the State of Self-Supervised Human Activity Recognition Using Wearables

Abstract: The emergence of self-supervised learning in the field of wearables-based human activity recognition (HAR) has opened up opportunities to tackle the most pressing challenges in the field, namely to exploit unlabeled data to derive reliable recognition systems for scenarios where only small amounts of labeled training samples can be collected. As such, self-supervision, i.e., the 'pretrain-then-finetune' paradigm, has the potential to become a strong alternative to the predominant end-to-end training approach…

Cited by 36 publications (52 citation statements)
References 42 publications
“…Such design requires domain expertise and results in representations that are useful for downstream applications. This modeling paradigm of 'pretrain-then-finetune' has resulted in significant performance boosts for human activity recognition from wearables [19], [27], [30].…”
Section: Related Work
confidence: 99%
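The 'pretrain-then-finetune' paradigm referenced in this statement separates representation learning on unlabeled sensor data from supervised training on the small labeled set. The sketch below illustrates that two-stage workflow; it is not the paper's implementation. It assumes PyTorch, a generic 1D-CNN encoder, windows shaped (batch, channels, time), and a hypothetical `pretext_loss` callable standing in for whichever self-supervised objective is used.

```python
# Minimal sketch of the 'pretrain-then-finetune' workflow for wearable HAR.
# Assumptions (not from the paper): PyTorch, a small 1D-CNN encoder, and a
# hypothetical self-supervised objective passed in as `pretext_loss`.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """1D-CNN backbone that maps a sensor window to a feature vector."""
    def __init__(self, in_channels=3, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def pretrain(encoder, unlabeled_loader, pretext_loss, epochs=10):
    """Stage 1: train the encoder on unlabeled windows with a pretext task."""
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    for _ in range(epochs):
        for windows in unlabeled_loader:           # no labels needed here
            loss = pretext_loss(encoder, windows)  # e.g. a contrastive objective
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder

def finetune(encoder, labeled_loader, num_classes, epochs=10, freeze=True):
    """Stage 2: attach a classifier head and train on the small labeled set."""
    head = nn.Linear(128, num_classes)
    if freeze:
        for p in encoder.parameters():
            p.requires_grad_(False)
        params = head.parameters()
    else:
        params = list(encoder.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for windows, labels in labeled_loader:
            loss = ce(head(encoder(windows)), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder, head
```

Whether the encoder is frozen or fully fine-tuned during stage 2 is one of the design choices that typically determines how much labeled data is needed.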
“…Another contrastive learning method is SimCLR [24], [26], [27], which relies heavily on data transformations [32], as it contrasts two randomly augmented views of the data in a siamese setup, resulting in highly effective representations. SimSiam [27], [33] and Bootstrap Your Own Latent (BYOL) [27], [34] are also siamese methods, albeit ones that do not contrast the two views. For multi-sensor setups, ColloSSL [35] and COCOA [36] contrast across different sensor locations during training.…”
Section: Related Work
confidence: 99%
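The SimCLR-style setup mentioned in this statement contrasts two randomly augmented views of each sensor window within a batch. The following sketch shows one common formulation of that objective (the NT-Xent loss); the jitter-and-scaling augmentations are illustrative assumptions, not the specific transformations used in the cited papers.

```python
# Minimal sketch of a SimCLR-style objective for sensor windows: two random
# augmentations of each window are pulled together, all other windows in the
# batch are pushed apart (NT-Xent loss). Augmentations here are assumptions.
import torch
import torch.nn.functional as F

def augment(x):
    """Random jitter + channel-wise scaling of a (batch, channels, time) window."""
    noise = 0.05 * torch.randn_like(x)
    scale = 1.0 + 0.1 * torch.randn(x.size(0), x.size(1), 1)
    return (x + noise) * scale

def nt_xent(z1, z2, temperature=0.1):
    """NT-Xent: cross-entropy over cosine similarities of the 2N embeddings."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d)
    sim = z @ z.t() / temperature                         # (2N, 2N) similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float('-inf'))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])    # positives
    return F.cross_entropy(sim, targets)

def simclr_loss(encoder, windows):
    """Contrastive pretext loss over two augmented views of the same batch."""
    v1, v2 = augment(windows), augment(windows)
    return nt_xent(encoder(v1), encoder(v2))
```

In the two-stage sketch above, `simclr_loss` could be passed as the `pretext_loss` for pretraining; siamese methods such as SimSiam and BYOL instead replace this contrastive term with a prediction objective that does not use negative pairs.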