2023 IEEE International Conference on Pervasive Computing and Communications (PerCom)
DOI: 10.1109/percom56429.2023.10099197

Investigating Enhancements to Contrastive Predictive Coding for Human Activity Recognition

Cited by 4 publications (14 citation statements)
References 38 publications
“…As such, self‐supervision, that is, the paradigm of “pretrain‐then‐finetune” has the potential to become a strong alternative to the previously dominant end‐to‐end training approaches. Recently a number of contributions have been made that introduced self‐supervised learning into the field of HAR, including benchmarks of leading techniques (Haresamudram, Essa, and Plötz 2022) and novel computational methods (Haresamudram, Essa, and Plötz 2023).…”
Section: Personalized Longitudinal Interaction
confidence: 99%
“…SimSiam employs a pair of mirror-image networks and a predictor network at the end of a node [62][63][64]. The loss function employs an asymmetric stop gradient to optimize the pairwise alignments between positive pairs because the two branches have identical weights.…”
Section: Non-contrastive SSL Models
confidence: 99%
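To make the asymmetric stop-gradient described in that statement concrete, here is a minimal sketch of a SimSiam-style loss in PyTorch. The encoder and predictor modules are hypothetical placeholders, not the cited papers' exact architectures.

```python
import torch.nn.functional as F

def simsiam_loss(encoder, predictor, view1, view2):
    # Both augmented views pass through the same (weight-shared) encoder.
    z1, z2 = encoder(view1), encoder(view2)
    p1, p2 = predictor(z1), predictor(z2)

    def d(p, z):
        # Negative cosine similarity; detaching the target branch is the
        # stop-gradient that makes the loss asymmetric and prevents collapse.
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

    # Symmetrized over the two branches.
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)
```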
“…[78] examines the pre-training data efficiency, i.e., the minimal quantities of pre-training data required for effective wearable-based self-supervised learning. Enhancements to wearable-based CPC were investigated in [30] by considering three components: the encoder architecture, the autoregressive network, and the future timestep prediction task. The resulting 'Enhanced CPC' demonstrates substantial improvements over the original framework [68] as well as outperforms state-of-the-art self-supervision on four of six target datasets.…”
Section: Self-supervised Representation Learning For Human Activity R...
confidence: 99%
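For orientation, the three CPC components named in that statement (encoder, autoregressive network, future-timestep prediction) can be sketched as follows. The layer sizes, the GRU choice, and all names are illustrative assumptions, not the exact 'Enhanced CPC' configuration of [30].

```python
import torch.nn as nn

class CPCSketch(nn.Module):
    def __init__(self, in_ch=3, z_dim=128, c_dim=256, k_steps=4):
        super().__init__()
        # (i) encoder: maps raw sensor windows to per-timestep latents z_t
        self.encoder = nn.Sequential(
            nn.Conv1d(in_ch, z_dim, kernel_size=3, padding=1), nn.ReLU(),
        )
        # (ii) autoregressive network: summarizes z_<=t into a context c_t
        self.ar = nn.GRU(z_dim, c_dim, batch_first=True)
        # (iii) one linear head per future step, used to predict z_{t+k}
        self.predictors = nn.ModuleList(
            [nn.Linear(c_dim, z_dim) for _ in range(k_steps)]
        )

    def forward(self, x):                      # x: (batch, channels, time)
        z = self.encoder(x).transpose(1, 2)    # (batch, time, z_dim)
        c, _ = self.ar(z)                      # (batch, time, c_dim)
        # Predicted future latents, scored against true/negative latents
        # with an InfoNCE loss (omitted here for brevity).
        preds = [w(c) for w in self.predictors]
        return z, c, preds
```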
“…Following the self-supervised learning paradigm, our approach contains two stages: (i) pre-training, where the network learns to map unlabeled data to a codebook of vectors, resulting in the discrete representations; and (ii) fine-tuning/classification, which utilizes the discrete representations as input for recognizing activities. In order to enable the mapping, we apply vector quantization (VQ) to the Enhanced CPC framework [30]. Therefore, the base of the discretization process is self-supervision, where the loss from the pretext task is added to the loss from the VQ module in order to update the network parameters as well as the codebook vectors.…”
Section: Input Window Discretization
confidence: 99%
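The quantization step described in that statement, where the pretext loss and the VQ loss jointly update the network and the codebook, can be illustrated with a minimal vector-quantization sketch. The commitment weight and function names are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def vector_quantize(z, codebook, beta=0.25):
    # z: (N, D) encoder latents; codebook: (K, D) learnable embeddings
    dists = torch.cdist(z, codebook)           # (N, K) pairwise distances
    idx = dists.argmin(dim=1)                  # nearest codebook entry
    z_q = codebook[idx]                        # discrete representations
    # Codebook loss pulls embeddings toward encoder outputs; the
    # commitment term keeps encoder outputs near their chosen codes.
    vq_loss = F.mse_loss(z_q, z.detach()) + beta * F.mse_loss(z, z_q.detach())
    # Straight-through estimator: gradients flow back to the encoder.
    z_q = z + (z_q - z).detach()
    return z_q, idx, vq_loss

# total_loss = pretext_loss + vq_loss  # joint update, as the quote describes
```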