2023
DOI: 10.1109/jiot.2023.3239945
Negative Selection by Clustering for Contrastive Learning in Human Activity Recognition

Cited by 16 publications (5 citation statements) · References 39 publications
“…Another study introduced CSSHAR [19], which replaced SimCLR's backbone network with a custom transformer encoder to enhance feature representations extracted from unlabeled sensory data in HAR. ClusterCLHAR [21], following the SimCLR framework, proposed a novel contrastive learning approach for HAR by incorporating negative selection through clustering. The experimental results of ClusterCLHAR show competitive performance in both self-supervised and semi-supervised learning for HAR.…”
Section: SimCLR for Human Activity Recognition
confidence: 99%
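The negative-selection idea summarized in the excerpt above can be illustrated with a small sketch: an NT-Xent-style contrastive loss in which samples assigned to the same cluster as the anchor are dropped from the negative set. This is a minimal PyTorch sketch under assumed conventions, not the authors' implementation; the function name `cluster_masked_nt_xent`, the temperature value, and the way cluster ids are supplied are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cluster_masked_nt_xent(z1, z2, cluster_ids, temperature=0.5):
    """NT-Xent-style loss in which samples sharing a cluster id with the
    anchor are removed from the negative set (negative selection by clustering).

    z1, z2      : (N, D) projections of two augmented views of the same batch
    cluster_ids : (N,) cluster assignment per original window (e.g. from k-means)
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)               # (2N, D)
    sim = z @ z.t() / temperature                                    # scaled cosine similarities
    ids = torch.cat([cluster_ids, cluster_ids], dim=0).to(z.device)  # (2N,)

    # Positive pairs: the two augmented views of the same window (i <-> i + N).
    pos_index = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)

    # Negatives: samples from a different cluster than the anchor, excluding the anchor itself.
    diff_cluster = ids.unsqueeze(0) != ids.unsqueeze(1)              # (2N, 2N) bool
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    neg_mask = diff_cluster & ~self_mask

    pos_sim = sim[torch.arange(2 * n, device=z.device), pos_index]   # (2N,)
    neg_sum = (sim.exp() * neg_mask).sum(dim=1)                      # (2N,)
    loss = -(pos_sim - torch.log(pos_sim.exp() + neg_sum))
    return loss.mean()
```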
“…The SimCLR framework [18] has proven to be a powerful transfer-based encoder for learning feature representations from unlabeled sensor data in HAR [19]. Several researchers have proposed novel models, including SimCLRHAR [20], CSSHAR [19], and ClusterCLHAR [21], which are based on the SimCLR framework. These models have demonstrated impressive accuracy in HAR tasks using sensor data from smartphones and smartwatches.…”
Section: Introduction
confidence: 99%
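For context, a SimCLR-style self-supervised step on windowed sensor data pairs two stochastic augmentations of each window, encodes them, projects them with an MLP head, and feeds the projections to a contrastive loss. The sketch below is a generic, hedged illustration of that pipeline; the `Encoder1D` architecture, the `jitter_scale` augmentation, and all dimensions are illustrative assumptions, not any cited model's configuration.

```python
import torch
import torch.nn as nn

class Encoder1D(nn.Module):
    """Toy 1-D CNN encoder for sensor windows of shape (batch, channels, time)."""
    def __init__(self, in_channels=6, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
    def forward(self, x):
        return self.net(x)

class ProjectionHead(nn.Module):
    """SimCLR-style MLP projection head."""
    def __init__(self, feat_dim=128, proj_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, proj_dim))
    def forward(self, h):
        return self.net(h)

def jitter_scale(x, sigma=0.05, scale_range=0.1):
    """Simple sensor augmentation: additive noise plus random per-channel scaling."""
    noise = torch.randn_like(x) * sigma
    scale = 1.0 + (torch.rand(x.size(0), x.size(1), 1, device=x.device) - 0.5) * 2 * scale_range
    return (x + noise) * scale

# One self-supervised step: two augmented views -> encoder -> projections -> contrastive loss.
encoder, head = Encoder1D(), ProjectionHead()
x = torch.randn(32, 6, 128)                      # 32 windows, 6 IMU channels, 128 time steps
z1 = head(encoder(jitter_scale(x)))
z2 = head(encoder(jitter_scale(x)))
# z1, z2 can then be passed to an NT-Xent-style loss such as the sketch above.
```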
“…Kermiche [18] introduced a contrastive Hebbian feedforward learning scheme for Boltzmann machines, which can improve the training of deep neural networks using only feedforward computations, local contrastive Hebbian correlations, and local disturbances. Wang et al. [19] integrated a clustering scheme into the contrastive learning framework for human activity recognition, excluding same-cluster samples from the negative pairs through a newly defined contrastive loss function. Zhu et al. [20] combined reinforcement learning and contrastive learning in a multi-instance reinforcement contrastive learning framework, in which a reinforcement learning-based agent assists the contrastive learning by better selecting discriminative feature sets with inherent semantic relationships.…”
Section: B. Contrastive Learning
confidence: 99%
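The cluster assignments that drive such negative selection could, for example, come from k-means over encoder features, recomputed periodically during training. The helper below is a hypothetical sketch using scikit-learn's KMeans; the function name `assign_clusters`, the number of clusters, and the re-clustering schedule are assumptions, not details from [19].

```python
import torch
from sklearn.cluster import KMeans

@torch.no_grad()
def assign_clusters(encoder, windows, n_clusters=8, device="cpu"):
    """Cluster encoder features with k-means and return a cluster id per window.

    The returned ids can drive negative selection: pairs that fall in the same
    cluster are treated as likely false negatives and dropped from the loss.
    Assumes the encoder already lives on `device`.
    """
    encoder.eval()
    feats = encoder(windows.to(device)).cpu().numpy()
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats)
    return torch.as_tensor(kmeans.labels_, dtype=torch.long)
```

Since full-dataset k-means is relatively cheap compared with encoder training, re-clustering every few epochs (rather than every step) is a plausible trade-off.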
“…However, it is challenging to find an augmentation approach that performs well across all sensor datasets in the HAR field [35]. Furthermore, the paper [52] found that augmented positives in the contrastive loss (see (6)) treat each activity instance as a single class in the latent space, thereby reducing the aggregation of same-class instances. Therefore, to ensure the generalization of our model, we remove both the consistency loss and the augmented positives.…”
Section: Reasons for Two Reductions
confidence: 99%