2023
DOI: 10.1109/tnsre.2023.3245285
Self-Supervised Learning for Label-Efficient Sleep Stage Classification: A Comprehensive Evaluation

Abstract: The past few years have witnessed remarkable advances in deep learning for EEG-based sleep stage classification (SSC). However, the success of these models hinges on a massive amount of labeled training data, which limits their applicability in real-world scenarios. In such scenarios, sleep labs can generate a massive amount of data, but labeling it can be expensive and time-consuming. Recently, the self-supervised learning (SSL) paradigm has emerged as one of the most successful techniques to …

Cited by 13 publications (12 citation statements)
References: 37 publications
“…In summary, self-supervised learning is a more structured form of unsupervised learning in which the data itself is employed to generate supervisory signals, whereas unsupervised learning is a broader category encompassing various learning methods that do not rely on external labels or targets.[22] In recent years, self-supervised learning has made significant progress in various fields, such as computer vision, natural language processing, and speech recognition.[23] Contrastive learning is an important method in self-supervised learning that learns feature representations by comparing the similarity between samples.…”
Section: Self-supervised Learning
Mentioning confidence: 99%
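The statement above describes contrastive learning as comparing similarity between samples. Below is a minimal PyTorch sketch of a standard instance-wise contrastive (NT-Xent / InfoNCE) objective in the style popularized by SimCLR; the function name, shapes, and temperature value are illustrative assumptions, not code from the cited papers.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same batch.
    Illustrative sketch of the NT-Xent loss, not the cited papers' implementation."""
    batch = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, dim), unit-norm rows
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # a sample is not its own positive
    # the positive for row i is its other augmented view (offset by the batch size)
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

Each row's cross-entropy term pulls a sample toward its other augmented view and pushes it away from the remaining 2B−2 samples in the batch, which is the "comparing similarity between samples" mechanism the quote refers to.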
“…Therefore, self-supervised learning can be considered a subset of unsupervised learning, as it also involves the use of unlabeled data. In summary, self-supervised learning is a more structured form of unsupervised learning in which the data itself is employed to generate supervisory signals, whereas unsupervised learning is a broader category encompassing various learning methods that do not rely on external labels or targets.[22] In recent years, self-supervised learning has made significant progress in various fields, such as computer vision, natural language processing, and speech recognition.[23] …”
Section: Related Work
Mentioning confidence: 99%
“…TS-SD (Shi et al., 2021) trains a model with a triplet similarity discrimination task, where the goal is to identify which of two time series (TS) is more similar to a given TS, using DTW to define similarity. TS-TCC (Eldele et al., 2021) proposes a temporal contrastive loss by making the augmentations predict each other's future, and CA-TCC (Eldele et al., 2023), the extension of TS-TCC to the semi-supervised setting, adopts the same loss. TS2Vec (Yue et al., 2022) splits a TS into two subseries and defines a hierarchical contrastive loss in both instance and temporal dimensions.…”
Section: Related Work
Mentioning confidence: 99%
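To make the cross-view temporal contrasting idea attributed to TS-TCC above concrete, here is a rough sketch in which a context vector summarizing one augmented view predicts future timestep features of the other view, scored contrastively against the same timestep of other samples in the batch. All class and variable names are assumptions for illustration; this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalContrast(nn.Module):
    """Cross-view future prediction scored contrastively (TS-TCC-style sketch)."""

    def __init__(self, dim, horizon):
        super().__init__()
        # one linear predictor per future step: context -> predicted feature
        self.predictors = nn.ModuleList([nn.Linear(dim, dim) for _ in range(horizon)])
        self.horizon = horizon

    def forward(self, context, future):
        # context: (B, dim) summary of one view up to time t
        # future:  (B, horizon, dim) encoder features of the *other* view at t+1..t+horizon
        batch = context.size(0)
        loss = 0.0
        for k in range(self.horizon):
            pred = self.predictors[k](context)            # (B, dim) predicted step-k feature
            logits = pred @ future[:, k, :].t()           # (B, B): true future on the diagonal
            targets = torch.arange(batch, device=logits.device)
            loss = loss + F.cross_entropy(logits, targets)
        return loss / self.horizon
```

The diagonal entries play the role of positives (a sample's own future), while the off-diagonal entries act as negatives drawn from other samples at the same timestep, which is how this differs from a conventional positive/negative-pair contrastive loss over whole instances.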
“…We conduct experiments on semi-supervised classification tasks by applying SoftCLT to TS-TCC (Eldele et al., 2021) and its extension CA-TCC (Eldele et al., 2023), the methods that incorporate CL into self- and semi-supervised learning, respectively. As baseline methods, we consider SSL-ECG (Sarkar & Etemad, 2020), CPC (Oord et al., 2018), SimCLR (Chen et al., 2020), and TS-TCC (Eldele et al., 2021) for self-supervised learning, and Mean-Teacher (Tarvainen & Valpola, 2017), DivideMix, SemiTime (Fan et al., 2021), FixMatch (Sohn et al., 2020), and CA-TCC (Eldele et al., 2023) for semi-supervised learning. Note that both TS-TCC and CA-TCC perform instance-wise and temporal contrasting; however, their temporal contrasting is achieved by predicting one view's future from another, which differs from the conventional contrastive loss with positive and negative pairs.…”
Section: Semi-supervised Classification
Mentioning confidence: 99%