Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022
DOI: 10.24963/ijcai.2022/537

Self-Supervised Learning with Attention-based Latent Signal Augmentation for Sleep Staging with Limited Labeled Data

Abstract: Infectious diseases have been recognized as major public health concerns for decades. Close contact discovery is playing an indispensable role in preventing epidemic transmission. In this light, we study the continuous exposure search problem: Given a collection of moving objects and a collection of moving queries, we continuously discover all objects that have been directly and indirectly exposed to at least one query over a period of time. Our problem targets a variety of applications, including but not lim…

Cited by 7 publications (11 citation statements). References 2 publications (3 reference statements).

Citation statements, ordered by relevance:
“…Also, TS-TCC proposed temporal and contextual contrasting approaches to learn instance-wise representations of the sleep EEG data [8]. In addition, SSLAPP developed a contrastive learning approach with attention-based augmentations in the embedding space to add more positive pairs [26]. Last, CoSleep [14] and SleepECL [27] are two further contrastive methods that exploit information, e.g., inter-epoch dependency and frequency-domain views, from EEG data to obtain more positive pairs for contrastive learning.…”
Section: Self-Supervised Learning for Sleep Staging (mentioning)
confidence: 99%
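The statement above summarizes the core idea the citing papers attribute to SSLAPP: augmenting in the embedding (latent) space rather than on the raw signal, using attention to synthesize additional positive pairs for contrastive learning. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the names (LatentAttentionAugment, nt_xent) and all hyperparameters are hypothetical.

    # Hedged sketch: attention-based augmentation in the embedding space,
    # producing extra positive pairs for a standard NT-Xent contrastive loss.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LatentAttentionAugment(nn.Module):
        """Augments each embedding by attending over the other embeddings
        in the batch (a stand-in for attention-based latent augmentation)."""
        def __init__(self, dim: int, heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            zs = z.unsqueeze(0)             # (batch, dim) -> (1, batch, dim)
            out, _ = self.attn(zs, zs, zs)  # attention-mixed embeddings
            return out.squeeze(0)           # back to (batch, dim)

    def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
        """NT-Xent loss over a batch of positive pairs (z1[i], z2[i])."""
        z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2B, dim), unit norm
        sim = z @ z.t() / tau                         # cosine similarities
        sim.fill_diagonal_(float("-inf"))             # mask self-similarity
        n = z.size(0)
        targets = torch.arange(n, device=z.device).roll(n // 2)  # i <-> i+B
        return F.cross_entropy(sim, targets)

    # Usage: embeddings from any EEG-epoch encoder; the attention layer
    # supplies the second (augmented) view, adding positive pairs in latent space.
    encoder_out = torch.randn(32, 128)       # (batch, embedding dim)
    augment = LatentAttentionAugment(dim=128)
    loss = nt_xent(encoder_out, augment(encoder_out))

The design point stressed in these citation statements is that the augmentation operates on embeddings, so no hand-crafted signal-level transform (jittering, masking, etc.) is needed to create the second view.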
“…We compare the performance of the adopted pretrained SSC models against state-of-the-art self-supervised methods proposed specifically for the sleep stage classification problem. We use the reported results of SleepDPC [25], CoSleep [14], and SSLAPP [26] on the Sleep-EDF dataset. We compare these methods against existing SSC models with the best-performing SSL method.…”
Section: Comparison With Baselines (mentioning)
confidence: 99%
“…Also, TS-TCC proposed temporal and contextual contrasting approaches to learn instance-wise representations of the sleep EEG data [10]. In addition, SSLAPP developed a contrastive learning approach with attention-based augmentations in the embedding space to add more positive pairs [26]. Last, CoSleep [11] and SleepECL [27] are two further contrastive methods that exploit information, e.g., inter-epoch dependency and frequency-domain views, from EEG data to obtain more positive pairs for contrastive learning.…”
Section: Self-Supervised Learning for Sleep Staging (mentioning)
confidence: 99%
“…We compare the performance of the adopted pretrained SSC models against state-of-the-art self-supervised methods proposed specifically for the sleep stage classification problem. We use the reported results of SleepDPC [25] and SSLAPP [26] on the Sleep-EDF dataset. To have a fair evaluation, we re-implemented these methods to be consistent with our experimental settings.…”
Section: Comparison With Baselines (mentioning)
confidence: 99%
“…In particular, DPC was first used to train two encoders from scratch, one for the time view and the other for the frequency view; then, contrastive multiview [142] was used to refine the weights of the two encoders. Finally, Lee et al. [140] presented SSLAPP, a completely different approach based on the combination of adversarial representation learning and pairwise representation learning, achieving performance superior to CoSleep, SleepDPC, and other fully supervised strategies.…”
Section: B. Self-Supervised Learning on EEG (mentioning)
confidence: 99%
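Since this last statement concretely describes a two-encoder, time/frequency multiview contrastive setup, a short hedged sketch may help. It is an illustration under assumed shapes (30 s EEG epochs at 100 Hz), not the cited implementation; the encoder architectures and all names are hypothetical.

    # Hedged sketch: two encoders over a time view and a frequency view of the
    # same EEG epoch, pulled together by a cross-view InfoNCE loss.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    time_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3000, 128))  # raw-signal view
    freq_encoder = nn.Sequential(nn.Flatten(), nn.Linear(1501, 128))  # spectrum view

    def multiview_infonce(zt: torch.Tensor, zf: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
        zt, zf = F.normalize(zt, dim=1), F.normalize(zf, dim=1)
        logits = zt @ zf.t() / tau                 # (B, B) cross-view similarities
        targets = torch.arange(zt.size(0))         # positives on the diagonal
        return F.cross_entropy(logits, targets)

    x = torch.randn(16, 1, 3000)                   # batch of 30 s epochs @ 100 Hz
    x_freq = torch.fft.rfft(x, dim=-1).abs()       # magnitude spectrum, 1501 bins
    loss = multiview_infonce(time_encoder(x), freq_encoder(x_freq))

In this reading, the contrastive-multiview refinement step corresponds to training (or fine-tuning) both encoders under the cross-view loss so that time-domain and frequency-domain embeddings of the same epoch agree.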