2022
DOI: 10.48550/arXiv.2208.06991
Preprint

Towards Interpretable Sleep Stage Classification Using Cross-Modal Transformers

Abstract: Accurate sleep stage classification is significant for sleep health assessment. In recent years, several deep-learning- and machine-learning-based sleep staging algorithms have been developed, and they have achieved performance on par with human annotation. Despite this improved performance, a limitation of most deep-learning-based algorithms is their black-box behavior, which has limited their use in clinical settings. Here, we propose Cross-Modal Transformers, a transformer-based method for sleep st…

Cited by 9 publications (20 citation statements)
References 31 publications
“…All automatic sleep-staging algorithms, including the most advanced deep learning algorithms, show that the sleep-staging results of the Wake and REM stages in the 5-stage classification are the most accurate, almost reaching clinical levels. However, the accuracy of the N1 stage is much lower than the average accuracy of other stages [21]. Most research results show that the average accuracy of the N1 stage is only about 40% [22], which largely hinders the pace of automatic sleep staging replacing manual sleep staging.…”
Section: Results
Mentioning (confidence: 97%)
“…However, the high parameter counts of 3.7 M and the long input sequences hinder deployment and real-time inference on mobile devices. Compared to two other relatively lightweight one-to-one CNN Transformer models (Pradeepkumar et al, 2022;Yao and Liu, 2022), Micro SleepNet achieves better staging performance with an order of magnitude fewer parameters. This result indicates that introducing Transformer structures does not significantly improve CNN model performance.…”
Section: Comparison With Baselines
Mentioning (confidence: 98%)
“…Then, multiple FC layers for sleep staging are utilized. Another transformer-based model is composed of a multi-scale CNN block with intra-modal and cross-modal attention [167]. The main difference (besides the model architecture) between this work and ref.…”
Section: Transformer-based Models
Mentioning (confidence: 99%)
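
The multi-scale CNN block mentioned in the statement above is a common pattern: parallel 1-D convolutions with different kernel widths extract features at several temporal scales from a raw signal epoch before attention is applied. The sketch below illustrates the idea in PyTorch; the class name, kernel sizes, and channel counts are illustrative assumptions, not the configuration used in [167].

```python
# Hypothetical sketch of a multi-scale 1-D CNN block for epoch-level
# feature extraction; kernel sizes and channel counts are illustrative
# assumptions, not the configuration from [167].
import torch
import torch.nn as nn

class MultiScaleCNNBlock(nn.Module):
    def __init__(self, in_channels: int = 1, out_channels: int = 32):
        super().__init__()
        # Parallel branches with small, medium, and large receptive
        # fields capture waveform features at different temporal scales.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_channels, out_channels, kernel_size=k,
                          padding=k // 2),
                nn.BatchNorm1d(out_channels),
                nn.ReLU(),
            )
            for k in (3, 7, 15)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples); branch outputs are
        # concatenated along the channel dimension.
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# Example: a 30 s single-channel EEG epoch sampled at 100 Hz.
feats = MultiScaleCNNBlock()(torch.randn(4, 1, 3000))
print(feats.shape)  # torch.Size([4, 96, 3000])
```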
“…The authors of [167] proposed a cross-modal transformer, which enables them to use the attention to learn and interpret: (1) intra-modal relationships, (2) cross-modal relationships, and (3) inter-epoch relationships. The intra-modal relations are similar to [166], with the difference that they introduce the CLS_c token, as discussed in Section 4.3.…”
Section: Interpretability
Mentioning (confidence: 99%)
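
The cross-modal attention with a classification (CLS) token described in this statement can be pictured as one modality's tokens, plus a learnable CLS token, querying another modality's tokens, so the resulting attention map shows which cross-modal tokens each prediction relies on. The PyTorch sketch below is a minimal illustration under that reading; the class name, dimensions, and the EEG/EOG pairing are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of cross-modal attention with a learnable CLS
# token, in the spirit of the mechanism described in [167]; names and
# dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        # A learnable classification token prepended to the query side.
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads,
                                          batch_first=True)

    def forward(self, eeg: torch.Tensor, eog: torch.Tensor):
        # eeg, eog: (batch, seq_len, dim) token sequences produced by
        # the two per-modality encoders.
        cls = self.cls.expand(eeg.size(0), -1, -1)
        # Queries come from EEG tokens (plus CLS); keys/values come
        # from the other modality, so the attention weights expose
        # which EOG tokens each EEG token attends to.
        q = torch.cat([cls, eeg], dim=1)
        out, weights = self.attn(q, eog, eog,
                                 need_weights=True,
                                 average_attn_weights=True)
        # weights: (batch, 1 + seq_len, seq_len), averaged over heads;
        # this map can be visualized for interpretability.
        return out, weights

fused, attn_map = CrossModalAttention()(torch.randn(2, 10, 64),
                                        torch.randn(2, 10, 64))
```

Visualizing `attn_map` per epoch is one plausible way such a model exposes the cross-modal relationships behind its staging decisions.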