2021
DOI: 10.2147/nss.s306808
It is All in the Wrist: Wearable Sleep Staging in a Clinical Population versus Reference Polysomnography

Abstract: Purpose: There is great interest in unobtrusive long-term sleep measurements using wearable devices based on reflective photoplethysmography (PPG). Unfortunately, consumer devices are not validated in patient populations and therefore not suitable for clinical use. Several sleep staging algorithms have been developed and validated based on ECG-signals. However, translation from these techniques to data derived by wearable PPG is not trivial, and requires the differences between sensing modalities t…

Cited by 44 publications (28 citation statements) | References 37 publications
“…On the held-out test set of 204 MESA patients, SleepPPG-Net scored a κ of 0.75 against 0.66 for the BM-FE and 0.69 for the BM-DTS approaches. SleepPPG-Net's performance is also significantly (p < 0.001, two-sample t-test) higher than the currently published SOTA results for sleep staging from PPG, which stand at a κ of 0.66 [22,23], and significantly (p = 0.02, two-sample t-test) higher than the current SOTA results for sleep staging from ECG, which are reported at a κ of 0.69 [20]. Figure 9 presents an example of the hypnograms generated by BM-FE, BM-DTS and SleepPPG-Net for a single patient.…”
Section: Discussion (contrasting)
confidence: 76%
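The κ values compared above are Cohen's kappa, chance-corrected agreement between an algorithm's epoch-by-epoch hypnogram and the reference scoring. As a minimal sketch of how that agreement is computed (the stage coding and the two short hypnograms below are illustrative, not taken from the cited papers):

```python
# Cohen's kappa between two hypnograms.
# Stage coding here is illustrative (e.g., 0=Wake, 1=Light, 2=Deep, 3=REM).
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[s] / n * cb[s] / n for s in ca)           # agreement expected by chance
    return (po - pe) / (1 - pe)

ref  = [0, 0, 1, 1, 2, 2, 3, 3]   # reference (PSG) stages, one per 30-s epoch
pred = [0, 0, 1, 1, 2, 2, 3, 2]   # hypothetical algorithm output
print(round(cohens_kappa(ref, pred), 3))  # 0.833
```

Because pe discounts agreement that would occur by chance given each rater's stage distribution, κ is a stricter score than raw accuracy, which is why it is the conventional metric for sleep staging.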
“…Most works that use PPG do so in the context of transfer learning (TL), where models are trained on a large database of heart rate variability (HRV) measures and then fine-tuned on a smaller database of pulse rate variability (PRV) measures derived from the IBIs detected in the PPG. These works report κ performance approaching 0.66 [22,23]. Sleep staging from the raw PPG is a relatively novel approach.…”
Section: Introduction (mentioning)
confidence: 99%
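The PRV measures mentioned above are computed from the sequence of inter-beat intervals (IBIs) detected in the PPG pulse train. A minimal sketch of two standard time-domain variability measures, SDNN and RMSSD (the interval values below are made up for illustration, and the cited works use richer feature sets):

```python
import numpy as np

# Hypothetical inter-beat intervals (seconds) detected from a PPG pulse train
ibis = np.array([0.80, 0.82, 0.78, 0.81, 0.79])

sdnn  = ibis.std(ddof=1)                       # overall variability of the intervals
rmssd = np.sqrt(np.mean(np.diff(ibis) ** 2))   # beat-to-beat (successive-difference) variability
print(round(sdnn * 1000, 1), "ms,", round(rmssd * 1000, 1), "ms")
```

Features like these are what a TL pipeline can share between ECG-derived HRV (pretraining) and PPG-derived PRV (fine-tuning), since both are functions of the inter-beat interval series rather than of the raw waveform.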
“…Since the EEG measurement setup is quite complex and requires expert assistance for both in-laboratory and home measurements, alternative signal sources for sleep staging, such as electrocardiography (ECG) and body movements [13], [14], have been studied. In addition, automatic deep learning-based sleep staging from photoplethysmography (PPG) has been performed with promising results [15]–[17]. In particular, REM sleep has been detected from PPG with an accuracy of 87% using deep learning [16].…”
Section: Introduction (mentioning)
confidence: 99%
“…All approaches that used both ACC and PPG were feature-based [14]–[20]. The best performing of these, by Wulterkens et al., reported performance similar to that of the proposed approach: 𝜅 = 0.62 ± 0.12 [19], while all other approaches performed substantially worse. One feature-based approach reported a performance increase from 𝜅 = 0.55 to 𝜅 = 0.65 by pretraining its classifier on features extracted from ECG [21].…”
Section: Benchmark (mentioning)
confidence: 93%
“…Sleep is a dynamic process with a cyclic pattern, alternating between NREM and REM sleep with a period of approximately 90–110 minutes. The most recent and best-performing sleep stage classification algorithms are temporal models based either on recurrent frameworks, e.g., long short-term memory (LSTM) [9], [19], [25] and gated recurrent units (GRU) [22], [26], or on convolutional neural network (CNN) architectures, e.g., dilated convolutions [27] and the residual U-Net architecture [28]. While there is consensus that including contextual information from neighboring epochs increases performance, the segment size these temporal models are trained on varies considerably between studies: from minutes [26], [28], to hours [22], to the entire recording length [27].…”
Section: Introduction (mentioning)
confidence: 99%
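The "segment size" the quote refers to is the number of consecutive 30-s epochs a temporal model sees at once. A minimal sketch of cutting a per-epoch feature matrix into fixed-length, overlapping training segments (the feature matrix and the 5-epoch segment length are arbitrary choices for illustration, not any cited study's configuration):

```python
import numpy as np

# Hypothetical per-epoch features: 12 epochs x 4 features per epoch
feats = np.arange(48, dtype=float).reshape(12, 4)

seg_len = 5  # context window of 5 epochs (illustrative; studies range from minutes to the whole night)

# Slide a window along the epoch axis, then reorder to (segments, epochs, features)
segs = np.lib.stride_tricks.sliding_window_view(feats, seg_len, axis=0)
segs = np.moveaxis(segs, -1, 1)
print(segs.shape)  # (8, 5, 4): 12 - 5 + 1 segments of 5 epochs each
```

Longer segments let a recurrent or dilated-convolutional model exploit the ~90-minute sleep-cycle structure, at the cost of memory and of needing recordings long enough to fill each segment.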