2015
DOI: 10.1378/chest.14-0929

Misclassification of OSA Severity With Automated Scoring of Home Sleep Recordings

Abstract: BACKGROUND: The advent of home sleep testing has allowed for the development of an ambulatory care model for OSA that most health-care providers can easily deploy. Although automated algorithms that accompany home sleep monitors can identify and classify disordered breathing events, it is unclear whether manual scoring followed by expert review of home sleep recordings is of any value. Thus, this study examined the agreement between automated and manual scoring of home sleep recordings.

Cited by 52 publications (29 citation statements) | References 30 publications (27 reference statements)
“… 10 21 Our results show that RDI MAN agreed slightly better than RDI RAW with the laboratory AHI but, again, the differences were small compared with the confidence limits. Our Bland-Altman plots did not show the large differences between the manually scored and the computer-scored RDI for the ApneaLink Plus that were reported in a recent paper by Aurora et al. 22 Unlike the 2014 paper from Masa et al, 23 we did not see much difference between the receiver–operator characteristic curves for mild, moderate and severe sleep apnoea when comparing the laboratory polysomnography with home testing performed on a different night. However, they had studied a much larger group of participants than we did and they used the ApneaLink, which records only nasal airflow, rather than the four-channel ApneaLink Plus.…”
Section: Discussion (contrasting)
confidence: 83%
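The Bland-Altman comparison discussed in the excerpt above reduces to a bias (mean difference) and 95% limits of agreement. A minimal sketch follows; the per-patient AHI values and the `bland_altman` helper are hypothetical illustrations, not data or code from either paper.

```python
import numpy as np

def bland_altman(manual, automated):
    """Bland-Altman agreement statistics for paired AHI measurements.

    Returns the bias (mean of manual - automated) and the 95% limits
    of agreement (bias +/- 1.96 * SD of the differences).
    """
    manual = np.asarray(manual, dtype=float)
    automated = np.asarray(automated, dtype=float)
    diff = manual - automated
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical per-patient AHI values (events/hour), for illustration only
manual_ahi = [5.2, 14.8, 31.0, 22.5, 9.7]
auto_ahi = [4.1, 12.9, 27.6, 20.0, 8.3]
bias, (lo, hi) = bland_altman(manual_ahi, auto_ahi)
```

A positive bias with narrow limits would be consistent with the systematic underestimation by automated scoring that the excerpts describe; wide limits relative to the bias indicate that individual patients can still disagree substantially.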
“… 10 21 Our results show that RDI MAN agreed slightly better than RDI RAW with the laboratory AHI but, again, the differences were small compared with the confidence limits. Our Bland-Altman plots did not show the large differences between the manually scored and the computer-scored RDI for the ApneaLink Plus that were reported in a recent paper by Aurora et al 22 Unlike the 2014 paper from Masa et al , 23 we did not see much difference between the receiver–operator characteristic curves for mild, moderate and severe sleep apnoea when comparing the laboratory polysomnography with home testing performed on a different night. However, they had studied a much larger group of participants than we did and they used the ApneaLink, which records only nasal airflow, rather than the four-channel ApneaLink Plus.…”
Section: Discussioncontrasting
confidence: 83%
“…Two prior studies examined the agreement between automatic and manual scoring of respiratory events in HSTs. 9,24 Both studies suggest that the agreement between automatic and manual scoring is modest and that automatic scoring consistently underestimated the AHI derived from manual scoring of HSTs. Neither of these prior publications reported the agreement when different scoring software is used on the same studies, which is a strength of our current study given that this would be the real-world situation in collaborative research involving international sleep centers.…”
(mentioning)
confidence: 99%
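The misclassification risk in the article's title follows directly from the underestimation described above: a systematic downward shift in AHI can push a patient across a severity boundary. A minimal sketch using the conventional AHI severity bands (<5 normal, 5 to <15 mild, 15 to <30 moderate, >=30 severe); the example AHI pair is hypothetical.

```python
def osa_severity(ahi):
    """Map an AHI (events/hour) to the conventional OSA severity bands:
    <5 normal, 5-<15 mild, 15-<30 moderate, >=30 severe."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

# Hypothetical example: a modest automated underestimate near the
# mild/moderate boundary changes the assigned severity category.
manual_ahi, auto_ahi = 16.2, 13.8
misclassified = osa_severity(manual_ahi) != osa_severity(auto_ahi)
```

Even a small mean bias matters most for patients whose AHI sits near a threshold, which is why category-level agreement can be worse than the raw AHI agreement suggests.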
“…From the patient standpoint, new non-contact sensing technologies such as pressure-sensitive LCs have the potential to make sleep testing more comfortable and seamless. On the other hand, automated scoring algorithms embedded in the diagnostic device have allowed fast diagnosis of OSA [5]. In this section we provide a brief review of research works that relate to the system and methods we present in this paper.…”
Section: Related Work (mentioning)
confidence: 99%