Abstract: The STOP-BANG questionnaire alone is insufficient to confirm the presence of significant sleep apnea. A maximal score of 8 did not have a high enough positive predictive value to forgo confirmatory sleep testing.
“…Two OSA clinical prediction rules that have been widely used are the STOP-BANG (11) and MNC (6,10,13), likely because of their simplicity and ease of calculation. Consistent with prior studies on these tools, we found that they both indeed have high sensitivity but suffer from low specificity (12,34,35), particularly the STOP-BANG questionnaire. Their positive likelihood ratios (+LRs) were less than 2 in all three of our patient groups, and therefore use of these tools did not result in any substantial change in the probability of having OSA in any particular patient (36).…”
Rationale: More than a million polysomnograms (PSGs) are performed annually in the United States to diagnose obstructive sleep apnea (OSA). Third-party payers now advocate a home sleep test (HST), rather than an in-laboratory PSG, as the diagnostic study for OSA regardless of clinical probability, but the economic benefit of this approach is not known.
Objectives: We determined the diagnostic performance of OSA prediction tools including the newly developed OSUNet, based on an artificial neural network, and performed a cost-minimization analysis when the prediction tools are used to identify patients who should undergo HST.
Methods: The OSUNet was trained to predict the presence of OSA in a derivation group of patients who underwent an in-laboratory PSG (n = 383). Validation group 1 consisted of in-laboratory PSG patients (n = 149). The network was trained further in 33 patients who underwent HST and then was validated in a separate group of 100 HST patients (validation group 2). Likelihood ratios (LRs) were compared with two previously published prediction tools. The total costs from the use of the three prediction tools and the third-party approach within a clinical algorithm were compared.
Measurements and Main Results: The OSUNet had a higher +LR in all groups compared with the STOP-BANG and the modified neck circumference (MNC) prediction tools. The +LRs for STOP-BANG, MNC, and OSUNet in validation group 1 were 1.1 (1.0-1.2), 1.3 (1.1-1.5), and 2.1 (1.4-3.1); in validation group 2 they were 1.4 (1.1-1.7), 1.7 (1.3-2.2), and 3.4 (1.8-6.1), respectively. With an OSA prevalence less than 52%, the use of all three clinical prediction tools resulted in cost savings compared with the third-party approach.
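The practical meaning of these +LRs can be checked with Bayes' theorem in odds form: post-test odds = pre-test odds × +LR. The sketch below is illustrative only; the 50% pre-test probability is an assumed value, not a figure from the study, and only the +LRs come from the reported results.

```python
def post_test_prob(pre_test_prob: float, positive_lr: float) -> float:
    """Convert a pre-test probability and a positive likelihood ratio
    into a post-test probability via Bayes' theorem in odds form:
    post-test odds = pre-test odds * +LR."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * positive_lr
    return post_odds / (1 + post_odds)

# Assumed 50% pre-test probability, with the +LRs reported
# for validation group 2 (STOP-BANG, MNC, OSUNet):
for name, lr in [("STOP-BANG", 1.4), ("MNC", 1.7), ("OSUNet", 3.4)]:
    print(f"{name}: {post_test_prob(0.5, lr):.0%}")
```

Under this assumed pre-test probability, a +LR below 2 moves 50% only to roughly 58-63%, while a +LR of 3.4 moves it to about 77%, which is why +LRs under 2 produce no substantial change in post-test probability.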
Conclusions: The routine requirement of an HST to diagnose OSA regardless of clinical probability is more costly compared with the use of OSA clinical prediction tools that identify patients who should undergo this procedure when OSA is expected to be present in less than half of the population. With OSA prevalence less than 40%, the OSUNet offers the greatest savings, which are substantial when the number of sleep studies done annually is considered.
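The prevalence threshold in the conclusion can be illustrated with a toy cost-minimization model. Everything below is a hypothetical sketch, not the study's actual economic model: the per-patient costs (c_tool, c_hst, c_missed) and the tool's operating characteristics are invented for illustration.

```python
def expected_cost_test_all(c_hst: float) -> float:
    """Hypothetical strategy 1: every referred patient gets an HST."""
    return c_hst

def expected_cost_tool_guided(prevalence: float, sens: float, spec: float,
                              c_tool: float, c_hst: float,
                              c_missed: float) -> float:
    """Hypothetical strategy 2: a prediction tool triages patients;
    only tool-positive patients get an HST, and each missed case
    (tool-negative but diseased) incurs a downstream penalty cost."""
    p_tool_pos = prevalence * sens + (1 - prevalence) * (1 - spec)
    p_missed = prevalence * (1 - sens)
    return c_tool + p_tool_pos * c_hst + p_missed * c_missed

# Invented costs: tool $10, HST $300, missed case $1000.
for prev in (0.3, 0.8):
    guided = expected_cost_tool_guided(prev, sens=0.9, spec=0.5,
                                       c_tool=10, c_hst=300, c_missed=1000)
    print(f"prevalence {prev:.0%}: tool-guided ${guided:.0f} vs test-all $300")
```

In this toy model the tool-guided strategy is cheaper at low prevalence (few patients screen positive and few cases are missed) and more expensive at high prevalence, mirroring the qualitative break-even behavior the abstract reports without reproducing its actual cost figures.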
“…Kunisaki et al (2014) did not find high specificity or PPV even at high STOP-BANG thresholds of ≥7 or 8. This could be due to their use of a different AHI cutoff of 15 or above to diagnose OSA.…”
Section: Discussion (classified as contrasting; confidence: 55%)
“…Another study, by Kunisaki et al (2014), examined STOP-BANG questionnaire performance in a Veterans Affairs unattended sleep study program that used peripheral arterial tonometry (PAT) to diagnose OSA, whereas our study used type III portable equipment, which is more commonly used and has more validation studies supporting its use in clinical practice (Collop et al 2011). Kunisaki et al (2014) did not find high specificity or PPV even at high STOP-BANG thresholds of ≥7 or 8.…”
“…In our study, the best sensitivity and specificity were seen at 4 or 5 positive answers for moderate OSA, and at 4 answers for severe OSA. In a study performed in army veterans [24], raising the STOP-Bang score threshold from 3 to 5 led to a slight decrease in sensitivity and an increase in specificity and PPV in screening for moderate OSA. Cowan et al found that for the AHI > 5 and AHI > 15 cut-offs, the best overall accuracy and PPV were at a STOP-Bang score of 3 or 6 [40].…”
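The threshold trade-off described above (raising the cut-off score lowers sensitivity while raising specificity and PPV) can be reproduced on any scored dataset. A minimal sketch follows; the patient scores and OSA labels are a tiny invented example, not data from any of the cited studies.

```python
def screening_metrics(scores, labels, threshold):
    """Sensitivity, specificity, and PPV when a score >= threshold
    is treated as a positive screen and label 1 marks true OSA."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

# Invented example: 8 patients with STOP-Bang-like scores and OSA labels.
scores = [1, 2, 3, 4, 5, 6, 7, 8]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
for t in (3, 5):
    sens, spec, ppv = screening_metrics(scores, labels, t)
    print(f"threshold >= {t}: sens={sens:.2f} spec={spec:.2f} ppv={ppv:.2f}")
```

On this toy data, moving the threshold from 3 to 5 drops sensitivity while raising specificity and PPV, the same qualitative pattern the excerpt describes.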
STOP-Bang showed good measurement properties, supporting its further use in OSA screening of commercial drivers. Int J Occup Med Environ Health 2016;30(5):751-761.