2021
DOI: 10.1109/jbhi.2021.3078127

Estimation of Apnea-Hypopnea Index Using Deep Learning On 3-D Craniofacial Scans

Abstract: General rights — Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners, and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. Users may download and print one copy of any publication from the public portal for the purpose of private study or research. You may not further distribute the material or use it for any profit-making activity or commer…

Cited by 21 publications (11 citation statements)
References 52 publications
“…The scans were subsequently rendered and rotated in 45-degree increments to generate 2D images and depth maps, which were fed into a convolutional neural network to predict the AHI values from the PSG. The mean absolute error of the proposed model was 11.38 events/hour, with an accuracy of 67% (39). The drawback of this prediction method is that it requires acquiring each patient's facial scan, which must then be calibrated and uploaded, making it difficult to collect patient data.…”
Section: Discussion
confidence: 99%
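The render-and-rotate step quoted above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: it assumes an orthographic projection and a simple nearest-depth rasterizer, and uses random points as a stand-in for a 3-D face scan.

```python
import numpy as np

def yaw_rotation(deg: float) -> np.ndarray:
    """Rotation matrix about the vertical (y) axis."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def render_depth_map(points: np.ndarray, size: int = 64) -> np.ndarray:
    """Orthographic depth map: bin x/y onto a grid, keep the nearest z per pixel."""
    depth = np.full((size, size), np.inf)
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    ij = ((xy - mins) / (maxs - mins + 1e-9) * (size - 1)).astype(int)
    for (i, j), z in zip(ij, points[:, 2]):
        depth[j, i] = min(depth[j, i], z)
    depth[np.isinf(depth)] = 0.0            # empty pixels get a background value
    return depth

# Render the scan at 45-degree increments, i.e. 8 views around the head.
rng = np.random.default_rng(0)
scan = rng.normal(size=(500, 3))            # hypothetical stand-in for a face scan
views = [render_depth_map(scan @ yaw_rotation(a).T) for a in range(0, 360, 45)]
print(len(views), views[0].shape)           # → 8 (64, 64)
```

In the cited work these per-view images and depth maps are the CNN's input; here they are only produced, since the network itself is not specified in the quote.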
“…He et al (21) reported the best results thus far, using frontal, 45-degree lateral, and 90-degree lateral images to predict OSA with ConvNet neural network models, attaining 91–95% sensitivity and 73–80% specificity. Hanif et al (39) detected predefined facial landmarks and aligned the scans in 3D space. The scans were subsequently rendered and rotated in 45-degree increments to generate 2D images and depth maps that were fed into a convolutional neural network to predict the AHI values from the PSG.…”
Section: Discussion
confidence: 99%
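The landmark-based alignment mentioned above is commonly done with a rigid (rotation + translation) fit such as the Kabsch algorithm; the sketch below shows that approach under the assumption that corresponding landmarks and a template are given — the quote does not state which alignment method Hanif et al actually used.

```python
import numpy as np

def rigid_align(landmarks: np.ndarray, template: np.ndarray):
    """Kabsch algorithm: best rigid R, t mapping `landmarks` onto `template`."""
    mu_l, mu_t = landmarks.mean(axis=0), template.mean(axis=0)
    H = (landmarks - mu_l).T @ (template - mu_t)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_l
    return R, t

# Usage: recover the pose of a rotated, translated copy of a landmark set.
rng = np.random.default_rng(1)
template = rng.normal(size=(5, 3))                 # e.g. 5 predefined facial landmarks
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = template @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(moved, template)
aligned = moved @ R.T + t
print(np.allclose(aligned, template))              # → True
```

Normalizing scans into a common 3D frame this way is what makes the subsequent fixed 45-degree view rendering comparable across patients.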
“…In addition, a relevant study developed machine learning approaches for OSA severity classification based on scanned craniofacial feature images, but the accuracy of those approaches was 67% for predicting the risk of moderate-to-severe OSA (57). Variations in the obtained craniofacial images (due to breathing movements or muscle tone) may have affected the accuracy. Also, OSA presentation might not be caused entirely by craniofacial factors.…”
Section: Discussion
confidence: 99%
“…The Stanford Technology Analytics and Genomics in Sleep (STAGES) study, described previously [38, 39], collected data from patients across 11 different sleep clinics between 2018 and 2020. Briefly, all participants were patients who attended an appointment with a physician at a sleep clinic and completed an overnight polysomnography (PSG) study, in addition to completing the Alliance Sleep Questionnaire (ASQ) and providing a blood sample.…”
Section: Methods
confidence: 99%