2022
DOI: 10.48550/arxiv.2211.03282
Preprint

Performance and utility trade-off in interpretable sleep staging

Abstract: Recent advances in deep learning have led to models approaching human-level accuracy. However, healthcare remains an area lacking widespread adoption. The safety-critical nature of healthcare results in a natural reticence to put these black-box deep learning models into practice. This paper explores interpretable methods for sleep staging, a clinical decision support task that is an essential step in diagnosing sleep disorders. Clinical sleep staging is an arduous process requiri…

Citation types: 0 supporting, 6 mentioning, 0 contrasting

Cited by 1 publication (6 citation statements)
References 36 publications

“…For this reason, the accuracies of the predicted infection risk may seem low compared to predictive models for other disease domains. For example, other clinical domains like sleep staging [ 28 ] or epilepsy [ 59 ] have shown extremely high accuracy with similar methodologies. However, this is because such models employ much larger sample sizes with many more granular features than were included in the present study’s rare pediatric leukemia dataset.…”
Section: Results (mentioning)
confidence: 99%
“…Interpretable and/or explainable methods make it easier to see why the model is making a particular prediction. It is possible that less interpretable black box methods might make better predictions [ 28 ]. However, black box methods that employ large neural networks need very large sample sizes, often more than 10,000 patients [ 59 , 78 ].…”
Section: Results (mentioning)
confidence: 99%