2017
DOI: 10.1101/206540
Preprint

Explainable machine learning predictions to help anesthesiologists prevent hypoxemia during surgery

One Sentence Summary: We present Prescience, a new machine-learning-based system that provides interpretable real-time predictions to help anesthesiologists prevent hypoxemia during surgery.

Abstract: Hypoxemia causes serious patient harm, and while anesthesiologists strive to avoid hypoxemia during surgery, they are not reliably able to predict which patients will have intraoperative hypoxemia. Using minute-by-minute EMR data from fifty thousand surgeries, we developed and tested a machine le…

Cited by 81 publications (96 citation statements)
References 20 publications
“…This suggests that model-agnostic, instance-level explanation approaches based on feature influence methods may be a viable approach to explaining model predictions in a way that is both comprehensible and useful to healthcare providers. Although other studies have utilized these approaches to explain predictive models in healthcare, [4,31,40] to the best of our knowledge this is the first study to verify that these explanations would be positively received by healthcare providers.…”
Section: Discussion
confidence: 87%
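The feature-influence explanations discussed in the statement above rest on additive feature attribution (Shapley values): each feature receives a share of the difference between the model's prediction and a baseline prediction. As a minimal sketch of that principle — using a hypothetical toy risk function and illustrative feature names, not the Prescience model or the authors' code — exact Shapley values can be computed by brute force over feature subsets and checked for local accuracy:

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical toy risk score over three binary features
    # (spo2_low, high_bmi, asa_3plus) -- names are illustrative only.
    return 0.05 + 0.30 * x[0] + 0.10 * x[1] + 0.15 * x[0] * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all subsets of other features."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Evaluate f with feature i toggled on/off; features outside
                # the subset are held at their baseline values.
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

x = (1, 1, 1)
baseline = (0, 0, 0)
phi = shapley_values(model, x, baseline)
# Local accuracy: attributions sum to f(x) - f(baseline).
```

This brute-force enumeration is exponential in the number of features; practical tools (e.g. SHAP's TreeExplainer) exploit model structure to compute the same attributions efficiently.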
“…Critical care providers should be comfortable with the risk representation format. Risk information in feature influence explanations has been previously reported in terms of odds and probability, [31, 32] but provider preferences on these representations are unknown. Visual representations of risk information may facilitate comprehension of risk [33].…”
Section: Methods
confidence: 99%
“…Explaining predictions from tree models is particularly important in medical applications, where the patterns a model uncovers can be more important than the model's prediction performance [15,16]. To demonstrate TreeExplainer's value, we use three medical datasets, which represent three types of loss functions: 1) Mortality, a dataset with 14,407 individuals and 79 features based on the NHANES I Epidemiologic Followup Study [17], where we model the risk of death over twenty years of followup.…”
Section: Tools For Interpreting Global Model Structure Based On Many…
confidence: 99%