2022
DOI: 10.1007/s12028-022-01504-4

Demystifying the Black Box: The Importance of Interpretability of Predictive Models in Neurocritical Care

Abstract: Neurocritical care patients are a complex patient population, and to aid clinical decision-making, many models and scoring systems have previously been developed. More recently, techniques from the field of machine learning have been applied to neurocritical care patient data to develop models with high levels of predictive accuracy. However, although these recent models appear clinically promising, their interpretability has often not been considered and they tend to be black box models, making it extremely d…

Cited by 16 publications (10 citation statements) · References 44 publications

“…Accurate risk prediction models have the potential to improve healthcare by directing timely interventions to patients who are most likely to benefit. However, prediction models that cannot scale adequately to large databases or cannot be interpreted and explained will struggle to gain acceptance in clinical practice [Moss et al, 2022]. The current study advances the oblique RSF, an accurate risk prediction model, towards being accurate, scalable, and interpretable.…”
Section: Implications of Our Results (citation type: mentioning)
confidence: 98%
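For context on the model class this statement refers to, below is a minimal sketch of fitting and evaluating a random survival forest (RSF) with the scikit-survival package. It is an illustrative stand-in only: the cited study concerns the oblique RSF, which splits on linear combinations of features (implemented, for example, in the R package aorsf), whereas this sketch uses the standard axis-aligned RSF and a public benchmark dataset rather than the study's data.

```python
# Sketch only: standard (axis-aligned) RSF as a stand-in for the oblique RSF.
from sklearn.model_selection import train_test_split
from sksurv.datasets import load_gbsg2
from sksurv.ensemble import RandomSurvivalForest
from sksurv.preprocessing import OneHotEncoder

# Public breast-cancer survival dataset bundled with scikit-survival.
X, y = load_gbsg2()
X = OneHotEncoder().fit_transform(X)  # expand categorical covariates
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=15, random_state=0)
rsf.fit(X_train, y_train)

# Concordance index: probability that the model correctly ranks the risk
# of a randomly chosen comparable pair of patients.
print("C-index:", rsf.score(X_test, y_test))
```

The concordance index printed at the end is the accuracy notion most commonly used for the risk prediction models discussed in this statement; scalability and interpretability are the additional properties the citing study addresses.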
“…This approach has been previously used to understand machine learning models employed in the context of neurocritical care [34]. The ten variables with the highest mean absolute SHAP values were considered the most influential features in this analysis. The SHAP Python library (version 0.44.0) facilitated the analysis of the most influential features by generating (1) a summary plot of mean absolute SHAP values, (2) scatter plots of SHAP values for each variable, (3) a heatmap of SHAP interaction values, and (4) scatter plots of important interactions.…”
Section: Methods (citation type: mentioning)
confidence: 99%
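The SHAP workflow this statement describes (ranking features by mean absolute SHAP value, then producing summary, dependence, and interaction plots) maps directly onto the open-source shap library. The sketch below is illustrative only: the model and data are hypothetical stand-ins, not the cited study's cohort or fitted model.

```python
# Illustrative SHAP workflow; `model` and `X` are hypothetical stand-ins.
import numpy as np
import pandas as pd
import shap
import xgboost
from sklearn.datasets import make_classification

# Synthetic tabular data and a tree-based classifier for demonstration.
X_arr, y = make_classification(n_samples=500, n_features=20, random_state=0)
X = pd.DataFrame(X_arr, columns=[f"var_{i}" for i in range(20)])
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

# TreeExplainer gives exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)  # shap.Explanation object

# Rank features by mean absolute SHAP value; keep the ten most influential.
mean_abs = np.abs(shap_values.values).mean(axis=0)
top10 = list(X.columns[np.argsort(mean_abs)[::-1][:10]])
print(top10)

# (1) Summary (bar) plot of mean absolute SHAP values.
shap.plots.bar(shap_values, max_display=10)

# (2) Dependence scatter plot of SHAP values for the top variable.
shap.plots.scatter(shap_values[:, top10[0]])

# (3)/(4) Pairwise SHAP interaction values, usable for an interaction
# heatmap and scatter plots of important interactions.
interactions = explainer.shap_interaction_values(X)
```

In the statement's numbering, the bar plot corresponds to item (1), the dependence scatter to item (2), and the interaction matrix feeds items (3) and (4); the exact figure styling in the citing study would differ.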
“…While current ML-based classification systems yield good prediction accuracy, a significant hurdle to their broad application is the lack of attention given by researchers to the problem of model interpretability (61, 68). In addition, considerable work is required to address how effectively such models can be understood by humans.…”
Section: Model Trustworthiness and Interpretability (citation type: mentioning)
confidence: 99%