2020
DOI: 10.2196/15791

Improving Clinical Translation of Machine Learning Approaches Through Clinician-Tailored Visual Displays of Black Box Algorithms: Development and Validation

Abstract: Background Despite the promise of machine learning (ML) to inform individualized medical care, the clinical utility of ML in medicine has been limited by the minimal interpretability and black box nature of these algorithms. Objective The study aimed to demonstrate a general and simple framework for generating clinically relevant and interpretable visualizations of black box predictions to aid in the clinical translation of ML. …



Cited by 20 publications (16 citation statements)
References 51 publications (33 reference statements)
“…Although random forests are notable for their impressive predictive ability, they have minimal interpretability. This often limits their adoption in clinical practice because of the lack of ability to communicate how the predictions were generated ( 22 ). To provide an interpretable visual display as a summary of the RF-SLAM predictions, we created summary regression trees for the 1-day and 7-day predictions as simplified visualizations of the algorithm.…”
Section: Methods
confidence: 99%
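The summary-tree idea in the statement above can be sketched as a surrogate model: a shallow regression tree is fit to the random forest's own predictions rather than to the raw outcomes, so the tree visualizes what the forest learned. This is a minimal illustration with synthetic data and an assumed tree depth, not the cited RF-SLAM implementation.

```python
# Surrogate "summary" tree for a black-box random forest (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                     # synthetic patient features
y = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))  # synthetic event risk

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
risk = forest.predict(X)                          # black-box predictions

# Fit a depth-limited tree to the FOREST'S predictions, not the labels,
# so the tree is a readable summary of the algorithm's behavior.
summary_tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, risk)
print(export_text(summary_tree, feature_names=["x0", "x1", "x2", "x3"]))

# Fidelity: how closely the simple tree reproduces the forest's output (R^2).
fidelity = summary_tree.score(X, risk)
```

The fidelity score makes the trade-off explicit: a deeper summary tree tracks the forest more closely but is harder for a clinician to read.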
“…However, to date, most of the research literature on AI in health care deals with the development, application, and evaluation of advanced analytic techniques and models [ 10 - 12 ], primarily within computer science, engineering, and medical informatics. The literature on the implementation of AI to improve existing clinical workflows is more fragmented and mostly based on nonempirical data from proof-of-concept studies [ 1 , 13 ] across multiple subject areas, such as data governance [ 14 ], ethics [ 15 ], accountability [ 3 ], interpretability [ 16 ], and regulation [ 17 ]. This means that there are uncertainties around factors that influence the implementation of AI in real-world health care setups [ 10 ] and that health care professionals lack guidance on how to implement AI in their daily practices [ 18 ].…”
Section: Introduction
confidence: 99%
“…The pipeline comprises two stacked modules, each making predictions from a view of the patient's data: dynamic time series and static features. The stacking of the pipeline's modules enables mimicking a clinician's approach to making prognostic decisions, by taking into account the interplay between the temporal … Model interpretability has been extensively linked with clinical utility and trust in clinical risk prediction systems [56], where clinicians are generally reluctant to accept decisions guided by 'black-box' machine learning models [55], [67]. KD-OP enables the post-hoc interpretation of its predictions by extending the idea of using attention weights to interpret neural network outcomes [9].…”
Section: Discussion
confidence: 99%
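The two-view stacking described in the statement above can be sketched as follows: one first-stage model scores the time-series view, another scores the static view, and a second-stage learner combines the two risk scores. All model choices, names, and data here are illustrative assumptions, not the authors' KD-OP implementation (which, per the statement, also uses attention weights for interpretation).

```python
# Stacked two-view risk pipeline (illustrative sketch).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
ts = rng.normal(size=(n, 24))        # e.g., 24 hourly vital-sign summaries
static = rng.normal(size=(n, 5))     # e.g., age, comorbidity flags
y = (ts.mean(axis=1) + static[:, 0] > 0).astype(int)  # synthetic outcome

# Stage 1: one model per view, each producing a risk score.
m_ts = GradientBoostingClassifier(random_state=0).fit(ts, y)
m_st = LogisticRegression().fit(static, y)
scores = np.column_stack([m_ts.predict_proba(ts)[:, 1],
                          m_st.predict_proba(static)[:, 1]])

# Stage 2: the stacker combines per-view scores into a final prediction,
# capturing the interplay between the temporal and static views.
stacker = LogisticRegression().fit(scores, y)
accuracy = (stacker.predict(scores) == y).mean()
```

In a real deployment the stage-1 scores fed to the stacker would be out-of-fold (cross-validated) predictions rather than in-sample ones, to avoid the second stage overfitting to stage-1 training error.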