2022
DOI: 10.1002/wsbm.1548

Explainable deep learning in healthcare: A methodological survey from an attribution view

Abstract: The increasing availability of large collections of electronic health record (EHR) data and unprecedented technical advances in deep learning (DL) have sparked a surge of research interest in developing DL-based clinical decision support systems for diagnosis, prognosis, and treatment. Despite the recognition of the value of deep learning in healthcare, impediments to further adoption in real healthcare settings remain due to the black-box nature of DL. Therefore, there is an emerging need for interpretable DL…

Cited by 30 publications (14 citation statements)
References 127 publications
“…Deep learning, being one of the unprecedented technical advances in healthcare research, assists clinicians in understanding the role of artificial intelligence in clinical decision making. Hence, deep learning could serve as a vehicle for the translation of modern biomedical data, including electronic health records, imaging, omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured, to bridge clinical research and human interpretability [ 101 , 102 ].…”
Section: Discussion (mentioning)
confidence: 99%
“…In local interpretation methods, XAI methods attempt to accurately describe individual sample predictions as the sum of feature effects; for example, LIME explains individual predictions by substituting a locally interpretable surrogate model for the complex model, while Shapley values attempt to fairly attribute the prediction to individual features. In contrast to local interpretation methods, global methods such as SHAP feature importance, coefficients of regression models, and permutation-based feature importance are frequently expressed as expected values over the distribution of the data, in order to investigate the knowledge encoded in the model and its effect on predictions [117]. Depending on the scope of the problem, clinicians may consider different levels of interpretability.…”
Section: Evolution of XAI Methods (mentioning)
confidence: 99%
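The permutation-based feature importance that this statement names can be illustrated with a minimal sketch, not taken from the survey: shuffle one feature's column across samples and measure the average drop in a performance metric. The toy model and data below are hypothetical, chosen only so the effect is visible.

```python
import random

def permutation_importance(predict, X, y, n_features, metric, n_repeats=10, seed=0):
    """Global importance of each feature, estimated as the mean drop in the
    metric when that feature's column is shuffled across samples."""
    rng = random.Random(seed)
    baseline = metric(y, [predict(row) for row in X])
    importances = []
    for j in range(n_features):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the link between feature j and the target
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - metric(y, [predict(row) for row in X_perm]))
        importances.append(sum(drops) / n_repeats)
    return importances

def accuracy(y_true, y_pred):
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

# Hypothetical "model": its prediction depends only on feature 0.
predict = lambda row: int(row[0] > 0.5)
X = [[i / 9, (i * 7) % 10 / 10] for i in range(10)]
y = [int(row[0] > 0.5) for row in X]

imp = permutation_importance(predict, X, y, n_features=2, metric=accuracy)
# Shuffling feature 0 hurts accuracy; shuffling the ignored feature 1 never does.
```

Because the importance is averaged over the data distribution rather than computed per sample, it is a global explanation in the sense the statement describes, unlike LIME or per-instance Shapley values.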
“…Outlining juxtapleural nodules precisely is difficult since the contours may contain the lung walls, reducing the thresholding method's performance. Several studies have used morphological methods [17], the global thresholding method [18], or deep neural networks [19] to segment the lung. However, taking the entire CT image as input, these methods require substantial processing and are not sufficient to eliminate all lung wall cells.…”
Section: Pleural Surface Removal (mentioning)
confidence: 99%
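The global thresholding this statement refers to can be sketched with Otsu's method, which selects the single intensity cutoff that maximizes between-class variance. This is a minimal illustration, not the cited implementation [18]; the bimodal intensity values below are hypothetical stand-ins for dark lung field versus bright lung wall.

```python
def otsu_threshold(pixels, levels=256):
    """Return the global threshold t (background: p <= t) that maximizes
    between-class variance over an intensity histogram (Otsu's method)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0   # intensity mass of the background class
    w_bg = 0       # pixel count of the background class
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Hypothetical bimodal "image": dark lung field vs. bright lung wall.
pixels = [20] * 50 + [30] * 50 + [200] * 40 + [210] * 40
t = otsu_threshold(pixels)
mask = [p > t for p in pixels]  # True marks bright (wall-like) pixels
```

A single global cutoff like this works only when the intensity histogram is cleanly bimodal, which is exactly why the statement notes it degrades on juxtapleural nodules whose contours merge with the lung wall.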
“…On the other hand, deep learning models' limited interpretability and generalizability have been a significant barrier to their widespread adoption in healthcare settings. For instance, their performance is highly dependent on the design and parameters of the neural networks [16]; a model that performs well on one dataset may perform poorly on another; and there is no obvious rationale connecting the data about a case to the model's judgments [17]. Moreover, interpretability is critical for doctors making clinical decisions.…”
Section: Introduction (mentioning)
confidence: 99%