Transparency of deep neural networks for medical image analysis: A review of interpretability methods
2022
DOI: 10.1016/j.compbiomed.2021.105111

Cited by 210 publications (107 citation statements) | References 63 publications
“…However, one limitation of this model is that it does not satisfactorily explain its decisions. As deep learning models have been increasingly applied to medical image analysis, there is an evolving interest in the interpretability of these models (Salahuddin et al., 2022; Lipton, 2017; Zech et al., 2018; Ghassemi et al., 2021). While an exhaustive interpretation of deep learning QC models is beyond the scope of this work, we provided a preliminary qualitative interpretation of the CNN-i model (Figure 6) that demonstrates the intuitive nature of its decisions.…”
Section: Discussion
confidence: 99%
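
The "preliminary qualitative interpretation" referred to above is typically produced with a saliency map. As a minimal, hypothetical sketch (the CNN-i architecture and its inputs are not reproduced in the excerpt), input-gradient saliency for a generic PyTorch image classifier could look like this:

```python
import torch

def saliency_map(model, image, target_class):
    """Input-gradient saliency: magnitude of d(score)/d(pixel) for the
    class of interest. `image` is a (1, C, H, W) tensor that does not
    already require gradients."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]  # logit of the target class
    score.backward()
    # Collapse channels into a single (H, W) importance map.
    return image.grad.detach().abs().max(dim=1).values.squeeze(0)
```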
“…Here, we expand the hybrid-QC approach to a large multi-site dMRI dataset. Moreover, one of the common critiques of deep learning is that it can learn irrelevant features of the data and does not provide information that is transparent enough to interpret (Lipton, 2017; Salahuddin et al., 2022; Zech et al., 2018). To confirm that the hybrid-QC deep learning algorithm uses meaningful features of the diffusion-weighted images (DWI) to perform accurate QC, we used machine learning interpretation methods that pry open the “black box” of the neural network, thereby highlighting the features that lead to a specific QC score (Sundararajan et al., 2017; Murdoch et al., 2019).…”
Section: Introduction
confidence: 99%
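
The interpretation method cited here (Sundararajan et al., 2017) is Integrated Gradients, which attributes a scalar output to input features by averaging gradients along a straight-line path from a baseline to the input and scaling by the input-baseline difference. A minimal sketch, assuming a PyTorch model that maps a DWI volume to a scalar QC score; the zero baseline and step count are illustrative choices, not the authors' exact setup:

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    """Integrated Gradients (Sundararajan et al., 2017):
    (x - x') * integral_0..1 of grad F(x' + a*(x - x')) da,
    approximated with a Riemann sum over `steps` interpolation points."""
    if baseline is None:
        baseline = torch.zeros_like(x)  # common (assumed) baseline choice
    grads = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        output = model(point).sum()  # scalar QC score assumed
        output.backward()
        grads.append(point.grad.detach())
    avg_grad = torch.stack(grads).mean(dim=0)
    return (x - baseline) * avg_grad  # per-voxel attributions
```

The returned attribution tensor has the same shape as the input, so it can be overlaid on the DWI to highlight which voxels drove a given QC score.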
“…However, the projection of original samples in latent spaces with regularized dimensions for different attributes (see Figure 7) could be used as an interpretable attribute selection, identifying the ones that better separate the analyzed classes, such as the maximum 2D diameter of the myocardium and the LV volume attributes in our experiments. Further work will focus on fully integrating advanced feature selection techniques with the Attri-VAE model, as well as exploring alternative interpretability methods (see the recent review by Salahuddin et al. (2022)) to better understand the role of clinical and imaging attributes in medical decisions in cardiovascular applications.…”
Section: Discussion
confidence: 99%
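
For context, the "regularized dimensions" in Attri-VAE tie individual latent coordinates to individual clinical or imaging attributes through an attribute-regularization penalty added to the usual VAE objective. A minimal sketch of one common form of that penalty (an AR-VAE-style pairwise loss; the exact term used by Attri-VAE may differ):

```python
import torch

def attribute_regularization(z_dim, attr, delta=1.0):
    """Encourage latent coordinate `z_dim` (shape: [batch]) to increase
    monotonically with attribute `attr` (shape: [batch]) by matching the
    soft sign of pairwise latent differences to the sign of pairwise
    attribute differences."""
    dz = z_dim.unsqueeze(0) - z_dim.unsqueeze(1)  # [batch, batch]
    da = attr.unsqueeze(0) - attr.unsqueeze(1)
    return torch.nn.functional.l1_loss(torch.tanh(delta * dz),
                                       torch.sign(da))
```

Minimizing this term makes the chosen latent coordinate order samples the same way the attribute does, which is what makes projections along that dimension readable as the attribute selection described above.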
“…Recently, Salahuddin et al. [107] reviewed a set of interpretability methods, grouping them into nine categories based on the type of explanations generated. They also discuss the problem of evaluating explanations and describe a set of evaluation strategies adopted to quantitatively and qualitatively measure the quality of explanations.…”
Section: General Reviews (Tjoa and Guan)
confidence: 99%