2021
DOI: 10.1088/1361-6560/abcd17

Interpretation and visualization techniques for deep learning models in medical imaging

Abstract: Deep learning approaches to medical image analysis tasks have recently become popular; however, they suffer from a lack of human interpretability critical for both increasing understanding of the methods’ operation and enabling clinical translation. This review summarizes currently available methods for performing image model interpretation and critically evaluates published uses of these methods for medical imaging applications. We divide model interpretation in two categories: (1) understanding model structure …

Cited by 80 publications (52 citation statements)
References 110 publications (155 reference statements)

Citation statements, ordered by relevance:
“…With this in mind, regardless of the training accuracy scores attained, further analysis is needed to explain why decisions are made and predictions are given. We analyse several synthetic images through Gradient-weighted Class Activation Mapping (Grad-CAM) [23,48]. Class activation maps are produced by the convolutional neural network trained only on real images when given synthetic data as input.…”
Section: Classification Model Analysis and Pruning
Citation type: mentioning (confidence: 99%)
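
The Grad-CAM procedure referenced in the statement above weights a network's final convolutional feature maps by the spatial average of the class-score gradients and sums them into a coarse localisation heatmap. The following is a minimal sketch of that computation, assuming a PyTorch ResNet-18 classifier and a randomly generated input tensor; the model, layer, and input names are hypothetical, and this is not the code used in the cited or citing work.

```python
# Minimal Grad-CAM sketch (PyTorch); model, layer, and input are hypothetical.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
target_layer = model.layer4[-1]  # last convolutional block

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

def grad_cam(image, class_idx=None):
    """Return a heatmap in [0, 1] at the spatial size of the input image."""
    logits = model(image)                      # forward pass stores activations
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()            # backward pass stores gradients

    # Channel weights = global average of gradients; CAM = ReLU of weighted sum.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]

# Example call on a random tensor standing in for a (synthetic) image:
heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```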
“…For visualisation and interpretation, saliency maps were generated by the process of occlusion (Zeiler and Fergus, 2014; Huff et al., 2021). Specifically, hexagonal patches of the input cortical metrics were first replaced with uniform values; then these occluded images were passed through the trained models.…”
Section: Visualisation
Citation type: mentioning (confidence: 99%)
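
The occlusion procedure described in the statement above scores each region of the input by how much the class output drops when that region is replaced with a uniform value. The sketch below assumes square patches and a generic PyTorch classifier in place of the hexagonal cortical-metric patches used in the citing study; all names are hypothetical and the code is illustrative only.

```python
# Minimal occlusion-saliency sketch (PyTorch); square patches stand in for the
# hexagonal patches of the citing study. Assumes H and W divisible by `patch`.
import torch

@torch.no_grad()
def occlusion_map(model, image, class_idx, patch=16, fill=0.0):
    """Score drop when each patch is replaced by a uniform value."""
    model.eval()
    base = model(image)[0, class_idx].item()      # unoccluded class score
    _, _, H, W = image.shape
    heat = torch.zeros(H // patch, W // patch)

    for i in range(0, H, patch):
        for j in range(0, W, patch):
            occluded = image.clone()
            occluded[:, :, i:i + patch, j:j + patch] = fill   # uniform patch
            score = model(occluded)[0, class_idx].item()
            heat[i // patch, j // patch] = base - score       # larger drop = more important
    return heat

# Usage (hypothetical model and input):
# saliency = occlusion_map(trained_model, input_image, class_idx=1, patch=8)
```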
“…including more variables than lung V20, will need methods for interpreting the predictions to overcome the perception of models as a 'black box'. Interpretability has been an issue for complex machine learning models especially with the use of deep learning networks for image classification [61,62], however there are techniques to visualise the focal points of models allowing users to review the factors involved in decision making [63]. Increased data availability and variation is expected to improve the development of outcome models and help to assess both changes in practice and outcome.…”
Section: Predicting Outcomes Following Radiation Therapy
Citation type: mentioning (confidence: 99%)