2020
DOI: 10.48550/arxiv.2008.02766
Preprint
Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging

Cited by 8 publications (9 citation statements)
References 0 publications
“…These methods arose in computer vision and have demonstrated empirical utility in producing nonlinear factor models where the factors are conceptually sensible. Yet, due to the black-box nature of deep learning, explanations for how the factors are generated from the data, using local saliency maps for instance, are unreliable or imprecise (Laugel et al, 2019;Slack et al, 2020;Arun et al, 2020). In imaging applications, where the features are raw pixels, this type of interpretability is unnecessary.…”
Section: Disentangled Autoencoders
mentioning
confidence: 99%
“…Nevertheless, the diagnosis of COVID through imaging is a delicate problem, and its solution should present high robustness to risk in order to be useful. A number of authors have written on the challenges of such a use [Arun et al, 2020; DeGrave et al, 2020; Wynants et al, 2020], with some pointing out that the most promising use of X-rays would likely be in assessing disease severity and progression in a prognostic approach [Cohen et al, 2020b; Manna et al, 2020]. With this in mind, this methodology adopts a different use of the data set, in which classes of patients with different ICU-related outcomes are considered.…”
Section: Data Sets
mentioning
confidence: 99%
“…One popular technique for interpreting convolutional networks is the saliency map, which provides a heatmap overlay of network attention computed from gradients of the output with respect to the input [Simonyan and Zisserman, 2014]. Nevertheless, there are still serious concerns as to whether saliency mapping techniques accurately reflect trained model parameters [Adebayo et al, 2018], since they have not been shown to be robust under rigorous examination in the context of medical imaging [Arun et al, 2020].…”
mentioning
confidence: 99%
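For readers unfamiliar with the gradient-based saliency maps referenced in the statement above, the following is a minimal sketch of how a vanilla gradient saliency heatmap is typically computed in PyTorch. The model, weights, and input are placeholders chosen for illustration; they are not taken from any of the cited works.

```python
import torch
import torchvision.models as models

# Minimal sketch of a vanilla gradient saliency map: the heatmap is the
# magnitude of the gradient of the predicted class score with respect to
# the input pixels.

model = models.resnet18(weights=None)  # placeholder classifier (untrained)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input

scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Collapse the channel dimension to obtain a single-channel heatmap.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape: (224, 224)
```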
“…(2) Explain black-box neural networks post hoc by creating approximations, saliency maps, or derivatives. Post hoc explanations can be problematic; for instance, saliency maps highlight regions of the image (show attention), but can be unreliable and misleading, as they tend to highlight edges and do not show what computation is actually done with the highlighted pixels [Rudin, 2019; Adebayo et al, 2018; Arun et al, 2020]. There are several types of approaches in interpretable machine learning, including case-based reasoning (which we use here), forcing the network to use logical conditions within its last layers [e.g., Wu and Song, 2019], or disentangling the neural network's latent space [e.g., …].…”
Section: Introduction
mentioning
confidence: 99%
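The concern that saliency maps may not reflect what a trained model actually computes motivates parameter-randomization sanity checks of the kind proposed by Adebayo et al [2018] and applied to medical imaging by Arun et al [2020]. The sketch below illustrates the general idea under assumed placeholder names (a ResNet-18 with an fc head, a random input, a helper vanilla_saliency): compare the saliency map from the trained model with the map obtained after randomizing a layer's weights; if the two remain highly similar, the explanation is insensitive to the learned parameters.

```python
import copy
import torch
import torchvision.models as models

def vanilla_saliency(model, image):
    """Gradient of the top class score with respect to the input pixels."""
    image = image.clone().detach().requires_grad_(True)
    scores = model(image)
    scores[0, scores.argmax(dim=1).item()].backward()
    return image.grad.abs().max(dim=1).values.squeeze(0)

model = models.resnet18(weights=None)  # placeholder "trained" model
model.eval()
image = torch.rand(1, 3, 224, 224)     # placeholder input

# Randomize the final layer's weights, as in cascading randomization.
randomized = copy.deepcopy(model)
torch.nn.init.normal_(randomized.fc.weight)

sal_trained = vanilla_saliency(model, image)
sal_random = vanilla_saliency(randomized, image)

# High similarity between the two maps suggests the explanation does not
# actually depend on the learned parameters.
similarity = torch.nn.functional.cosine_similarity(
    sal_trained.flatten(), sal_random.flatten(), dim=0
)
print(f"cosine similarity after randomization: {similarity.item():.3f}")
```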