2022
DOI: 10.48550/arxiv.2205.10900
Preprint

Visual Explanations from Deep Networks via Riemann-Stieltjes Integrated Gradient-based Localization

Abstract: Neural networks are becoming increasingly better at tasks that involve classifying and recognizing images. At the same time, techniques intended to explain the network output have been proposed. One such technique is the Gradient-based Class Activation Map (Grad-CAM), which is able to locate features of an input image at various levels of a convolutional neural network (CNN), but is sensitive to the vanishing gradients problem. There are techniques, such as Integrated Gradients (IG), that are not affected by that…
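
For context, the sketch below shows the Integrated Gradients technique the abstract refers to, with the path integral approximated by a plain Riemann sum (the preprint generalizes this to a Riemann-Stieltjes integral). The PyTorch model interface, zero baseline, and step count are illustrative assumptions, not the paper's implementation.

```python
import torch

def integrated_gradients(model, x, target_class, baseline=None, steps=50):
    # Straight-line path from a baseline (default: all-zeros) to the input x;
    # the path integral of gradients is approximated by a Riemann sum.
    if baseline is None:
        baseline = torch.zeros_like(x)
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps + 1)[1:]:
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        score = model(point.unsqueeze(0))[0, target_class]
        total_grad += torch.autograd.grad(score, point)[0]
    # Scale the averaged gradients by (x - baseline), per the completeness axiom
    return (x - baseline) * total_grad / steps
```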

Cited by 2 publications (2 citation statements)
References 8 publications

“…However, these models lack explainability [21,22,31,46]. Although many XDL methods have been proposed for natural image problems [47-49], relatively little attention has been paid to model explainability in the context of brain imaging applications [19,50]. Consequently, the lack of interpretability in the models has been a concern for radiologists and healthcare professionals, who find the black-box nature of the models inadequate for their needs.…”
Section: Related Work (mentioning)
confidence: 99%
“…To further evaluate the performance of GATL, we visualized the Class Activation Map (CAM) of the features extracted by GATL, as shown in Figure 10. CAM is a tool that helps researchers [51] visualize CNNs. It can clearly show the image regions that the network is focusing on.…”
Section: Class Activation Map Visualisation (mentioning)
confidence: 99%
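
As a concrete reference for the CAM visualisation the citing paper describes, here is a minimal Grad-CAM sketch in PyTorch. The backbone (an untrained torchvision resnet18), the hooked layer, and the random input are illustrative assumptions, not the cited paper's GATL setup.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # untrained backbone, for illustration
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out            # feature maps of the hooked layer

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0]      # gradients w.r.t. those feature maps

# Hook the last convolutional stage
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed image
logits = model(x)
cls = logits.argmax(dim=1).item()
model.zero_grad()
logits[0, cls].backward()

# Channel weights = global-average-pooled gradients; weighted sum + ReLU
w = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```

Overlaying the resulting heatmap on the input image highlights the regions the network attends to, which is the visualisation the excerpt refers to.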