2020
DOI: 10.1007/978-3-030-58520-4_43

A Generic Visualization Approach for Convolutional Neural Networks

Cited by 7 publications (3 citation statements)
References: 32 publications
“…saturation, zero-gradient image regions, and false confidence in the output score phenomena [70]), the computational cost of Grad-CAM is negligible compared to other methods that require multiple network forward-passes per image [70,130]. Moreover, in most recent works, Grad-CAM is used as the baseline method against which improvement margins are reported [130][131][132][133][134].…”
Section: Visual Explanationmentioning
confidence: 99%
“…Although gradient-based methods might not be the optimal solution for visual explanation (e.g., saturation, zero-gradient image regions, and false confidence in the output score phenomena [156]), the computational cost of Grad-CAMs is negligible when compared to other methods that require multiple network forward-passes per image [156,157]. Moreover, Grad-CAM is considered the reference method in several recent works [157][158][159][160][161].…”
Section: Making the Model Interpretablementioning
confidence: 99%
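The statements above contrast Grad-CAM's cost (one forward pass plus one backward pass per image) with perturbation-style methods that need many forward passes. The following is a minimal illustrative sketch of that idea, not the method proposed in the cited paper: it assumes PyTorch, a torchvision ResNet-50, and "layer4" as the target convolutional block, all of which are hypothetical choices made only for the example.

# Minimal Grad-CAM sketch (assumed setup: PyTorch, torchvision ResNet-50,
# "layer4" as the target conv block; any CNN with a known conv layer works similarly).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None).eval()
target_layer = model.layer4  # hypothetical choice of target convolutional block

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # capture the feature maps produced by the target layer
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    # capture the gradient of the class score w.r.t. those feature maps
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """One forward and one backward pass yield the class-discriminative heatmap."""
    scores = model(image)                      # single forward pass
    if class_idx is None:
        class_idx = scores.argmax(dim=1).item()
    model.zero_grad()
    scores[0, class_idx].backward()            # single backward pass

    acts = activations["value"]                # (1, C, H, W) feature maps
    grads = gradients["value"]                 # gradients w.r.t. the feature maps
    weights = grads.mean(dim=(2, 3), keepdim=True)            # GAP of gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))   # weighted sum + ReLU
    cam = F.interpolate(cam, size=image.shape[-2:],
                        mode="bilinear", align_corners=False)
    cam -= cam.min()
    cam /= cam.max().clamp(min=1e-8)           # normalize to [0, 1] for overlaying
    return cam.squeeze()

# Usage example (dummy input): heatmap = grad_cam(torch.randn(1, 3, 224, 224))

Because the heatmap is obtained from a single forward and a single backward pass, the per-image cost stays close to ordinary inference, which is the point the quoted statements make when comparing Grad-CAM to methods that repeatedly re-run the network on perturbed inputs.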
“…However, this subsymbolism (also known as the opaque or black-box model) is vulnerable to the underlying barrier of explainability in response to critical questions like how a particular trained model arrives at a decision, how certain it is about its decision, if and when it can be trusted, why it makes certain mistakes, and in which part of the learning algorithm or parametric space correction should take place [28], [4]. Explainability in CNNs is linked to post-hoc explainability [18] and, as proposed by Arrieta et al [4], relies on model simplification [56], [36], [23], feature-relevance estimation [6], [33], [29], [38], visualisation [53], [30], [26], [39], [48], [22], and architectural modification [27], [15], [40] to convert a non-interpretable model into an explainable one.…”
Section: Introductionmentioning
confidence: 99%