2020
DOI: 10.1016/j.patrec.2020.04.004

Understanding the decisions of CNNs: An in-model approach

Cited by 19 publications (7 citation statements). References 8 publications.
“…These activation maps can be visualized to see what characteristics are being picked up by the network to work toward final prediction results. An understanding of how deep learning models decide is very important for the deployment of robust, transparent, and trustworthy systems in real-world situations (Rio-Torto et al., 2020). Toward this end, most methods adopt a gradient-based approach and produce explanations called heatmaps by propagating pixel-wise relevance backward to the input of the network to highlight…”
Section: Methods (mentioning)
confidence: 99%
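As a rough illustration of the gradient-based approach this statement describes, a minimal saliency-heatmap computation might look like the sketch below. The model, dummy input, and channel reduction are illustrative assumptions, not code from the cited paper.

```python
# Minimal sketch of a gradient-based saliency heatmap: the score of the
# predicted class is backpropagated to the input, and the pixel-wise
# gradient magnitude serves as the explanation. Model and input are dummies.
import torch
import torchvision.models as models

model = models.vgg16(weights=None).eval()              # any CNN classifier works here
x = torch.randn(1, 3, 224, 224, requires_grad=True)    # placeholder "image"

logits = model(x)
predicted = logits.argmax(dim=1).item()
logits[0, predicted].backward()                        # relevance flows back to the pixels
heatmap = x.grad.abs().max(dim=1).values.squeeze(0)    # (224, 224) saliency map
```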
“…Model explainability is essential for gaining trust and acceptance of AI systems in high-stakes areas, such as healthcare, where reliability and safety are critical [43], [44]. Medical anomaly detection [45], healthcare risk prediction systems [46], [47], [48], [49], genetics [50], [51], and healthcare image processing [52], [53], [54] are some of the areas moving towards the adoption of XAI. Another area is finance, such as AI-based credit score decisions [55], [56] and counterfeit banknote detection [57].…”
Section: Explainable Artificial Intelligence (XAI) (mentioning)
confidence: 99%
“…In contrast to the previous approaches, Rio-Torto et al. [102] proposed an in-model joint architecture composed of an explainer and a classifier to produce visual explanations for the predicted class labels. The explainer consists of an encoder-decoder network based on U-Net, and the classifier is based on VGG-16.…”
Section: Saliency (mentioning)
confidence: 99%
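The joint design described in this statement can be sketched, very loosely, as an explainer that produces a mask which modulates the input before classification. The toy explainer below is not the U-Net of Rio-Torto et al., and all layer sizes and the masking step are assumptions for illustration only.

```python
# Loose sketch of an in-model explainer + classifier pair: the explainer
# outputs a single-channel explanation map in [0, 1], and the classifier
# (VGG-16 here) sees the input modulated by that map.
import torch
import torch.nn as nn
import torchvision.models as models

class JointExplainerClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Toy stand-in for the encoder-decoder explainer.
        self.explainer = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.classifier = models.vgg16(weights=None)
        self.classifier.classifier[6] = nn.Linear(4096, num_classes)

    def forward(self, x):
        explanation = self.explainer(x)                # (B, 1, H, W) visual explanation
        logits = self.classifier(x * explanation)      # classify the masked input
        return logits, explanation
```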
“…Recently, Rio-Torto et al. [102] proposed the POMPOM (Percentage of Meaningful Pixels Outside the Mask) metric, which reports the number of meaningful pixels outside the region of interest in relation to the total number of pixels, to evaluate the quality of a given explanation. Similarly, Barnett et al. [13] introduced the Activation Precision evaluation metric to quantify the proportion of relevant information from the "relevant region" used to classify the mass margin regarding the radiologist annotations.…”
Section: Evaluating the Quality of Visual Explanations (mentioning)
confidence: 99%
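A hedged sketch of the quantity described for POMPOM follows, under the assumption that "meaningful" pixels are those above a fixed threshold and that the denominator is the total pixel count; the original paper should be consulted for the exact definition.

```python
# Sketch of the described quantity: the fraction of "meaningful" explanation
# pixels that fall outside the region-of-interest mask, relative to the total
# number of pixels. Threshold and normalisation are assumptions.
import numpy as np

def pompom(explanation: np.ndarray, roi_mask: np.ndarray, threshold: float = 0.5) -> float:
    """explanation: heatmap in [0, 1]; roi_mask: boolean region of interest."""
    meaningful = explanation >= threshold      # pixels deemed "meaningful"
    outside = meaningful & ~roi_mask           # meaningful pixels outside the ROI
    return float(outside.sum() / explanation.size)
```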