2021
DOI: 10.1117/1.jei.30.5.050901
Review of white box methods for explanations of convolutional neural networks in image classification tasks

Abstract: In recent years, deep learning has become prevalent to solve applications from multiple domains. Convolutional neural networks (CNNs) particularly have demonstrated state-of-the-art performance for the task of image classification. However, the decisions made by these networks are not transparent and cannot be directly interpreted by a human. Several approaches have been proposed to explain the reasoning behind a prediction made by a network. We propose a topology of grouping these methods based on their assump…

Cited by 15 publications (8 citation statements)
References 41 publications
“…Methodology. In [1] and [13], a methodology for evaluating explanation methods was proposed. It consists in comparing the pixel importance maps obtained by sensing the network with maps expressing human perception of the same visual content.…”
Section: Evaluation of MLFEM Explanations
confidence: 99%
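The evaluation methodology referenced above compares a model-produced pixel-importance map against a map of human visual attention (for example, a gaze-fixation density map). The following is a minimal sketch of that comparison step, assuming both maps are same-sized 2-D arrays and using Pearson correlation as the similarity measure; the cited works may use other metrics, so this is an illustration rather than their exact protocol.

```python
import numpy as np

def pearson_similarity(importance_map: np.ndarray, fixation_map: np.ndarray) -> float:
    """Pearson correlation between two same-sized 2-D maps (higher = more similar)."""
    a = importance_map.astype(np.float64).ravel()
    b = fixation_map.astype(np.float64).ravel()
    # Standardise each map before taking the mean of the element-wise product.
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

# Hypothetical usage: both inputs are HxW arrays normalised to [0, 1].
# score = pearson_similarity(explanation_map, gaze_density_map)
```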
“…The need for explanations of Deep Neural Network (DNN) decisions has led to active research in eXplainable Artificial Intelligence (XAI). In the field of pattern recognition for images and videos, explaining a DNN's decision consists in identifying the set of input pixels that contributed the most to the decision [1]. A famous example of a decision based on wrong data is given by Ribeiro et al. [2]: here, a trained classifier wrongly used the presence of snow as the distinguishing feature between the "Wolf" and "Husky" classes.…”
Section: Introduction and Related Work
confidence: 99%
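As a concrete illustration of the idea quoted above, identifying the input pixels that contribute most to a decision, the sketch below computes a plain gradient (saliency) map for a single prediction. The choice of a torchvision ResNet-18 and of vanilla gradients is an illustrative assumption, not a method prescribed by the cited review.

```python
import torch
import torchvision.models as models

# Illustrative model choice; any differentiable image classifier works the same way.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def saliency_map(image: torch.Tensor, target_class: int) -> torch.Tensor:
    """image: (1, 3, H, W) normalised tensor; returns an (H, W) pixel-importance map."""
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]   # class score for the chosen label
    score.backward()                        # gradients of the score w.r.t. the input pixels
    # Per-pixel importance: maximum absolute gradient over the colour channels.
    return image.grad.abs().max(dim=1).values.squeeze(0)
```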
“…Even though DL models are commonly referred to as black boxes, there are ways to visualize the regions of the imaging data that contribute the most to their predictions. This is called DL interpretation (or explanation) mapping [49]. Despite its utmost importance, the majority of research employing DL classifier or regressor models did not report interpretation mapping for their models.…”
Section: Synthesis
confidence: 99%
“…As standard CNN decision functions are not easily invertible (Finzi et al., 2019), voxels' contributions to the image-level predictions are unavailable, leading to the development of several indirect approaches to quantify surrogate voxel-importance measures (Ayyar et al., 2021), most of which may be classified as backpropagation-based or perturbation-based (van der Velden et al., 2022). Class activation mapping approaches (Selvaraju et al., 2020; Simonyan et al., 2014; Zhou et al., 2016) estimate regions' importance by extracting gradients from the final convolutional layers.…”
Section: Introduction
confidence: 99%
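The class activation mapping idea mentioned in the last quotation can be sketched as follows: the feature maps of the final convolutional block are weighted by the spatially averaged gradients of the class score and summed. The model, hook placement, and layer choice below are illustrative assumptions in the spirit of Grad-CAM, not the exact procedure of any cited work.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Illustrative model; layer4 plays the role of the "final convolutional layers".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
feats = {}
model.layer4.register_forward_hook(lambda m, inp, out: feats.update(a=out))

def grad_cam(image: torch.Tensor, target_class: int) -> torch.Tensor:
    """image: (1, 3, H, W) tensor; returns a coarse (h, w) class-importance map."""
    score = model(image)[0, target_class]          # forward pass also fills feats["a"]
    act = feats["a"]                               # (1, C, h, w) feature maps
    grad = torch.autograd.grad(score, act)[0]      # gradients of the score w.r.t. the maps
    weights = grad.mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
    return F.relu((weights * act).sum(dim=1)).squeeze(0)  # weighted sum, ReLU, drop batch dim
```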