2021
DOI: 10.1109/access.2021.3051171
Explaining CNN and RNN Using Selective Layer-Wise Relevance Propagation

Cited by 31 publications
(25 citation statements)
References 15 publications
“…The experimental results proved that the representations at different layers hold diverse levels of meaning. Jung et al (2021) proposed a novel relevance-based algorithm, called selective layer-wise relevance propagation, to explain the outputs of image classification and text classification models through visualization. The authors used the activations selectively to calculate each activation's gradient for the output label, incorporating only the true-positive gradients during the computation.…”
Section: Convolutional Neural Network
confidence: 99%
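The selective-propagation idea quoted above — redistributing relevance through a layer while keeping only positive (true-positive) contributions — can be sketched for a single dense layer. This is a minimal illustration in the spirit of SLRP, not the authors' exact formulation; the function name `selective_relevance` and the positivity rule shown are assumptions for illustration.

```python
import numpy as np

def selective_relevance(activations, weights, relevance_out):
    # One backward relevance step: compute each input neuron's contribution
    # to each output neuron, keep only the positive contributions, then
    # redistribute the output relevance proportionally (illustrative sketch).
    z = activations[:, None] * weights          # contributions, shape (J, K)
    z_pos = np.clip(z, 0, None)                 # selective: positive only
    denom = z_pos.sum(axis=0) + 1e-9            # normalize per output neuron
    return (z_pos / denom) @ relevance_out      # relevance of each input neuron

a = np.array([1.0, 2.0, 0.5])                   # layer activations
W = np.array([[0.5, -1.0],
              [1.0,  0.5],
              [-0.5, 2.0]])                     # layer weights (3 in, 2 out)
R_out = np.array([1.0, 1.0])                    # relevance arriving from above
R_in = selective_relevance(a, W, R_out)
```

Because the positive contributions are renormalized per output neuron, the total relevance is conserved across the layer, which is the usual LRP sanity check.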
“…In recent years, research has been done to improve explainability of convolutional neural networks by highlighting features on the input image. The approach of overlaying heatmaps on input images has been picked up multiple times [7,11,12], and different algorithms have been described to improve the heatmap's quality. The heatmap is displayed as colored pixels on the original input, where deep red shows areas of interest to the neural network.…”
Section: Related Work
confidence: 99%
“…The heatmap is displayed as colored pixels on the original input, where deep red shows areas of interest to the neural network. Showing the interest in explainability as a feature for artificial intelligence in general, the heatmap approach has also been applied to recurrent neural networks [7].…”
Section: Related Work
confidence: 99%
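The overlay described in these two excerpts — blending a relevance heatmap into the input so that high-relevance regions appear deep red — can be sketched with plain NumPy. The normalization scheme and the `overlay_heatmap` helper are illustrative assumptions, not a specific paper's procedure.

```python
import numpy as np

def overlay_heatmap(image, relevance, alpha=0.5):
    # Normalize the relevance map to [0, 1], place it in the red channel,
    # and alpha-blend it onto the RGB input image (illustrative sketch).
    r = relevance - relevance.min()
    r = r / (r.max() + 1e-9)                    # scale relevance to [0, 1]
    heat = np.zeros_like(image, dtype=float)
    heat[..., 0] = r                            # red channel carries relevance
    return (1 - alpha) * image + alpha * heat   # convex blend stays in [0, 1]

img = np.random.rand(8, 8, 3)                   # dummy RGB input in [0, 1]
rel = np.random.rand(8, 8)                      # dummy relevance map
out = overlay_heatmap(img, rel)
```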
“…[20] evaluates how much each type of explanation provides reliable information to people under three different conditions. [21] uses SLRP (a modified version of Layer-wise Relevance Propagation) to propagate relevance through each layer of deep learning models that detect the category of objects in images, for both CNNs and RNNs. [22] applies Class Activation Mapping (CAM) to CNNs, which uses the weighted sum of image filters of the same size.…”
Section: Introduction
confidence: 99%
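The CAM step mentioned at the end of this excerpt — a weighted sum of the final convolutional feature maps, using the output-layer weights for the target class — can be sketched as follows. Shapes and the `class_activation_map` helper are assumptions for illustration.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    # CAM for one class: weight each final-conv feature map by that class's
    # output weight and sum over channels (illustrative sketch).
    # feature_maps: (C, H, W); class_weights: (C,) -> returns (H, W)
    return np.tensordot(class_weights, feature_maps, axes=1)

fmaps = np.random.rand(4, 7, 7)                 # dummy final-conv features
w = np.array([0.1, 0.5, 0.2, 0.2])              # dummy class weights
cam = class_activation_map(fmaps, w)            # one (7, 7) class map
```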
“…In detail, we divide the entire image into many rectangular sub-images with the same width and height. We call each divided sub-image a part, and then we make each modified image by …
[Table 1: LIME-based methods [7,13]; feature (or feature masks) [10,11,15,22,25]; LRP-based methods [14,16,21]; others [9,12,17-20,23,24]]
Table 1 compares each paper from the related works by the category of the method used. Among [6]-[25], [6,8] use the difference of the input values of a neural network to generate explanations.…”
Section: Introduction
confidence: 99%
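The part-masking step this excerpt describes — splitting the image into equal rectangular parts and producing a modified image per part — can be sketched as below. The grid layout, fill value, and `mask_part` helper are assumptions; the excerpt does not specify how the modified images are produced.

```python
import numpy as np

def mask_part(image, part_row, part_col, n_rows, n_cols, fill=0.0):
    # Split the image into an n_rows x n_cols grid of equal rectangular
    # parts and return a copy with one part masked out (illustrative sketch).
    h, w = image.shape[:2]
    ph, pw = h // n_rows, w // n_cols
    out = image.copy()
    out[part_row * ph:(part_row + 1) * ph,
        part_col * pw:(part_col + 1) * pw] = fill
    return out

img = np.ones((6, 6))                 # dummy single-channel image
masked = mask_part(img, 0, 1, 3, 3)   # mask the top-middle 2x2 part
```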