2019
DOI: 10.48550/arxiv.1904.00605
Preprint

Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks

Abstract: As Deep Neural Networks (DNNs) have demonstrated superhuman performance in a variety of fields, there is an increasing interest in understanding the complex internal mechanisms of DNNs. In this paper, we propose Relative Attributing Propagation (RAP), which decomposes the output predictions of DNNs with a new perspective of separating the relevant (positive) and irrelevant (negative) attributions according to the relative influence between the layers. The relevance of each neuron is identified with respect to …

Cited by 8 publications (15 citation statements)
References 23 publications (8 reference statements)
“…Relative Attributing Propagation (RAP) [18] decomposes the output predictions of DNNs in terms of relative influence among the neurons, resulting in assigning the relevant and irrelevant attributions with a bi-polar importance. By changing the perspective from value to influence, the generated visual explanations show the characteristics of strong objectness and a clear distinction between relevant and irrelevant attributions.…”
Section: Layerwise Relevance Propagation
confidence: 99%
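The bi-polar decomposition this excerpt describes can be illustrated with a toy LRP-style rule for a single linear layer. The sketch below is illustrative only and is not the authors' exact RAP update: it splits each per-connection contribution z_ij = x_i * w_ij into its positive and negative parts and redistributes the output relevance through each part separately, so the "relevant" and "irrelevant" shares are kept apart. All variable names are assumptions for the example.

```python
import numpy as np

def propagate_relevance(x, W, R_out, eps=1e-9):
    """Redistribute the relevance R_out of a linear layer's outputs onto its
    inputs, keeping positive and negative contributions separate.
    Illustrative LRP-style sketch, not the exact RAP rule."""
    z = x[:, None] * W                    # per-connection contributions z_ij = x_i * w_ij
    z_pos = np.clip(z, 0, None)           # positive (relevant) parts
    z_neg = np.clip(z, None, 0)           # negative (irrelevant) parts
    # Normalize each part column-wise so each output's relevance is fully
    # redistributed across the inputs that contributed to it.
    R_pos = (z_pos / (z_pos.sum(axis=0, keepdims=True) + eps)) @ R_out
    R_neg = (z_neg / (z_neg.sum(axis=0, keepdims=True) - eps)) @ R_out
    return R_pos, R_neg

# Tiny demo: 3 inputs, 2 outputs.
x = np.array([1.0, -2.0, 0.5])
W = np.array([[0.3, -0.1], [0.2, 0.4], [-0.5, 0.6]])
R_out = np.array([1.0, 0.5])
R_pos, R_neg = propagate_relevance(x, W, R_out)
```

Because each column of the normalized contribution matrix sums to one, both the positive and the negative share conserve the total output relevance; a bi-polar map in the spirit of the excerpt could then be formed as, e.g., R_pos - R_neg.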
“…DeepLIFT [22] propagates the differences of contribution scores between the activated neurons and their reference activations. The recently proposed Relative Attributing Propagation (RAP) [18] decomposes the positive (relevant) and negative (irrelevant) relevance to each neuron according to its relative influence among the neurons. By changing the perspective from value to influence, it shows a clear distinction of relevant/irrelevant attributions with a high objectness score.…”
confidence: 99%
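DeepLIFT's difference-from-reference idea can be sketched for the simplest case, a single linear unit, where attributing the output change y - y_ref to each input reduces to (x_i - ref_i) * w_i. This shows only the linear rule; DeepLIFT's actual Rescale and RevealCancel rules for nonlinearities are not reproduced here, and the function name is an assumption.

```python
import numpy as np

def deeplift_linear(x, x_ref, w, b=0.0):
    """Attribute the output difference of a linear unit y = w·x + b to its
    inputs via (x_i - ref_i) * w_i. Sketch of DeepLIFT's linear rule only."""
    y = w @ x + b
    y_ref = w @ x_ref + b
    contrib = (x - x_ref) * w
    # Summation-to-delta: contributions account exactly for the output change.
    assert np.isclose(contrib.sum(), y - y_ref)
    return contrib
```

A quick use, with an all-zeros reference: `deeplift_linear(np.array([1.0, 2.0]), np.array([0.0, 0.0]), np.array([0.5, -1.0]), b=0.3)` attributes the output change of -1.5 as [0.5, -2.0] across the two inputs.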
“…The task of generating a heatmap that indicates local relevancy from the perspective of a CNN observing an input image has been tackled from many different directions, including gradient-based methods [32,33,31], attribution methods [2,24,25,11], and image manipulation methods [6,7,22].…”
Section: Explainability
confidence: 99%
“…Pointing Game (Zhang et al 2018) assesses the attribution methods by computing the matching …

Method | mIOU
Grad-CAM (threshold: mean) + CRF | 52.14
Segmentation Prop (Guillaumin, Küttel, and Ferrari 2014a) | 57.30
DeepMask (Pinheiro, Collobert, and Dollár 2015) | 58.69
RAP (Nam et al 2019) | 59.46
DeepSaliency (Li et al 2016) | 62.12
Pixel Objectness (Xiong, Jain, and Grauman 2018) | 64.22
RSP | 60.81
RSP + CRF | 64.51 …”
Section: Evaluating Quality Of Attributions
confidence: 99%
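The "threshold: mean" entry in the table above hints at how such mIOU scores are typically computed: binarize the attribution heatmap at its mean value and measure intersection-over-union against the ground-truth segmentation mask, averaging over a dataset. A minimal sketch under that assumption (function name and data are illustrative):

```python
import numpy as np

def heatmap_iou(heatmap, mask):
    """IoU between an attribution heatmap thresholded at its mean and a
    boolean ground-truth segmentation mask. mIOU is the dataset average."""
    pred = heatmap >= heatmap.mean()          # binarize at the mean value
    inter = np.logical_and(pred, mask).sum()
    union = np.logical_or(pred, mask).sum()
    return inter / union if union else 1.0    # empty-vs-empty counts as perfect
```

For example, a 2x2 heatmap `[[0.9, 0.1], [0.8, 0.05]]` against the mask `[[1, 0], [1, 1]]` yields an IoU of 2/3: the thresholded prediction covers the left column only, matching two of the three mask pixels.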