2020
DOI: 10.1609/aaai.v34i03.5632

Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks

Abstract: As Deep Neural Networks (DNNs) have demonstrated superhuman performance in a variety of fields, there is an increasing interest in understanding the complex internal mechanisms of DNNs. In this paper, we propose Relative Attributing Propagation (RAP), which decomposes the output predictions of DNNs with a new perspective of separating the relevant (positive) and irrelevant (negative) attributions according to the relative influence between the layers. The relevance of each neuron is identified with respect to …
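The abstract's core idea — backpropagating an output prediction through the layers while separating positive (relevant) and negative (irrelevant) contributions — can be sketched for a single linear layer with a generic LRP-style rule. This is an illustrative sketch, not the exact RAP rule from the paper; the function name and the `eps` stabilizer are assumptions.

```python
import numpy as np

def propagate_relevance(W, a, R_out, eps=1e-9):
    """Propagate the relevance R_out of one linear layer's outputs back
    to its inputs, separating positive (relevant) and negative
    (irrelevant) contributions. Generic LRP-style sketch, not the exact
    RAP rule from the paper.

    W     : (n_in, n_out) weight matrix
    a     : (n_in,) input activations
    R_out : (n_out,) relevance assigned to the layer's outputs
    """
    z = a[:, None] * W                      # per-connection contributions
    z_pos = np.clip(z, 0, None)             # positive (relevant) part
    z_neg = np.clip(z, None, 0)             # negative (irrelevant) part
    # Each output unit redistributes its relevance proportionally to
    # the contributions it received; eps stabilizes empty columns.
    R_pos = (z_pos / (z_pos.sum(axis=0) + eps)) * R_out
    R_neg = (z_neg / (z_neg.sum(axis=0) - eps)) * R_out
    return R_pos.sum(axis=1), R_neg.sum(axis=1)

# Toy example: two inputs, two outputs.
W = np.array([[1.0, -1.0],
              [2.0,  0.5]])
a = np.array([1.0, 1.0])
R_out = np.array([1.0, 1.0])
R_pos, R_neg = propagate_relevance(W, a, R_out)
```

Note that the positive relevance is conserved across the layer (it sums to `R_out.sum()`), which is the conservation property LRP-family methods rely on.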

Cited by 60 publications (37 citation statements). References 16 publications.
“…The space may be analyzed and visualized to help users understand the performance of these combinations, with respect to image classes and CNN models. This is out of the scope of VisLRPDesigner as an LRP design tool but leads to a critical direction in future work. LRP for DNNs: more technique options could be added in the future, including layer fusing for Batch‐Normalization [GHK∗20] vs. its bypassing in this system, bias switching, and different attribute‐discriminative LRP approaches [NGC∗20]. Besides, while designed mostly for CNNs, LRP is also used in other DNNs such as natural language processing [AHM∗16], EEG analysis [SLSM16], and audio classification [BAL∗18].…”
Section: Discussion and Future Work
confidence: 99%
“…In this section, I have chosen some representative examples to illustrate the application of my proposed taxonomy in Figure 4. Within the category of Diagnostic-explanation are Saliency Maps [25], LIME [29], and Shapley Values [13], which identify particular input features important in affecting the output of models. The Explication-explanation category focuses on techniques to render explanations or model output understandable to humans interacting with the AI model.…”
Section: Organizing Present XAI Methods
confidence: 99%
“…More specifically, we use LRP to calculate relevance heatmaps which highlight regions in input space that speak for or against the classification decision. The LRP implementations follow [34] for ResNet and [35], [36] for transformers. We implemented LRP rules for the BoTNet, the Inception model as well as the hybrid models along the same lines, none of which have been discussed in the literature so far.…”
Section: B. Models
confidence: 99%
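The relevance heatmaps mentioned in the excerpt above are typically obtained by collapsing the per-unit relevance at the input layer over the channel axis. A minimal sketch; the max-abs normalization is an illustrative choice, not something prescribed by LRP itself:

```python
import numpy as np

def relevance_to_heatmap(R):
    """Collapse per-pixel relevance of shape (C, H, W) into a signed
    2-D heatmap scaled to [-1, 1]. Positive values speak for the
    classification decision, negative values against it. The max-abs
    normalization is an illustrative choice, not part of LRP itself.
    """
    heat = R.sum(axis=0)            # sum relevance over channels
    peak = np.abs(heat).max()
    return heat / peak if peak > 0 else heat

# Toy relevance tensor: 2 channels, 2x2 spatial grid.
R = np.array([[[0.2, -0.1],
               [0.0,  0.4]],
              [[0.3,  0.1],
               [0.0, -0.4]]])
heatmap = relevance_to_heatmap(R)
```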