Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13) 2019
DOI: 10.18653/v1/d19-5308
Layerwise Relevance Visualization in Convolutional Text Graph Classifiers

Abstract: Representations in the hidden layers of Deep Neural Networks (DNN) are often hard to interpret since it is difficult to project them into an interpretable domain. Graph Convolutional Networks (GCN) allow this projection, but existing explainability methods do not exploit this fact, i.e. do not focus their explanations on intermediate states. In this work, we present a novel method that traces and visualizes features that contribute to a classification decision in the visible and hidden layers of a GCN. Our met…
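The abstract describes tracing relevance through the visible and hidden layers of a GCN. As a minimal sketch of the general idea (not the paper's exact method), the following applies the standard ε-rule of layer-wise relevance propagation through a single GCN layer assumed to compute H' = ReLU(Â H W); the function name and two-step decomposition (through W, then through Â) are illustrative assumptions.

```python
import numpy as np

def lrp_gcn_layer(A_hat, H, W, R_out, eps=1e-6):
    """Epsilon-rule LRP through one GCN layer H' = ReLU(A_hat @ H @ W).

    Redistributes output relevance R_out (n x d_out) onto the input
    node features H (n x d_in): first back through the weight matrix W,
    then back through the normalized adjacency A_hat.
    """
    S = A_hat @ H                              # aggregated neighborhood features
    Z = S @ W                                  # linear pre-activations (ReLU is monotone,
                                               # so relevance flows through the linear part)
    stab = eps * np.where(Z >= 0, 1.0, -1.0)   # sign-matched stabilizer
    R_S = S * ((R_out / (Z + stab)) @ W.T)     # relevance on aggregated features
    stab2 = eps * np.where(S >= 0, 1.0, -1.0)
    R_H = H * (A_hat.T @ (R_S / (S + stab2)))  # relevance on input node features
    return R_H
```

Relevance is approximately conserved (R_H.sum() ≈ R_out.sum() up to the ε stabilizer), so applying this step layer by layer yields a node-level relevance map at every depth, which is the kind of intermediate-state visualization the abstract refers to.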

Cited by 52 publications (20 citation statements)
References 12 publications
“…These post-hoc methods are typically used to analyze individual feature input and output pairs, limiting their explainability to an individual-level. Several explanation methods have been presented in the literature including layer-wise relevance propagation (LRP) [225], excitation backpropagation [224], graph pruning (GNNExplainer) [223], gradient-based saliency (GraphGrad-CAM) [226], GraphGrad-CAM++ [227], and layerwise relevance propagation (GraphLRP) [228].…”
Section: Research Challenges and Future Directions
Mentioning (confidence: 99%)
“…A pioneering work on explanation techniques for GNNs was published in 2015 [251]. In the time since, several explanation methods have been presented including layer-wise relevance propagation (LRP) [252], excitation backpropagation [253], graph pruning (GNNExplainer) [250], gradient-based saliency (GRAPHGRAD-CAM) [254], GRAPHGRAD-CAM++ [255], and layerwise relevance propagation (GRAPHLRP) [256]. Attention mechanisms adopted in several medical applications discussed in our survey have also been used as another explanation technique, where the attention weights for edges can be used to measure edge importance; however, it is noted that they can only explain GAT models without explaining node features, unlike, for example, GNNExplainer.…”
Section: B Challenges In Adapting Graph-based Deep Learning Methods F...
Mentioning (confidence: 99%)
“…The work [24] extends explanation techniques such as Grad-CAM or Excitation Backprop to the GNN model, and arrives at an attribution on nodes of the graph. In an NLP context, graph convolutional networks (GCNs) have been explained in terms of nodes and edges in the input graph using the LRP explanation method [25]. GNNExplainer [26] and PGExplainer [27] explain the model by extracting the subgraph that maximizes the mutual information to the prediction for the original graph.…”
Section: Explaining Graph Neural Network
Mentioning (confidence: 99%)