2022
DOI: 10.1109/tpami.2021.3115452

Higher-Order Explanations of Graph Neural Networks via Relevant Walks

Abstract: Graph Neural Networks (GNNs) are a popular approach for predicting graph-structured data. As GNNs tightly entangle the input graph into the neural network structure, common explainable AI approaches are not applicable. To a large extent, GNNs have remained black-boxes for the user so far. In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e. by identifying groups of edges that jointly contribute to the prediction. Practically, we find that such explanations can…
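The abstract's idea of attributing a prediction to groups of edges rather than single features can be illustrated on a toy model. The sketch below, in Python with NumPy, decomposes the output of a purely linear two-layer GNN exactly over length-2 walks; the function name, the toy data, and the linearity assumption are ours, and the paper's actual GNN-LRP method extends this kind of walk decomposition to nonlinear GNNs via layer-wise relevance propagation rules.

```python
import numpy as np

def walk_relevances(A, X, W1, W2, out_node, out_class):
    """Exact walk decomposition for a *linear* two-layer GNN,
    f = A @ (A @ X @ W1) @ W2  (a toy stand-in; the paper's GNN-LRP
    handles nonlinear GNNs via LRP propagation rules).

    Returns a dict mapping each walk (i, j, out_node) to its additive
    contribution to f[out_node, out_class].
    """
    n = A.shape[0]
    node_term = X @ W1 @ W2          # per-source-node term, shape (n, n_classes)
    R = {}
    for j in range(n):               # intermediate node of the walk
        if A[out_node, j] == 0:
            continue
        for i in range(n):           # starting node of the walk
            if A[j, i] == 0:
                continue
            R[(i, j, out_node)] = A[out_node, j] * A[j, i] * node_term[i, out_class]
    return R

# Tiny usage example on a 3-node path graph (hypothetical data).
rng = np.random.default_rng(0)
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])         # adjacency with self-loops
X = rng.normal(size=(3, 4))          # node features
W1 = rng.normal(size=(4, 4))
W2 = rng.normal(size=(4, 2))         # two output classes
R = walk_relevances(A, X, W1, W2, out_node=2, out_class=0)
f = A @ (A @ X @ W1) @ W2
assert np.isclose(sum(R.values()), f[2, 0])  # walk relevances sum to the prediction
```

Walks with large positive relevance jointly support the prediction; in the nonlinear case treated by the paper, the decomposition becomes approximate rather than exact.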


Cited by 114 publications (87 citation statements)
References 48 publications
“…additivity or attention mechanisms) in the model [16], [92], [14], [59], [87]. Finally, some methods, commonly referred to as higher-order methods, aim to identify not individual features but groups of features that contribute only when occurring jointly [32], [81], [26], [66], [21], [47]. Moreover, methods have been proposed or adjusted to fit specific model architectures (e.g.…”
Section: Other Methods
confidence: 99%
“…A further distinction can be made between methods that explain based on individual input features [78], [11], [10], [79], combinations of input features (e.g. [16], [66], [26]), or higher-level concepts [40]. While explanation methods are often evaluated based on their technical merit (e.g.…”
Section: A Brief Review Of XAI
confidence: 99%
“…Previous works on explainability in graph neural networks include gradient-based methods Pope et al [2019], graph decomposition Schnake et al [2021], graph perturbations Ying et al [2019], Luo et al [2020], and a local approximation using simpler models Huang et al [2020].…”
Section: Related Work
confidence: 99%
“…We now test the performance of different explanation methods using an input perturbation scheme in which the most or least relevant input nodes are considered Schnake et al [2021]. Two different settings are considered for the Graph Transformer experiments.…”
Section: Quantitative Evaluation
confidence: 99%
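The input perturbation scheme referenced in the quote above can be sketched generically (it is analogous to pixel-flipping for image classifiers). In the minimal Python sketch below, the callable `predict`, the zero-masking strategy, and the function name are assumptions for illustration, not the exact protocol of Schnake et al [2021].

```python
import numpy as np

def perturbation_curve(predict, X, A, node_relevance, most_relevant_first=True):
    """Node-perturbation evaluation sketch: remove nodes in order of
    relevance and record how the model output for a fixed target degrades.

    predict(X, A) -> float is any graph model wrapped as a callable,
    e.g. a hypothetical linear GNN:
    predict = lambda X, A: (A @ (A @ X @ W1) @ W2)[target_node, target_class]
    """
    order = np.argsort(node_relevance)      # least relevant first
    if most_relevant_first:
        order = order[::-1]
    scores = [predict(X, A)]                # unperturbed prediction
    Xp, Ap = X.copy(), A.copy()
    for node in order:
        Xp[node] = 0.0                      # zero out the node's features
        Ap[node, :] = 0.0                   # and detach it from the graph
        Ap[:, node] = 0.0
        scores.append(predict(Xp, Ap))
    return np.asarray(scores)
```

Removing the most relevant nodes first should make the curve drop steeply, while removing the least relevant first should barely change it; the gap between the two curves quantifies explanation quality.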