2021
DOI: 10.23915/distill.00033
A Gentle Introduction to Graph Neural Networks

Cited by 155 publications (198 citation statements)
References 8 publications
“…We perform experiments with all combinations of the aforementioned four datasets and three explainability techniques to quantify the explanation-sharpening ability of SEEN. Since the ground-truth explanation (a binary label for whether each node is motif-participating) is available for the datasets due to their synthetic nature, we measure the area under the receiver operating characteristic curve (AUC-ROC) between the ground truth and the obtained explanations, similar to Ying et al. [12] and Sanchez-Lengeling et al. [33]. Based on this measure, we calculate the difference in explanation accuracy with and without SEEN on each combination.…”
Section: Discussion
confidence: 99%
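The evaluation described above scores an explanation by how well its per-node importance values rank motif nodes above background nodes. As a hedged sketch (the labels and scores below are made-up stand-ins, not data from the cited paper), AUC-ROC can be computed as the probability that a randomly chosen positive node outranks a randomly chosen negative one:

```python
def auc_roc(labels, scores):
    """AUC-ROC as the probability that a random positive outranks a
    random negative (ties count as half). Equivalent to the area
    under the ROC curve for binary labels."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 1 = node participates in the ground-truth motif
ground_truth = [1, 1, 0, 0, 1, 0]
# Hypothetical per-node importance scores from an explainability method
explanation_scores = [0.9, 0.3, 0.4, 0.1, 0.8, 0.2]

print(auc_roc(ground_truth, explanation_scores))  # 8 of 9 pairs ranked correctly
```

Comparing this score for explanations produced with and without SEEN gives the accuracy difference the quoted passage refers to.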
“…To evaluate the explanation-sharpening performance of SEEN, we adopt the following three GNN explainability techniques, which are compatible with explaining node classification tasks: sensitivity analysis (SA) [9,11,19], Grad*Input [27,33], and GradCAM [9,21,33]. Explainability methods based on gradients and features are particularly well-suited for SEEN due to their fast, no-training-required characteristics.…”
Section: Explainability Methods
confidence: 99%
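Of the techniques listed, Grad*Input is the simplest to illustrate: each feature's attribution is its value times the gradient of the model output with respect to it. A minimal sketch, assuming a hypothetical linear scorer f(x) = w·x whose gradient is exactly w (real GNN explainers obtain the gradient by autodiff through the trained model; the weights and features here are invented for illustration):

```python
import numpy as np

# Hypothetical model weights and node feature vector (not from the paper)
w = np.array([0.5, -1.0, 2.0])
x = np.array([2.0, 1.0, 0.5])

# For the linear scorer f(x) = w @ x, the gradient df/dx is simply w
grad = w

# Grad*Input attribution: elementwise product of gradient and input
attribution = grad * x
print(attribution.tolist())  # [1.0, -1.0, 1.0]
```

Because it needs only one backward pass and no extra training, this is the "fast, no-training-required" property the quoted passage highlights.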
“…We aim to explain the decision of an already trained GNN model (post-hoc interpretability) by attributing the reason for the underlying decision to either a subset of features or neighboring nodes, or both. Current approaches to explain GNNs [21,26,31] produce feature attributions in a post-hoc manner, but they suffer from some fundamental limitations.…”
confidence: 99%