Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining 2023
DOI: 10.1145/3539597.3570378
Cooperative Explanations of Graph Neural Networks

Cited by 13 publications (12 citation statements)
References 16 publications
“…The existing GIB framework could work for simple synthetic datasets by relying on the implicit knowledge associated with the class and assuming a large decision margin between the two or more classes. However, in more practical scenarios like MUTAG, the existing approximation may be heavily affected by the distribution shifting problem [11,26].…”
Section: Generalized GIB
confidence: 99%
“…Examples of such methods include GNNExplainer [50], which determines the importance of nodes and edges through perturbation, and PGExplainer [24], which trains a graph generator to incorporate global information. Recent studies in the field [11,30] also contribute to the development of these methods. Post-hoc explainability methods can be classified under a label-preserving framework, where the explanation is a substructure of the original graph and preserves the information about the predicted label.…”
Section: Introduction
confidence: 99%
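The excerpt above describes perturbation-based explainers such as GNNExplainer, which estimate the importance of nodes and edges by perturbing the input graph and observing the change in the model's output. A minimal sketch of that idea, using a stand-in scoring function in place of a trained GNN (the function and names are illustrative assumptions, not the paper's implementation):

```python
# Hedged sketch of perturbation-based edge importance, in the spirit of
# GNNExplainer. `toy_gnn_score` is a stand-in "model" for illustration;
# a real explainer would perturb inputs to a trained GNN instead.

def toy_gnn_score(edges):
    """Stand-in model: score grows with the connectivity of node 0."""
    return sum(1.0 for u, v in edges if 0 in (u, v))

def edge_importance(edges, score_fn):
    """Importance of each edge = drop in score when that edge is removed."""
    base = score_fn(edges)
    importance = {}
    for e in edges:
        perturbed = [x for x in edges if x != e]
        importance[e] = base - score_fn(perturbed)
    return importance

graph = [(0, 1), (0, 2), (1, 2), (2, 3)]
scores = edge_importance(graph, toy_gnn_score)
# Edges touching node 0 get positive importance; edge (2, 3) gets zero.
```

The explanation subgraph is then the set of highest-importance edges; PGExplainer differs by training a parameterized generator to predict such masks across graphs rather than perturbing each instance separately.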
“…Several pioneering studies have attempted to address this distributional challenge (Fang et al, 2023d;Zhang et al, 2023b;a). For example, CGE regards the GNN model as a transparent, fully accessible system.…”
Section: Preprint (A) Explanation Process (B) OOD Problem
confidence: 99%
“…For example, CGE regards the GNN model as a transparent, fully accessible system. It considers the GNN model as a teacher network and employs an additional "student" network to predict the labels of explanation subgraphs (Fang et al, 2023d). As another example, MixupExplainer (Zhang et al, 2023b) generates a mixed graph for the explanation by blending it with a non-explanatory subgraph from a different input graph.…”
Section: Preprint (A) Explanation Process (B) OOD Problem
confidence: 99%
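The MixupExplainer idea quoted above, blending an explanation subgraph with a non-explanatory subgraph from a different input graph so the evaluated graph stays closer to the training distribution, can be sketched as follows. The function name and the simple edge-union blending rule are assumptions for illustration, not the paper's exact formulation:

```python
# Hedged sketch of MixupExplainer-style graph blending: combine the
# candidate explanation with the NON-explanatory part of another input
# graph. All names and the blending rule are illustrative assumptions.

def mixup_graph(explanation_edges, other_graph_edges, other_explanation_edges):
    """Keep the explanation; pad it with another graph's non-explanatory edges."""
    non_explanatory = [e for e in other_graph_edges
                       if e not in set(other_explanation_edges)]
    return list(explanation_edges) + non_explanatory

exp = [(0, 1), (1, 2)]            # candidate explanation from graph A
other = [(0, 3), (3, 4), (4, 5)]  # a different input graph B
other_exp = [(0, 3)]              # B's own explanatory part, excluded
mixed = mixup_graph(exp, other, other_exp)
# `mixed` holds A's explanation plus B's label-independent structure.
```

Evaluating the GNN on the mixed graph rather than the bare subgraph is what mitigates the out-of-distribution (OOD) problem the citing paper describes.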