2022
DOI: 10.48550/arxiv.2205.13733
Preprint

On Consistency in Graph Neural Network Interpretation

Abstract: Uncovering the rationales behind the predictions of graph neural networks (GNNs) has received increasing attention in recent years. Instance-level GNN explanation aims to discover the critical input elements, such as nodes or edges, that a target GNN relies upon for making predictions. These identified sub-structures can provide interpretations of the GNN's behavior. Although various algorithms have been proposed, most of them formalize this task as searching for the minimal subgraph that preserves the original prediction. An inductive…
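
The "minimal subgraph that preserves the original prediction" formulation mentioned in the abstract is commonly relaxed into learning a soft edge mask with a prediction-fidelity term plus a sparsity penalty. The sketch below illustrates that generic relaxation on a toy graph; it is not this paper's algorithm, and the tiny model, loss weights, and 0.5 threshold are assumptions chosen only for the example.

# Minimal sketch (assumed, illustrative): edge-mask relaxation of the
# "smallest subgraph preserving the original prediction" objective.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy undirected graph: 6 nodes, dense adjacency matrix and random node features.
adj = torch.tensor([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=torch.float)
x = torch.randn(6, 4)

class TinyGCN(torch.nn.Module):
    """Two-layer GCN-style model for graph classification (mean readout)."""
    def __init__(self, in_dim=4, hid=8, classes=2):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid)
        self.w2 = torch.nn.Linear(hid, classes)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0))   # add self-loops
        a = a / a.sum(dim=1, keepdim=True) # row-normalized propagation
        h = torch.relu(self.w1(a @ x))
        h = self.w2(a @ h)
        return h.mean(dim=0)               # graph-level logits

model = TinyGCN()
model.eval()
with torch.no_grad():
    target = model(x, adj).softmax(-1)     # original prediction to preserve

# Learn a soft mask over existing edges; sigmoid keeps entries in (0, 1).
mask_logits = torch.nn.Parameter(torch.zeros_like(adj))
opt = torch.optim.Adam([mask_logits], lr=0.05)

for step in range(200):
    mask = torch.sigmoid(mask_logits) * adj                    # only real edges survive
    pred = model(x, mask).softmax(-1)
    fidelity = F.kl_div(pred.log(), target, reduction="sum")   # stay close to original prediction
    sparsity = mask.sum() / adj.sum()                          # prefer a small subgraph
    loss = fidelity + 0.1 * sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()

kept = (torch.sigmoid(mask_logits) * adj > 0.5).nonzero()
print("edges kept as the explanation subgraph:", kept.tolist())

Edges whose mask values survive the threshold form the explanation subgraph; the trade-off between the fidelity and sparsity terms controls how minimal that subgraph is.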

Citations: Cited by 3 publications (3 citation statements)
References: 17 publications
“…Additionally, works like Factorizable Graphs [46,60] and Decoupled Graphs [55] propose methods to unveil latent groups of nodes or edges and convey messages on disentangled graphs. Recent investigations have also delved into the trustworthiness [12,38,39,52] and interpretability [11,62,64] of GNNs. More recent progress on GNNs can be found in these surveys [25,71].…”
Section: Related Work 2.1 Graph Neural Network
Mentioning, confidence: 99%
“…Second, it increases the models' transparency to enable trusted applications in decision-critical fields sensitive to fairness, privacy, and safety challenges, such as healthcare and drug discovery [11]. Thus, studying the explainability of GNNs is attracting increasing attention and many efforts have been made [12], [13], [14].…”
Section: Introduction
Mentioning, confidence: 99%
“…This occurs when the statistical characteristics of newly incoming data differ from those observed by the model in a dynamically changing environment [3,13,17,21,32]. Generally, the shift can happen without any precursor and remain unknown to users, causing dramatic personal injury in systems like self-driving [8], robotics [24,31], and sleep tracking [22], and irreparable economic damage to financial trading algorithms [16,20,22,27]. At any moment, the model is expected to 1) provide early warnings when the data distribution changes and 2) make accurate predictions by adapting to new data.…”
Section: Introduction
Mentioning, confidence: 99%