2022
DOI: 10.48550/arxiv.2201.12872
Preprint
Discovering Invariant Rationales for Graph Neural Networks

Abstract: Intrinsic interpretability of graph neural networks (GNNs) is to find a small subset of the input graph's features, the rationale, which guides the model prediction. Unfortunately, the leading rationalization models often rely on data biases, especially shortcut features, to compose rationales and make predictions without probing the critical and causal patterns. Moreover, such data biases easily change outside the training distribution. As a result, these models suffer from a huge drop in interpretability and pre…
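The rationale described in the abstract is typically realized as a small subgraph selected from the input graph. The sketch below is not the authors' implementation; it only illustrates, under assumptions, how a rationale could be read off a learned per-edge score by keeping the top-k edges. The function name, the edge-score tensor, and k are illustrative choices.

```python
# Minimal sketch (not the paper's code): select a rationale subgraph by keeping
# the k highest-scoring edges. The edge scores stand in for whatever learned
# scoring network a rationalization model produces.
import torch

def extract_rationale(edge_index: torch.Tensor,
                      edge_scores: torch.Tensor,
                      k: int) -> torch.Tensor:
    """edge_index: [2, E] connectivity; edge_scores: [E] learned scores.
    Returns the [2, k] connectivity of the selected rationale subgraph."""
    k = min(k, edge_scores.numel())
    top = torch.topk(edge_scores, k).indices   # indices of the k best edges
    return edge_index[:, top]                  # rationale = the selected edges

# Toy usage: a 5-edge ring with hypothetical edge scores.
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 0]])
edge_scores = torch.tensor([0.9, 0.1, 0.8, 0.2, 0.7])
print(extract_rationale(edge_index, edge_scores, k=3))
```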

Cited by 9 publications (21 citation statements)
References 11 publications
“…This illustrates that subgraphs of intermediate size are highly informative substructures for revealing the biological or chemical properties of small molecules. This finding consistently accords with the fact that motifs such as functional groups play a key part in determining molecular attributes [99,86,92]. For Hamiltonian dynamical systems, by contrast, J^(m)_D is mostly concentrated on low-order and middle-order interactions (m/n ≤ 0.6).…”
Section: For Molecular Property Prediction J^(m) (supporting)
confidence: 65%
“…For example, Cranmer et al [15] apply symbolic regression to the components of well-trained GNNs and extract compact closed-form analytical expressions. A more mainstream line of work recognizes an informative yet compressed subgraph within the original graph [99,86,87,92]. Identifying such subgraphs allows GNNs to audit their inner workings and justify their predictions [92], which can shed light on meaningful scientific tasks like protein structure prediction [75].…”
Section: Related Work (mentioning)
confidence: 99%
“…Specifically, inspired by [30], we design three types of base graphs, i.e., Tree, Ladder, and Wheel, and three types of motifs, i.e., Cycle, House, and Grid. With a mix ratio γ, motifs are preferentially attached to particular base graphs.…”
Section: B More Experiments B1 Ability In Avoiding Spurious Explanations (mentioning)
confidence: 99%
“…Graph-SST2 (Yuan et al 2020) and Mol-BACE (Hu et al 2020) serve as the in-domain (ID) benchmark datasets. To evaluate the generalization ability of GNN models, we build out-of-domain (OOD) datasets by following the principle of DIR (Wu et al 2022). Specifically, Spurious-Motif is an artificially generated dataset.…”
Section: Introduction (mentioning)
confidence: 99%
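The two citation statements above describe the Spurious-Motif construction used around DIR: each sample is a base graph (Tree, Ladder, or Wheel) with a label-defining motif (Cycle, House, or Grid) attached, and a mix ratio γ controls how strongly each motif co-occurs with a particular base. The sketch below is one possible way to generate such data with networkx; the generator sizes, the preferred motif-to-base pairing, the single-edge attachment rule, and the γ values for the ID/OOD splits are all illustrative assumptions, not the benchmark's exact recipe.

```python
# Illustrative sketch (assumptions marked): build Spurious-Motif-style graphs by
# attaching a label-defining motif to a base graph; with probability gamma the
# motif lands on its "preferred" base, creating a spurious base-label correlation.
import random
import networkx as nx

BASES = {  # base-graph generators (sizes are arbitrary choices)
    "tree":   lambda: nx.balanced_tree(r=2, h=3),
    "ladder": lambda: nx.ladder_graph(6),
    "wheel":  lambda: nx.wheel_graph(8),
}
MOTIFS = {  # motif generators; the motif type defines the class label
    "cycle": lambda: nx.cycle_graph(5),
    "house": lambda: nx.house_graph(),
    "grid":  lambda: nx.convert_node_labels_to_integers(nx.grid_2d_graph(3, 3)),
}
PREFERRED = {"cycle": "tree", "house": "ladder", "grid": "wheel"}  # assumed pairing

def make_sample(motif_name: str, gamma: float, rng: random.Random):
    """Return (graph, label): a motif attached to a base chosen via the mix ratio."""
    base_name = (PREFERRED[motif_name] if rng.random() < gamma
                 else rng.choice(list(BASES)))
    base, motif = BASES[base_name](), MOTIFS[motif_name]()
    g = nx.disjoint_union(base, motif)  # motif nodes are relabeled after base nodes
    # Attach the motif to the base by a single random edge.
    g.add_edge(rng.randrange(base.number_of_nodes()),
               base.number_of_nodes() + rng.randrange(motif.number_of_nodes()))
    return g, motif_name

rng = random.Random(0)
train = [make_sample(m, gamma=0.9, rng=rng) for m in MOTIFS for _ in range(5)]  # biased (ID)
test  = [make_sample(m, gamma=0.0, rng=rng) for m in MOTIFS for _ in range(5)]  # uniform base (OOD)
```

Varying γ between the training and evaluation splits is what produces the in-domain versus out-of-domain gap discussed in these citation statements: a model that latches onto the base graph as a shortcut degrades once the motif-base correlation is removed.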