2020
DOI: 10.48550/arXiv.2006.09252
Preprint

Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting

Abstract: While Graph Neural Networks (GNNs) have achieved remarkable results in a variety of applications, recent studies have exposed important shortcomings in their ability to capture the structure of the underlying graph. It has been shown that the expressive power of standard GNNs is bounded by the Weisfeiler-Lehman (WL) graph isomorphism test, from which they inherit proven limitations such as the inability to detect and count graph substructures. On the other hand, there is significant empirical evidence, e.g. in netw…
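Since the abstract hinges on the WL bound, a minimal sketch of the 1-WL colour-refinement test may help make it concrete. This is an illustrative implementation, not code from the paper; the graph encoding (Python adjacency dicts) and the function names are assumptions.

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """One-dimensional Weisfeiler-Lehman (1-WL) colour refinement.

    `adj` is an adjacency dict {node: [neighbours]}. Each round, a
    node's new colour is a hash of its current colour together with
    the multiset of its neighbours' colours.
    """
    colors = {v: 0 for v in adj}  # start from a uniform colouring
    for _ in range(rounds):
        colors = {
            v: hash((colors[v],
                     tuple(sorted(Counter(colors[u] for u in adj[v]).items()))))
            for v in adj
        }
    return colors

def wl_indistinguishable(adj1, adj2, rounds=3):
    """Return True if 1-WL fails to separate the two graphs.

    Differing colour histograms prove non-isomorphism; matching
    histograms are inconclusive, which is the source of the
    expressivity bound the abstract refers to.
    """
    hist1 = Counter(wl_colors(adj1, rounds).values())
    hist2 = Counter(wl_colors(adj2, rounds).values())
    return hist1 == hist2

# Classic failure case: a 6-cycle vs. two disjoint triangles. Both are
# 2-regular, so 1-WL assigns every node the same colour in both graphs,
# yet their triangle counts differ (0 vs. 2).
six_cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_indistinguishable(six_cycle, two_triangles))  # True
```

The final print shows exactly the limitation the abstract describes: the histograms match, so 1-WL (and hence any standard message-passing GNN bounded by it) cannot separate the two graphs, even though one contains triangles and the other does not.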

Cited by 38 publications (85 citation statements); references 48 publications.

Citation statements (ordered by relevance):
“…This demonstrates the benefit (in one example) of FA over GA in view of Theorem 4 and the approximation in equation 8. Of course, other, more powerful frames can be chosen, e.g., using higher-order WL (Morris et al., 2019) or substructure counting (Bouritsas et al., 2020), which will further improve the approximation of equation 8.…”
Section: Point Clouds: Normal Estimation (mentioning)
confidence: 99%
“…Although our method and DGN [4] both require eigendecomposition in the preprocessing step, non-spatial GN significantly outperforms DGN on ZINC and MolPCBA.

ZINC (model / #params / MAE, lower is better):
GIN [75]              509,549   0.526±0.051
GraphSage [23]        505,341   0.398±0.002
GAT [66]              531,345   0.384±0.007
GCN [35]              505,079   0.367±0.011
GatedGCN-PE [8]       505,011   0.214±0.006
MPNN (sum) [22]       480,805   0.145±0.007
PNA [16]              387,155   0.142±0.010
DGN [4]               -         0.168±0.003
GSN [7]               523,201   0.101±0.010
GT [19]               588,929   0.226±0.014
SAN [39]              508,577   0.139±0.006
GraphormerSLIM [79]   …

MolPCBA (model / #params / AP %, higher is better):
GCN [35]              0.56M   20.20±0.24
GIN [75]              1.92M   22.66±0.28
GCN-VN [35]           2.02M   24.24±0.34
GIN-VN [75]           3.37M   27.03±0.23
GCN-VN+FLAG [38]      2.02M   24.83±0.37
GIN-VN+FLAG [38]      3.37M   28.34±0.38
DeeperG-VN+FLAG [43]  5.55M   28.42±0.43
PNA [16]              6.55M   28.38±0.35
DGN [4]               6.73M   28.85±0.30
GINE-VN [10]          6.15M   29.17±0.15
GINE-APPNP [10]       6.15M   29.79±0.30
PHC-GNN [40]          1.69M   29.47±0.26…”
Section: Results (mentioning)
confidence: 99%
“…The class of models based on the WL test is not in general locally permutation equivariant, in that these models still use message passing with a permutation-invariant update function. Despite this, many of them inject permutation-equivariant information into the feature space, which improves their expressivity (Bouritsas et al., 2020; Morris et al., 2019a; Bodnar et al., 2021a;b). The information injected into the feature space is predetermined in these models by a choice of which structural or topological information to use, whereas our model uses representations of the permutation group, making it a very general model that still guarantees expressivity.…”
Section: Local Equivariant Graph Network (mentioning)
confidence: 99%
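The "injected structural information" that this statement and the one below describe can be made concrete with a small sketch of GSN-style feature augmentation: counting a fixed substructure per node and concatenating the counts to the input features before any (permutation-invariant) message passing. This is a minimal illustration, assuming a dense NumPy adjacency matrix and triangles as the chosen substructure; the paper itself counts general subgraph isomorphism orbits.

```python
import numpy as np

def triangle_counts(A):
    """Per-node triangle counts from a dense adjacency matrix A.

    diag(A^3) // 2 gives, for each node, the number of triangles it
    participates in (each triangle is walked in two directions).
    """
    A = np.asarray(A, dtype=np.int64)
    return np.diagonal(A @ A @ A) // 2

def augment_features(X, A):
    """Append a structural-count column to the node feature matrix X.

    Message passing over the result stays permutation invariant, while
    the injected counts carry substructure information that
    1-WL-bounded GNNs cannot recover on their own.
    """
    counts = triangle_counts(A).reshape(-1, 1).astype(X.dtype)
    return np.concatenate([X, counts], axis=1)

# Toy usage: a triangle (nodes 0-2) with a pendant node (3).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
X = np.ones((4, 2))                   # dummy 2-d node features
print(triangle_counts(A))             # [1 1 1 0]
print(augment_features(X, A).shape)   # (4, 3)
```

Because diag(A^3) transforms consistently under any relabelling of the nodes, the appended counts are permutation equivariant, which is exactly the property these citation statements highlight.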
“…Here a neural network architecture is built based on variants of the WL test. Bouritsas et al. (2020) use a permutation-invariant local update function, but incorporate permutation-equivariant structural information into the feature space. Morris et al. (2019a) build models based on different WL variants that consider local and global connections.…”
Section: Introduction (mentioning)
confidence: 99%