2020
DOI: 10.1145/3442322.3442324

The Expressive Power of Graph Neural Networks as a Query Language

Abstract: In this paper we survey our recent results characterizing various graph neural network (GNN) architectures in terms of their ability to classify nodes over graphs, for classifiers based on unary logical formulas, or queries. We focus on the language FOC2, a well-studied fragment of FO. This choice is motivated by the fact that FOC2 is related to the Weisfeiler-Lehman (WL) test for checking graph isomorphism, which has the same ability as GNNs for distinguishing nodes on graphs. We unveil the exact relationship …
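As a point of reference (the particular formula below is an illustrative assumption, not one taken from the paper), FOC2 formulas use only two variables but allow counting quantifiers; a unary FOC2 classifier might, for example, select the red nodes that have at least two blue neighbours:

    \varphi(x) = \mathrm{Red}(x) \wedge \exists^{\geq 2} y \, \bigl( E(x,y) \wedge \mathrm{Blue}(y) \bigr)

Node classifiers of this shape are among those whose relationship to GNN architectures the surveyed results pin down.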

Cited by 35 publications (70 citation statements); references 16 publications.

“…A notable exception is the work by Barceló et al [7,6], which inspired our present work. Barceló et al were the first to consider expressiveness of GNNs uniformly over all graphs (note, however, the earlier work of Hella et al [19] on similar message-passing distributed computation models).…”
Section: Related Work (mentioning, confidence: 91%)
“…In the previous section, we considered this problem for regular expressions, but such patterns can be specified in other frameworks, ranging from logic-based declarative languages [17,45] to more procedural frameworks such as graph neural networks [58,72]. The goal of this part of the document is to show a recently established tight connection between these apparently different frameworks [18,60,84], which has interesting corollaries in terms of the use of declarative formalisms to specify patterns, versus the use of procedural formalisms to efficiently evaluate them.…”
Section: Local Properties (mentioning, confidence: 99%)
“…The architecture for a graph neural network G defined in this section, which is referred to as an aggregate-combine graph neural network [18], turns G into a classifier [58,72]. But G can also be considered as a unary query that is true for a node 𝑢 of a vector-labeled graph V if, and only if, the output of G is true for 𝑢, that is, G(𝑢, V) = true.…”
Section: Local Properties (mentioning, confidence: 99%)
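To make the quoted description concrete, here is a minimal sketch of an aggregate-combine GNN used as a node classifier, assuming a plain NumPy setting; the function name ac_gnn_classify, the layer shapes, and the truncated-ReLU nonlinearity are illustrative choices, not taken from the cited works.

    # Minimal aggregate-combine GNN sketch (illustrative, not the cited works' code).
    import numpy as np

    def ac_gnn_classify(adj, feats, layers, readout_w, readout_b):
        """adj: (n, n) 0/1 adjacency matrix; feats: (n, d0) node features.
        layers: list of (C, A, b) combine/aggregate weight matrices and bias per layer.
        Returns a boolean vector giving the class assigned to each node."""
        x = feats
        for C, A, b in layers:
            agg = adj @ x                               # sum the feature vectors of each node's neighbours
            x = np.clip(x @ C + agg @ A + b, 0.0, 1.0)  # combine own and aggregated features, truncated ReLU
        scores = x @ readout_w + readout_b              # per-node readout
        return scores.squeeze(-1) > 0.0                 # true/false answer for every node

Each layer updates a node's feature vector from its previous value and the sum of its neighbours' vectors, which is the aggregate-combine scheme referred to in the quote; the final per-node readout turns the last feature vector into a true/false answer, i.e. the GNN evaluated as a unary query over the graph.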