2018
DOI: 10.1609/aaai.v32i1.11782

An End-to-End Deep Learning Architecture for Graph Classification

Abstract: Neural networks are typically designed to deal with data in tensor forms. In this paper, we propose a novel neural network architecture accepting graphs of arbitrary structure. Given a dataset containing graphs in the form of (G, y), where G is a graph and y is its class, we aim to develop neural networks that read the graphs directly and learn a classification function. There are two main challenges: 1) how to extract useful features characterizing the rich information encoded in a graph for classification purp…
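As a rough illustration of this setting (a minimal sketch, not the architecture proposed in the paper; the row normalisation, tanh nonlinearity, sum readout, and layer sizes are assumptions), the following Python forward pass reads an adjacency matrix and node features directly and returns class probabilities for the whole graph:

import numpy as np

rng = np.random.default_rng(0)

def graph_conv(A, X, W):
    """One propagation step: aggregate neighbour features, then a linear map + tanh."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))     # row-normalise the propagation
    return np.tanh(D_inv @ A_hat @ X @ W)

def classify_graph(A, X, conv_weights, W_out):
    """Stack graph convolutions, pool node states into one graph vector, classify."""
    H = X
    for W in conv_weights:
        H = graph_conv(A, H, W)
    g = H.sum(axis=0)                            # simple sum readout over all nodes
    logits = g @ W_out
    return np.exp(logits) / np.exp(logits).sum() # softmax over classes

# Toy graph: 4 nodes, 3-dimensional node features, 2 classes, random untrained weights.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
conv_weights = [rng.normal(size=(3, 8)), rng.normal(size=(8, 8))]   # two conv layers
W_out = rng.normal(size=(8, 2))
print(classify_graph(A, X, conv_weights, W_out))  # class probabilities for this graph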

Cited by 754 publications (161 citation statements)
References 15 publications (24 reference statements)
“…We use the sum-pool operation, which involves adding the node representations. Following a technique similar to [22], we preserve the initial and intermediate node representations along with the final representations. The pooling operation involves summing the node representations at every convolution step, including the initial embeddings.…”
Section: ∑ N(i) (mentioning)
confidence: 99%
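A minimal sketch of the readout this excerpt describes (the exact combination rule is an assumption; here the per-stage graph vectors are concatenated): node representations are sum-pooled from the initial embeddings and again after every convolution step.

import numpy as np

def staged_sum_readout(A, X, conv_weights):
    """Sum-pool node states at the initial stage and after each convolution step."""
    A_hat = A + np.eye(A.shape[0])               # self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))     # row-normalised propagation
    stages = [X.sum(axis=0)]                     # pool the initial embeddings
    H = X
    for W in conv_weights:
        H = np.tanh(D_inv @ A_hat @ H @ W)       # one convolution step
        stages.append(H.sum(axis=0))             # pool this intermediate stage too
    return np.concatenate(stages)                # graph-level representation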
“…We arrived at this configuration after a parametric sweep of convolutional layers ranging from 0 to 8. Using the technique proposed in [22], we preserve the initial and intermediate embeddings for graph-level readout. To generate a representation for the entire graph, we sum across stages after every convolution and use a linear layer to obtain the runtime.…”
(mentioning)
confidence: 99%
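Reusing the staged_sum_readout helper from the sketch above, a hypothetical version of that sweep (random weights and a toy graph, purely for illustration) varies the number of convolution layers from 0 to 8 and applies a linear layer to the pooled representation to produce a scalar runtime estimate:

import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
X = rng.normal(size=(3, 4))

for depth in range(9):                            # sweep 0..8 convolution layers
    conv_weights = [rng.normal(size=(4, 4)) for _ in range(depth)]
    g = staged_sum_readout(A, X, conv_weights)    # graph vector summed across stages
    w_lin = rng.normal(size=g.shape[0])           # linear head for runtime regression
    print(depth, float(g @ w_lin))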
“…There are many other kernels, including the shortest-path (Borgwardt and Kriegel 2005), random-walk (Vishwanathan et al. 2010; Sugiyama and Borgwardt 2015; Zhang et al. 2018b), and spectrum-based (Kondor and Borgwardt 2008; Kondor et al. 2009; Kondor and Pan 2016; Verma and Zhang 2017) approaches. The Weisfeiler-Lehman (WL) kernel (Shervashidze et al. 2011), which is based on the graph isomorphism test, is a popular and empirically successful kernel that has been employed in many studies (Yanardag and Vishwanathan 2015; Niepert et al. 2016; Narayanan et al. 2017; Zhang et al. 2018a). Again, all such approaches are unsupervised, and it is difficult to interpret results from the perspective of sub-structures of a graph.…”
Section: Related Work (mentioning)
confidence: 99%
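For context, one relabelling round of the Weisfeiler-Lehman test that the WL kernel builds on can be sketched as follows (the dictionary data layout and the use of Python's built-in hash as the label-compression function are illustrative choices):

def wl_iteration(labels, adj_list):
    """One WL round: each node's new label encodes its old label together with the
    sorted multiset of its neighbours' labels."""
    new_labels = {}
    for v, neighbours in adj_list.items():
        multiset = tuple(sorted(labels[u] for u in neighbours))
        new_labels[v] = hash((labels[v], multiset))   # compressed relabelling
    return new_labels

# Toy path graph 0-1-2 with identical starting labels: after one round the middle
# node is distinguished from the two endpoints.
adj = {0: [1], 1: [0, 2], 2: [1]}
print(wl_iteration({0: "a", 1: "a", 2: "a"}, adj))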
“…The deep graph kernel (Yanardag and Vishwanathan 2015) incorporates neural language modeling, where decomposed sub-structures of a graph are regarded as sentences. PATCHY-SAN (Niepert et al. 2016) and DGCNN (Zhang et al. 2018a) convert a graph to a tensor using the WL kernel and convolve it. Several other studies have also combined popular convolution techniques with graph data (Tixier et al. 2018; Atwood and Towsley 2016; Simonovsky and Komodakis 2017).…”
Section: Related Work (mentioning)
confidence: 99%