k-hop graph neural networks
2020
DOI: 10.1016/j.neunet.2020.07.008

Cited by 73 publications (73 citation statements)
References 17 publications

“…(2) The state-of-the-art GNNs: Graph Convolutional Network (GCN) [37], Deep Graph CNN (DGCNN) [38], Graph Isomorphism Network (GIN) [3], Random Walk Graph Neural Network (RW-GNN) [39], Graph Attention Network (GAT) [2], Motif-based Attentional Graph Convolutional Neural Network (MA-GCNN) [40].…”
Section: B. Baselines for Comparison (mentioning)
confidence: 99%
“…For a fair comparison, all of the methods are run on a system with an Intel Xeon CPU E3-1270 v5 processor. Because RW-GNN cannot accept NCI1 as input [39], the running time of RW-GNN on the NCI1 dataset is left blank. Obviously, TL-GNN requires more time than GIN because of the extra computation of its subgraph-level and attention mechanisms.…”
Section: Comparison of Computational Complexity (mentioning)
confidence: 99%
“…While GCKN utilizes the local walk and path only starting from the central node, our model considers any walks (up to a maximal length) within the subgraph around the central node, and can thus explore more topological structures. Another recent work by Nikolentzos and Vazirgiannis (2020) focused on improving model transparency by calculating the graph kernels between trainable hidden graphs and the entire graph. However, the method only supports a single-layer model and lacks theoretical interpretation.…”
Section: Combination of Graph Kernels and GNNs (mentioning)
confidence: 99%
“…Du et al. (2019) followed the opposite direction and proposed a new graph kernel which corresponds to infinitely wide multi-layer GNNs trained by gradient descent, while Al-Rfou et al. (2019) proposed an unsupervised method for learning graph representations by comparing the input graphs against a set of source graphs. Finally, Nikolentzos and Vazirgiannis (2020) proposed a neural network model whose first layer consists of a number of latent graphs which are compared against the input graphs using a random walk kernel. The emerging kernel values are fed into a fully-connected neural network which acts as the classifier or regressor.…”
Section: Other Models (mentioning)
confidence: 99%
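
The last two excerpts describe the Nikolentzos and Vazirgiannis (2020) architecture concretely enough to sketch: trainable hidden graphs are compared against the input graph with a random walk kernel, and the resulting kernel values feed a fully-connected classifier. The following is a minimal PyTorch sketch of that description, not the authors' implementation: the class names, the sigmoid parameterization of the hidden-graph adjacencies, the three-step walk length, and the unweighted sum over walk lengths are all illustrative assumptions.

import torch
import torch.nn as nn

class RandomWalkKernelLayer(nn.Module):
    """Compares an input graph against a set of trainable "hidden graphs"
    with a P-step random walk kernel. A sketch: the sigmoid
    parameterization, hidden-graph size, and uniform walk weights are
    assumptions, not the paper's exact formulation."""

    def __init__(self, n_hidden_graphs=8, hidden_graph_size=6, n_steps=3):
        super().__init__()
        self.n_steps = n_steps
        # One trainable adjacency matrix per hidden ("latent") graph.
        self.hidden_adj = nn.Parameter(
            torch.randn(n_hidden_graphs, hidden_graph_size, hidden_graph_size))

    def forward(self, adj):
        # adj: (n, n) float adjacency matrix of a single input graph.
        W = torch.sigmoid(self.hidden_adj)   # soft edge weights in (0, 1)
        W = 0.5 * (W + W.transpose(1, 2))    # keep hidden graphs undirected
        kernel_values = []
        for i in range(W.size(0)):
            # Adjacency of the direct product graph; its walks correspond
            # to common walks in the hidden graph and the input graph.
            Ax = torch.kron(W[i], adj)
            x = torch.ones(Ax.size(0), device=adj.device)
            k = adj.new_zeros(())
            for _ in range(self.n_steps):
                x = Ax @ x                   # advance all walks one step
                k = k + x.sum()              # 1^T Ax^p 1 = number of length-p walks
            kernel_values.append(k)
        return torch.stack(kernel_values)    # one kernel value per hidden graph

class RWKernelClassifier(nn.Module):
    """Kernel values feed a fully-connected classifier, as the quote describes."""

    def __init__(self, n_hidden_graphs=8, n_classes=2):
        super().__init__()
        self.kernel = RandomWalkKernelLayer(n_hidden_graphs)
        self.mlp = nn.Sequential(
            nn.Linear(n_hidden_graphs, 16),
            nn.ReLU(),
            nn.Linear(16, n_classes),
        )

    def forward(self, adj):
        return self.mlp(self.kernel(adj))

# Example: classify a 4-node path graph.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
logits = RWKernelClassifier()(adj)           # shape: (n_classes,)

Because the kernel is computed with differentiable operations on the hidden adjacencies (sigmoid, Kronecker product, matrix-vector products), the latent graphs can be learned end to end together with the classifier, which is what makes the first layer "trainable" in the sense the excerpts describe.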