2019 53rd Asilomar Conference on Signals, Systems, and Computers
DOI: 10.1109/ieeeconf44664.2019.9048796
Pooling in Graph Convolutional Neural Networks

Abstract: Graph convolutional neural networks (GCNNs) are a powerful extension of deep learning techniques to graph-structured data problems. We empirically evaluate several pooling methods for GCNNs, and combinations of those graph pooling methods with three different architectures: GCN, TAGCN, and GraphSAGE. We confirm that graph pooling, especially DiffPool, improves classification accuracy on popular graph classification datasets and find that, on average, TAGCN achieves comparable or better accuracy than GCN and Gra…
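The abstract refers to pooling methods that reduce a graph to a fixed-size representation for classification. As a minimal sketch (not the paper's DiffPool method), a simple mean readout averages node embeddings into one graph-level vector; the feature matrix below is hypothetical:

```python
import numpy as np

# Hypothetical node feature matrix: a 4-node graph with 3 features per node.
X = np.array([
    [1.0, 2.0, 0.0],
    [3.0, 0.0, 1.0],
    [0.0, 1.0, 2.0],
    [2.0, 1.0, 1.0],
])

def mean_readout(node_features: np.ndarray) -> np.ndarray:
    """Collapse a variable-size set of node embeddings into one
    fixed-size graph embedding by averaging over the node axis."""
    return node_features.mean(axis=0)

graph_embedding = mean_readout(X)
print(graph_embedding)  # [1.5 1.  1. ]
```

Unlike this flat readout, hierarchical methods such as DiffPool learn a soft cluster assignment and coarsen the graph in stages, which is what the paper evaluates.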

Cited by 16 publications (8 citation statements) | References 12 publications
“…This activation function rectifies input values that are less than zero by forcing them to zero [23]. The following two layers downsample the generated feature map using mean-pooling [23,24] on segments of length two within the feature map.…”
Section: Categorical Vector With Convolutional Neural Network
Confidence: 99%
“…Pooling is used to gradually decrease the dimensions of the feature representation. Hence, it reduces computational cost by shrinking memory use and the number of parameters [23,24]. Mean-pooling calculates the average of the feature values in the feature map [24].…”
Section: Categorical Vector With Convolutional Neural Network
Confidence: 99%
“…Pooling is used to gradually decrease the dimensions of the feature representation. Hence, it reduces computational cost by shrinking memory use and the number of parameters [28,29]. Average pooling and max-pooling are the most popular pooling techniques. While average pooling calculates the average of the feature values in the feature map [29], max-pooling instead returns the maximum of those values.…”
Section: Convolutional Neural Network
Confidence: 99%
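The citing papers describe pooling over non-overlapping segments of length two, contrasting average and max variants. A minimal sketch of that operation on a 1-D feature map (the values are made up for illustration):

```python
import numpy as np

def pool_1d(feature_map: np.ndarray, mode: str = "mean") -> np.ndarray:
    """Pool non-overlapping segments of length two, halving the feature map.
    Assumes the feature map has an even number of elements."""
    pairs = feature_map.reshape(-1, 2)
    return pairs.mean(axis=1) if mode == "mean" else pairs.max(axis=1)

fm = np.array([1.0, 3.0, 2.0, 8.0, 5.0, 5.0])
print(pool_1d(fm, "mean"))  # [2. 5. 5.]
print(pool_1d(fm, "max"))   # [3. 8. 5.]
```

Both variants halve the representation, which is the memory and parameter saving the statements refer to; max-pooling keeps the strongest activation in each segment, while average pooling smooths over it.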
“…This layer uses 64 filters with the ReLU (rectified linear unit) activation function [27], which is almost linear and therefore retains many of the properties that make linear models easy to optimize with gradient-descent methods. The following two layers downsample the generated feature map using average pooling [28,29] on segments of length two within the feature map. The next layer is a standard feed-forward layer in a deep learning architecture, which transforms the generated feature vector using ReLU activation [27]. The final layer then uses softmax activation [27] to generate a vector.…”
Section: Convolutional Neural Network
Confidence: 99%