2018 52nd Asilomar Conference on Signals, Systems, and Computers
DOI: 10.1109/acssc.2018.8645378
Classification with Vertex-Based Graph Convolutional Neural Networks

Cited by 8 publications (5 citation statements)
References 5 publications
“…This is consistent with the concept of convolution in graph signal processing [2]. TAGCN designs a set of fixed-size learnable filters whose topologies are adaptive to the topology of the graph as the filters scan the graph to perform convolution, see also [8], [9]. Other implementations, such as GraphSAGE [10] and graph attention networks (GATs) [11], are also defined directly in the vertex domain of the graph and apply a learned, convolution-like aggregation function.…”
Section: Introduction (supporting)
confidence: 68%
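The vertex-domain view described in this excerpt (a learned, convolution-like aggregation over each vertex's neighborhood) can be illustrated with a minimal mean-aggregation step in the spirit of GraphSAGE. The graph, features, and function name below are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def mean_aggregate(A, X):
    """Average each vertex's neighbor features (including itself)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # neighborhood sizes
    return A_hat @ X / deg                  # mean over each neighborhood

# 4-vertex path graph 0-1-2-3 (hypothetical example data)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)  # 2 features per vertex

H = mean_aggregate(A, X)
print(H.shape)  # (4, 2): one aggregated feature vector per vertex
```

In practice the aggregated features are passed through a learned linear map and nonlinearity; the point here is only that the operation is defined directly on vertex neighborhoods, with no spectral transform.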
See 1 more Smart Citation
“…This is consistent with the concept of convolution in graph signal processing [2]. TAGCN designs a set of fixed-size learnable filters whose topologies are adaptive to the topology of the graph as the filters scan the graph to perform convolution, see also [8], [9]. Other implementations, such as GraphSAGE [10] and graph attention networks (GATs) [11], are also defined directly in the vertex domain of the graph and apply a learned, convolutionlike aggregation function.…”
Section: Introductionsupporting
confidence: 68%
“…We concentrate on three implementations of GCNNs, derived from different definitions of graph convolution: graph convolutional networks (GCNs) [6], GraphSAGE [10], and topology-adaptive graph convolutional networks (TAGCNs) [7], [9].…”
Section: A Graph Convolutional Layer (mentioning)
confidence: 99%
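Of the three GCNN implementations named above, the GCN layer [6] is the most compact to sketch: symmetric normalization of the adjacency matrix with self-loops, a linear projection, and a pointwise nonlinearity. The graph and weights below are illustrative assumptions, not data from the cited works.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: D^-1/2 (A + I) D^-1/2 H W followed by ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU nonlinearity

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)  # small star graph
H = rng.standard_normal((3, 4))         # 4 input features per vertex
W = rng.standard_normal((4, 2))         # project to 2 output features
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

GraphSAGE and TAGCN differ mainly in the propagation rule (sampled-neighborhood aggregation and polynomial filters, respectively) while keeping this same layer-wise structure.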
“…As a variant of GCN, topology-adaptive graph convolutional networks (TAGCN) avoid the eigendecomposition [21] and instead compute filters from powers of the adjacency matrix [22], reducing the computational complexity. In addition, TAGCN uses a set of filters of sizes 1 to K to extract the features in each convolutional layer.…”
Section: TAGCN (mentioning)
confidence: 99%
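The size-1 to size-K filter bank described in the excerpt amounts to a polynomial in the (normalized) adjacency matrix, which needs only sparse matrix-vector products and no eigendecomposition. The following is a minimal sketch under that reading; the normalization choice and example data are assumptions for illustration.

```python
import numpy as np

def tagcn_layer(A, X, weights):
    """TAGCN-style layer: sum_{k=0}^{K} A_norm^k X W_k, then ReLU.

    `weights` is a list of K+1 weight matrices, one per filter size.
    No eigendecomposition of the graph Laplacian is required.
    """
    d = A.sum(axis=1)
    d[d == 0] = 1.0                          # guard isolated vertices
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A @ D_inv_sqrt     # normalized adjacency
    out = np.zeros((X.shape[0], weights[0].shape[1]))
    A_k = np.eye(A.shape[0])                 # A_norm^0 = I
    for W_k in weights:                      # k = 0 .. K
        out += A_k @ X @ W_k                 # size-(k+1) filter response
        A_k = A_k @ A_norm                   # next power of A_norm
    return np.maximum(out, 0.0)

rng = np.random.default_rng(1)
A = np.array([[0, 1], [1, 0]], dtype=float)        # 2-vertex graph
X = rng.standard_normal((2, 3))                    # 3 features per vertex
Ws = [rng.standard_normal((3, 2)) for _ in range(3)]  # K = 2
Y = tagcn_layer(A, X, Ws)
print(Y.shape)  # (2, 2)
```

Each term in the sum only mixes information from vertices up to k hops away, which is what makes the filters "adaptive to the topology of the graph."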
“…Spectral domain approach [31]: The absence of graph translation invariance poses difficulties in defining convolutional neural networks in the nodal domain. The spectral domain approach uses the convolution theorem to define the graph convolution from the spectral domain.…”
Section: Spectral Domain-Based Graph Convolution Network (SGC) (mentioning)
confidence: 99%
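The convolution-theorem construction in the excerpt can be sketched directly: transform the graph signal into the Laplacian eigenbasis (the graph Fourier transform), scale each frequency component by a filter coefficient, and transform back. The graph and signal below are illustrative assumptions.

```python
import numpy as np

def spectral_graph_conv(A, x, filter_coeffs):
    """Spectral-domain graph convolution via the convolution theorem:
    filtering is pointwise multiplication in the Laplacian eigenbasis."""
    D = np.diag(A.sum(axis=1))
    L = D - A                          # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)         # eigenvectors = graph Fourier basis
    x_hat = U.T @ x                    # graph Fourier transform of x
    y_hat = filter_coeffs * x_hat      # pointwise spectral filtering
    return U @ y_hat                   # inverse graph Fourier transform

# 3-vertex path graph and a test signal (hypothetical data)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
x = np.array([1.0, -2.0, 3.0])
y = spectral_graph_conv(A, x, np.ones(3))  # all-pass filter
```

With an all-pass filter (all spectral coefficients equal to 1) the signal is recovered unchanged, which is a quick sanity check that the forward and inverse transforms are consistent. The cost of the eigendecomposition is exactly what vertex-domain methods such as TAGCN avoid.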