Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020
DOI: 10.1145/3394486.3403236

TinyGNN: Learning Efficient Graph Neural Networks

Cited by 45 publications (36 citation statements). References 10 publications.
“…Two other kinds of methods considered as inference acceleration are GNN-to-GNN KD like TinyGNN (Yan et al., 2020) and Graph Augmented-MLPs (GA-MLPs) like SGC (Wu et al., 2019) or SIGN (Frasca et al., 2020). Inference of GNN-to-GNN KD is likely to be slower than a GNN-L_i with the same i as the student, since there will usually be overheads introduced by some extra modules like the Peer-Aware Module (PAM) in TinyGNN.…”
Section: How Do GLNNs Compare To Other Inference Acceleration Methods?
confidence: 99%
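To make this speed comparison concrete, the following is a minimal sketch, not taken from any of the cited papers, of why GA-MLP inference reduces to a plain MLP: the k-hop feature propagation over the normalized adjacency is done once offline, so serving never touches the graph. All class and variable names are illustrative.

```python
import torch

def sgc_precompute(adj_norm: torch.Tensor, x: torch.Tensor, k: int) -> torch.Tensor:
    """GA-MLP / SGC-style offline step: propagate node features k hops once.

    adj_norm: normalized adjacency matrix (N x N), x: node features (N x d).
    The result is cached, so inference never needs the graph again.
    """
    for _ in range(k):
        x = adj_norm @ x
    return x

class MLPStudent(torch.nn.Module):
    """Inference is a per-node MLP applied to the precomputed features."""
    def __init__(self, d_in: int, d_hidden: int, n_classes: int):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(d_in, d_hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(d_hidden, n_classes),
        )

    def forward(self, x_precomputed: torch.Tensor) -> torch.Tensor:
        return self.net(x_precomputed)

# A distilled GNN student, by contrast, still pays for neighbor
# aggregation (adj_norm @ h) at every layer during inference,
# plus any extra modules such as TinyGNN's PAM.
```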
“…Inference acceleration schemes have been proposed by hardware improvements (Chen et al., 2016; Judd et al., 2016) and algorithmic improvements through pruning (Han et al., 2015), quantization (Gupta et al., 2015), and KD (Hinton et al., 2015). Specifically for GNNs, pruning (Zhou et al., 2021) and quantizing GNN parameters (Tailor et al., 2021; Zhao et al., 2020), or distilling to smaller GNNs (Yang et al., 2021b; Yan et al., 2020; Yang et al., 2021a) have been studied. These approaches speed up GNN inference to a certain extent but do not eliminate the neighborhood-fetching latency.…”
Section: Related Work
confidence: 99%
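As background for the KD approaches listed above, here is a minimal sketch of the Hinton et al. (2015) distillation objective; the temperature T, the weight alpha, and all variable names are standard-but-illustrative choices, not taken from the cited GNN papers.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hinton-style KD: soft-target KL term plus the usual hard-label CE.

    Distilling a large GNN teacher into a smaller GNN shrinks per-layer
    cost, but the student still fetches neighbors at inference time.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```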
“…Therefore, VGAER allows us to use any GNN model or plug-and-play module in the encoding stage, such as: GraphSAGE (Hamilton, Ying, and Leskovec 2017), which realizes inductive learning through neighborhood-sampling encoding and greatly reduces the complexity of the algorithm (we show the extension of VGAER on GraphSAGE in the next section for scalability); GAT (Veličković et al. 2017), which enables VGAER to capture the weights between different node modules; PAM (Yan et al. 2020), which is actually a plug-and-play module with linear complexity, so faster incremental community detection can be achieved on the basis of GraphSAGE inductive learning; and GIN (Xu et al. 2018), which introduces a learnable ε to realize the injectivity of aggregation, thereby achieving a more powerful aggregation function.…”
Section: Reference
confidence: 99%
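For context, the symbol dropped from the excerpt is GIN's learnable ε; the GIN update rule (Xu et al. 2018) uses it to keep the centre node distinguishable from the multiset of its neighbours:

```latex
h_v^{(k)} = \mathrm{MLP}^{(k)}\!\left( \bigl(1 + \epsilon^{(k)}\bigr)\, h_v^{(k-1)} + \sum_{u \in \mathcal{N}(v)} h_u^{(k-1)} \right)
```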
“…Such operations indirectly capture as much deep neighborhood information as possible in the case of one layer. Let node $v$ have $K$ neighbors $\{b_i^l\}_{i=1}^{K}$; the neighbor nodes after passing through the PAM are expressed as [16]:…”
Section: Algorithm 2 Feature Alignment
confidence: 99%
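The excerpt cuts off before reproducing equation [16]. Purely as orientation, and not as the cited equation, a PAM-style update can be sketched as self-attention among the $K$ peer neighbors, which is how TinyGNN describes the module; the projection matrices $W_q, W_k, W_v$ and the scaling dimension $d$ below are assumptions of this sketch:

```latex
\tilde{b}_i^{\,l} = \sum_{j=1}^{K} \alpha_{ij}\, W_v\, b_j^{\,l},
\qquad
\alpha_{ij} = \operatorname{softmax}_j\!\left( \frac{(W_q\, b_i^{\,l})^{\top} (W_k\, b_j^{\,l})}{\sqrt{d}} \right)
```

Under this reading, each neighbor attends to its peers, so a single layer mixes information that would otherwise require deeper aggregation, matching the excerpt's point about capturing deep neighborhood information with one layer.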