2022
DOI: 10.1109/access.2022.3169423

Practical Near-Data-Processing Architecture for Large-Scale Distributed Graph Neural Network

Abstract: Graph Neural Networks (GNNs) have drawn tremendous attention in the past few years due to their convincing performance and high interpretability in various graph-based tasks such as link prediction and node classification. With the ever-growing graph sizes in the real world, especially industrial graphs at the billion-node level, storing a graph can easily consume terabytes, so GNNs have to be processed in a distributed manner. As a result, execution can be inefficient due to the expensive cross…
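The abstract's claim that billion-node graphs reach terabyte scale can be sanity-checked with a rough back-of-envelope estimate. The sketch below is not from the paper; the node count, feature width, average degree, and data types are illustrative assumptions.

```python
# Back-of-envelope estimate of graph storage at billion-node scale.
# All constants are illustrative assumptions, not figures from the paper.

NUM_NODES = 10_000_000_000   # 10 billion nodes (billion-level industrial graph)
FEATURE_DIM = 256            # assumed node feature width
AVG_DEGREE = 15              # assumed average out-degree
BYTES_PER_FLOAT = 4          # float32 node features
BYTES_PER_EDGE = 8           # 64-bit neighbor index in a CSR-style adjacency

feature_bytes = NUM_NODES * FEATURE_DIM * BYTES_PER_FLOAT
edge_bytes = NUM_NODES * AVG_DEGREE * BYTES_PER_EDGE

TIB = 1024 ** 4
print(f"node features: {feature_bytes / TIB:.1f} TiB")  # ~9.3 TiB
print(f"adjacency:     {edge_bytes / TIB:.1f} TiB")     # ~1.1 TiB
```

Even under these modest assumptions, the feature store alone exceeds the memory of a single commodity server, which is why the paper targets a distributed, near-data-processing design.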

Cited by 5 publications (1 citation statement)
References 40 publications
“…Others such as GCNAX [44], BoostGCN [77], Rubik [14], GraphACT [76], GNNSampler [47], and Grip [39,40], which optimize data reuse, are not applicable to LSD-GNN either, since the chance of finding reuse within a 512-node mini-batch compared with 10+ billion total nodes is extremely low. Huang et al. [34] work on a similar problem to this paper, but under a different disaggregated memory pool context. Sparse tensor hardware: Architecture innovations from sparse tensor accelerators [32,58,62,81] can be adopted to solve the SM-GNN problems, since the full-batch training of SM-GNN can be well processed with sparseMM on a single machine.…”
Section: Related Work (mentioning)
confidence: 96%
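The quoted argument about low data reuse can be made concrete with a birthday-problem-style estimate. The batch size of 512 and the 10+ billion node count come from the citation statement; the uniform-sampling assumption and the script itself are a simplification for illustration only.

```python
# Rough estimate of how often two independent 512-seed mini-batches, drawn uniformly
# from a 10+ billion node graph, share any node. Uniform sampling is a simplifying
# assumption; real samplers follow graph topology, but the scale argument is the same.
import random

TOTAL_NODES = 10_000_000_000   # 10+ billion nodes (from the citation statement)
BATCH = 512                    # seed nodes per mini-batch

# Expected number of nodes shared by two independent uniform mini-batches:
expected_overlap = BATCH * BATCH / TOTAL_NODES
print(f"expected shared nodes per batch pair: {expected_overlap:.1e}")  # ~2.6e-05

# Monte Carlo check under the same assumption (almost always prints 0).
trials, shared = 1_000, 0
for _ in range(trials):
    a = set(random.sample(range(TOTAL_NODES), BATCH))
    b = set(random.sample(range(TOTAL_NODES), BATCH))
    shared += len(a & b)
print(f"simulated average overlap: {shared / trials:.1e}")
```

With on the order of 10^-5 shared nodes per pair of batches, caching and reuse-oriented accelerators have essentially nothing to exploit, which is the point the citing authors make about LSD-GNN workloads.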