Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2022
DOI: 10.1145/3534678.3539177
Distributed Hybrid CPU and GPU Training for Graph Neural Networks on Billion-Scale Heterogeneous Graphs

Cited by 23 publications (10 citation statements) · References 6 publications
“…Zheng et al. [142] formulate the graph partition problem in DistDGL as a multi-constraint partition problem that aims to balance the training/validation/test vertices and edges in each partition. They adopt the multi-constraint mechanism in METIS to realize this customized partition objective for both homogeneous and heterogeneous graphs [143], and further extend the mechanism into a two-level partitioning strategy that handles multi-GPU, multi-machine settings and balances the computation. Lin et al. [67] apply streaming graph partition, Linear Deterministic Greedy (LDG) [89], and use a new affinity score (Eq. …”
Section: Graph Partition in GNN
confidence: 99%
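As a concrete illustration of the multi-constraint mechanism described above, the sketch below builds per-vertex weight vectors that a METIS-style multi-constraint partitioner (ncon = 3, with these weights passed as vwgt) would balance, so every partition receives a similar share of training, validation, and test vertices. This is a minimal sketch under assumed names; the helper function and the three-constraint layout are illustrative, not the authors' implementation.

```python
import numpy as np

def multi_constraint_vertex_weights(num_nodes, train_idx, val_idx, test_idx):
    # One weight column per balance constraint; a multi-constraint
    # partitioner (e.g. METIS with ncon=3 and these weights as vwgt)
    # balances every column across partitions, so each part gets a
    # similar number of training, validation, and test vertices.
    w = np.zeros((num_nodes, 3), dtype=np.int64)
    w[np.asarray(train_idx), 0] = 1  # constraint 0: training vertices
    w[np.asarray(val_idx), 1] = 1    # constraint 1: validation vertices
    w[np.asarray(test_idx), 2] = 1   # constraint 2: test vertices
    return w

# Example: 6 vertices, two of each role.
weights = multi_constraint_vertex_weights(6, [0, 1], [2, 3], [4, 5])
```

In DistDGL this balancing is surfaced as options of the graph partitioning step (balancing vertex roles and edges per partition) rather than as raw METIS weight arrays.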
“…The operator-parallel execution model enables inter-batch parallelism and generates multiple computation graphs in parallel by interleaving the execution of all operators in the pipeline. Zheng et al. [143] decompose computation graph generation into multiple stages (e.g., graph sampling, neighborhood subgraph construction, feature extraction), and each stage can be treated as an operator. Dependencies exist among the operators within one mini-batch, while there are no dependencies among mini-batches.…”
Section: Execution Model of Computation Graph Generation
confidence: 99%
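The staged, operator-parallel generation described in this excerpt can be sketched as follows (the stage functions, names, and data layout are illustrative assumptions, not the cited system's code): the stages of one mini-batch run in order, while independent mini-batches flow through the pipeline concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stage operators; a real system would query a graph store here.
def sample_neighbors(seed_batch):          # stage 1: graph sampling
    return {"seeds": seed_batch, "nbrs": [v + 1 for v in seed_batch]}

def build_subgraph(sampled):               # stage 2: subgraph construction
    sampled["subgraph"] = sorted(set(sampled["seeds"]) | set(sampled["nbrs"]))
    return sampled

def fetch_features(sub):                   # stage 3: feature extraction
    sub["feats"] = {v: [float(v)] for v in sub["subgraph"]}
    return sub

def make_batch(seed_batch):
    # Stages of one mini-batch are dependent, so they run in order ...
    return fetch_features(build_subgraph(sample_neighbors(seed_batch)))

seed_batches = [[0, 1], [2, 3], [4, 5]]
with ThreadPoolExecutor(max_workers=3) as pool:
    # ... but different mini-batches have no dependencies, so they are
    # generated concurrently (inter-batch parallelism).
    batches = list(pool.map(make_batch, seed_batches))
```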
“…To reduce the computational and memory costs of GNN training, many GNN systems sample neighbors within a mini-batch (a batch of training vertices rather than all vertices). Many systems adopt this execution model because sampling has been observed to have a positive effect on training [19,36] and because the smaller memory footprint allows the use of GPUs, which have limited memory compared to CPUs.…”
Section: Introduction
confidence: 99%
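To make this sampling-based execution model concrete, here is a minimal per-layer fixed fan-out sketch (the adjacency representation, fan-out values, and function name are assumptions for illustration): only a bounded number of neighbors per vertex is kept at each layer, which keeps the mini-batch computation graph small enough to fit in GPU memory.

```python
import random

def sample_blocks(adj, seed_batch, fanouts):
    """Per-layer fixed fan-out neighbor sampling for one mini-batch.

    adj: dict mapping vertex -> list of neighbor vertices
    seed_batch: list of training vertices in this mini-batch
    fanouts: number of neighbors to keep per layer, e.g. [10, 25]
    """
    blocks, frontier = [], list(seed_batch)
    for fanout in fanouts:
        # Keep at most `fanout` neighbors of every frontier vertex.
        sampled = {v: random.sample(adj[v], min(fanout, len(adj[v])))
                   for v in frontier}
        blocks.append(sampled)
        # The next (outer) layer samples from the newly reached vertices.
        frontier = sorted({u for nbrs in sampled.values() for u in nbrs})
    return blocks

# Example: a tiny graph, a mini-batch of two seeds, and two layers.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
blocks = sample_blocks(adj, seed_batch=[0, 1], fanouts=[2, 2])
```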