SC20: International Conference for High Performance Computing, Networking, Storage and Analysis 2020
DOI: 10.1109/sc41405.2020.00074

Reducing Communication in Graph Neural Network Training

Abstract: Graph Neural Networks (GNNs) are powerful and flexible neural networks that use the naturally sparse connectivity information of the data. GNNs represent this connectivity as sparse matrices, which have lower arithmetic intensity and thus higher communication costs compared to dense matrices, making GNNs harder to scale to high concurrencies than convolutional or fully-connected neural networks. We present a family of parallel algorithms for training GNNs. These algorithms are based on their counterparts in dense…
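The aggregation step the abstract alludes to — multiplying a sparse adjacency matrix by a tall-skinny dense feature matrix — is the kernel whose communication cost the paper targets. Below is a minimal single-process sketch of one GCN-style layer in Python with SciPy; the layer shapes, ReLU activation, and random data are illustrative assumptions, not the paper's setup, and none of the distributed algorithms are shown.

```python
# Minimal single-process sketch of one GCN-style layer, H' = ReLU(A @ H @ W).
# Illustrative only: the paper's contribution is distributing the sparse-times-
# tall-skinny-dense product (SpMM) across processes, which is not shown here.
import numpy as np
import scipy.sparse as sp

def gcn_layer(adj: sp.csr_matrix, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One forward layer: aggregate neighbors (SpMM), then a dense transform."""
    aggregated = adj @ features         # sparse x tall-skinny dense (SpMM)
    transformed = aggregated @ weights  # dense x dense
    return np.maximum(transformed, 0)   # ReLU

# Tiny example: 4-node graph, 8-dim input features, 4-dim output features.
rows, cols = [0, 1, 1, 2, 3], [1, 0, 2, 1, 2]
adj = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))
H = np.random.rand(4, 8)
W = np.random.rand(8, 4)
print(gcn_layer(adj, H, W).shape)  # (4, 4)
```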

Cited by 47 publications (62 citation statements)
References 39 publications
“…Even though advances in sparse/irregular tensor processing [34] and graph processing [63,154] may prove useful in accelerating GNNs, addressing their unique computing challenges requires more specialized proposals. Some attempts have been done from a software perspective, i.e., adapting the GNN operations to better match the capabilities of CPUs or GPUs [106,144,155]; and from a hardware perspective, i.e., designing custom processors tailored to the demands of GNNs [7,53,103,164]. However, recent surveys and reviews [11,16,19,66,91,160,181,185] lack a comprehensive analysis of such advances.…”
Section: Deep Learning On Graphs
confidence: 99%
“…Various distributed-memory parallel sparse matrix times tall-skinny dense matrix (SpMM) algorithms are buried in application papers. In particular, MPI_FAUN [20] implements the 1.5D algorithm we use in our paper and CAGNET [30] implements the bulksynchronous 2D algorithm presented here. Neither work study the impacts of partitioning strategy in local computation costs as we do here.…”
Section: Related Work
confidence: 99%
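The 1.5D and 2D SpMM algorithms named in the citation above partition both the sparse matrix and the dense feature matrix over a process grid and exchange blocks in bulk-synchronous stages. The sketch below simulates the 2D blocking on a single process, with NumPy/SciPy slices standing in for the row and column broadcasts; the grid size and block shapes are illustrative assumptions, not CAGNET's actual MPI implementation.

```python
# Single-process simulation of a bulk-synchronous 2D SpMM, C = A @ H, with A
# sparse (n x n) and H tall-skinny dense (n x f), blocked over a grid x grid
# process mesh. Block slicing stands in for communication: in the distributed
# algorithm, stage k broadcasts A(i,k) along process row i and H(k,j) along
# process column j before the local multiply.
import numpy as np
import scipy.sparse as sp

def spmm_2d(A: sp.csr_matrix, H: np.ndarray, grid: int) -> np.ndarray:
    n, f = A.shape[0], H.shape[1]
    rb, fb = n // grid, f // grid          # block sizes (assume even division)
    C = np.zeros((n, f))
    for i in range(grid):                  # process row
        for j in range(grid):              # process column
            for k in range(grid):          # bulk-synchronous stage
                A_ik = A[i*rb:(i+1)*rb, k*rb:(k+1)*rb]
                H_kj = H[k*rb:(k+1)*rb, j*fb:(j+1)*fb]
                C[i*rb:(i+1)*rb, j*fb:(j+1)*fb] += A_ik @ H_kj
    return C

# Check the blocked result against a direct product on a small random instance.
n, f, grid = 8, 4, 2
A = sp.random(n, n, density=0.3, format="csr")
H = np.random.rand(n, f)
assert np.allclose(spmm_2d(A, H, grid), A @ H)
```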
“…We consider a wide variety of sparse matrix inputs that represent problems from graph neural networks (reddit and amazon [30]), eigensolvers in nuclear structure calculations (nm7 and nm8 [4]), low-rank or non-negative matrix factorization (com-Orkut [20]), and bioinformatics (isolates [6]). Various properties of these matrices are presented in Table 2.…”
Section: Evaluation 6.1 Setup
confidence: 99%