2021
DOI: 10.1109/TC.2020.3014632

EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks

Cited by 104 publications (90 citation statements)
References 18 publications

Citation statements:
“…The main challenge in GNN accelerator design is the alternation of phases with either dense or extremely sparse computation. The sparsity is driven by the graph connectivity or the graph adjacency matrix [16]-[18]. On the other hand, phases of dense computation are usually due to the dense nature of the operations that are applied to the nodes and edges in parallel [16]-[18].…”
Section: Accelerators
confidence: 99%
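The alternation this excerpt describes is easy to see in code: a GCN-style layer first aggregates neighbor features through the sparse adjacency matrix (an SpMM), then applies a dense weight matrix (a GEMM). The sketch below is a minimal illustration assuming a synthetic random graph and arbitrary dimensions; none of the names or sizes come from the paper.

```python
# Minimal sketch of the two phases of a GCN-style layer (illustrative only).
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
num_nodes, in_dim, out_dim = 1000, 64, 32

# Sparse phase: aggregation is driven by the graph adjacency matrix A.
# Real adjacency matrices are extremely sparse (~0.5% nonzeros here),
# so this product is an irregular, memory-bound SpMM.
A = sp.random(num_nodes, num_nodes, density=0.005, format="csr", random_state=0)
X = rng.standard_normal((num_nodes, in_dim))   # dense node features
W = rng.standard_normal((in_dim, out_dim))     # dense layer weights

aggregated = A @ X            # sparse x dense: irregular memory accesses
transformed = aggregated @ W  # dense x dense: regular, compute-bound GEMM
H = np.maximum(transformed, 0.0)  # elementwise nonlinearity (ReLU)

print(H.shape)  # (1000, 32)
```

An accelerator has to run both products back to back, which is exactly the dense/sparse alternation the citing papers point to.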
“…The sparsity is driven by the graph connectivity or the graph adjacency matrix [16]-[18]. On the other hand, phases of dense computation are usually due to the dense nature of the operations that are applied to the nodes and edges in parallel [16]-[18]. Additionally, GNNs process input graphs that might have billions of nodes and edges, with uneven connectivity.…”
Section: Accelerators
confidence: 99%
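The “uneven connectivity” remark points at the load-imbalance side of the problem: real graphs tend to have power-law degree distributions, so node-parallel aggregation assigns wildly different amounts of work to different lanes. A small sketch, with an assumed Zipf-like synthetic graph, makes the skew visible:

```python
# Sketch of degree skew in a synthetic power-law graph (assumed parameters).
import numpy as np

rng = np.random.default_rng(1)
num_nodes, num_edges = 10_000, 100_000

# Sample edge destinations from a Zipf-like distribution to mimic the
# heavy-tailed connectivity of real-world graphs.
ranks = np.arange(1, num_nodes + 1)
probs = 1.0 / ranks
probs /= probs.sum()
dst = rng.choice(num_nodes, size=num_edges, p=probs)

degree = np.bincount(dst, minlength=num_nodes)
print("max degree:", degree.max(), "median degree:", int(np.median(degree)))
# Node-parallel aggregation gives lane i roughly degree[i] units of work,
# so the max/median ratio is a rough proxy for worst-case load imbalance.
```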
“…To support more GNN models, Liang et al. [7] proposed a fine-grained GNN processing model and developed a corresponding GNN accelerator, EnGN. Their design combines a 2D computing array with automatic accelerator generation.…”
Section: Related Work
confidence: 99%
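For intuition about what a “2D computing array” buys in the dense phase, the sketch below shows a generic tiled GEMM in which each output tile corresponds to one pass over a square grid of processing elements. The tile size and function names are assumptions for illustration; this is not EnGN’s actual dataflow.

```python
# Generic 2D-tiled GEMM sketch (illustrative; NOT EnGN's actual dataflow).
import numpy as np

TILE = 16  # assumed PE-array dimension: a 16x16 grid of processing elements

def tiled_matmul(X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Each (i, j) output tile is computed by one pass over the PE array;
    the inner k-loop models operands streamed through the array."""
    n, k = X.shape
    _, m = W.shape
    out = np.zeros((n, m))
    for i in range(0, n, TILE):
        for j in range(0, m, TILE):
            for kk in range(0, k, TILE):
                out[i:i+TILE, j:j+TILE] += (
                    X[i:i+TILE, kk:kk+TILE] @ W[kk:kk+TILE, j:j+TILE]
                )
    return out

X = np.random.rand(64, 48)
W = np.random.rand(48, 32)
assert np.allclose(tiled_matmul(X, W), X @ W)
```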
“…Although specialized ASIC-based GNN accelerators such as HyGCN [26] and EnGN [7] are tailored as alternatives to CPUs and GPUs, they are still not flexible enough to meet the requirements of the various GNN-based applications that span cloud to edge. First and foremost, because graph learning is a fast-developing field and novel GNN architectures keep emerging, fixed-architecture ASIC accelerators sometimes cannot run state-of-the-art GNN models.…”
Section: Introduction
confidence: 99%