2020 IEEE International Symposium on High Performance Computer Architecture (HPCA)
DOI: 10.1109/hpca47549.2020.00012

HyGCN: A GCN Accelerator with Hybrid Architecture

Abstract: Inspired by the great success of neural networks, graph convolutional neural networks (GCNs) have been proposed to analyze graph data. GCNs mainly comprise two phases with distinct execution patterns. The Aggregation phase behaves like graph processing, showing a dynamic and irregular execution pattern. The Combination phase acts more like a neural network, presenting a static and regular execution pattern. The hybrid execution patterns of GCNs require a design that alleviates irregularity and exploits regularity.…
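The two-phase split described in the abstract can be sketched as follows. This is an illustrative sketch only, not HyGCN's actual dataflow: the toy graph, feature values, and weight matrix are made up, and the sum-aggregator and ReLU are common GCN choices assumed here for concreteness.

```python
def aggregate(adj, features):
    """Aggregation phase: sum each vertex's neighbor feature vectors.
    The memory access pattern follows the edge list, so it is
    dynamic and irregular (graph-dependent)."""
    n, dim = len(features), len(features[0])
    out = [[0.0] * dim for _ in range(n)]
    for v, neighbors in enumerate(adj):
        for u in neighbors:          # irregular, data-dependent accesses
            for k in range(dim):
                out[v][k] += features[u][k]
    return out

def combine(aggregated, weight):
    """Combination phase: a dense matrix multiply plus ReLU,
    i.e. a static, regular neural-network-style computation."""
    cols = len(weight[0])
    return [
        [max(0.0, sum(vec[i] * weight[i][j] for i in range(len(vec))))
         for j in range(cols)]
        for vec in aggregated
    ]

# Toy 3-vertex graph (neighbor lists include self-loops).
adj = [[0, 1], [0, 1, 2], [1, 2]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
w = [[1.0, -1.0], [0.5, 1.0]]

h = combine(aggregate(adj, feats), w)  # one GCN layer's output features
```

The irregular gather in `aggregate` is what a graph-processing-style engine targets, while the dense loop nest in `combine` maps naturally onto SIMD or systolic-array hardware — the coexistence that motivates HyGCN's hybrid architecture.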

Cited by 241 publications (218 citation statements). References 26 publications (40 reference statements).
“…The increasing deployment of domain-specific accelerator design for neural network [1,18] and graph [4,8] motivates the development of customized GNN accelerators on both FPGAs and ASICs recently. The authors in [26] abstracted the execution flow of GCN into the aggregation stage and the combination stage and then developed an accelerator called HyGCN based on the processing stages. They leveraged both the SIMD and the systolic arrays to deal with the coexistence problem of GNNs.…”
Section: Related Work
confidence: 99%
“…Although specialized GNN accelerators of ASICs such as HyGCN [26] and EnGN [7], are tailored as alternatives to CPU and GPU, they are still not flexible enough to meet the requirements of various GNN-based applications from the cloud to edge. First and foremost, because graph learning is a fast-developing field and novel GNN architectures keep emerging, ASIC-based accelerators of fixed architecture sometimes cannot run state-of-the-art GNN models.…”
Section: Introduction
confidence: 99%
“…AWB-GCN [11] implements amount of process elements for multiply-accumulate-cell (MAC) and balance the workload of process elements to accelerate the sparse matrix multiplication. Other works [10,13,14] accelerate GCNs by designing efficient pipeline architecture, optimizing memory mode, and increasing parallelism.…”
Section: B. Deep Learning Inference Accelerators
confidence: 99%
“…In contrast, there are two memory synchronizations between the three steps of each layer in GATs because of the masked self-attention mechanism [1]. The basic calculation of the existing works on GCNs and CNNs [10][11][12][13][14][15][16][17][18][19] has no change which still use multiplication with heavy dependence on DSPs. Moreover, the loss of accuracy is not detailed studied.…”
Section: B. Deep Learning Inference Accelerators
confidence: 99%
“…Recently, we have done plenty of graph processing research on shared-memory systems and graph-specific accelerators (Yan et al. 2020, 2019). With the explosive growth of graph data (CAP Communications 2019), the shared-memory (single-node) systems can no longer achieve high performance for large-scale graphs due to the limited memory capacity.…”
Section: Introduction
confidence: 99%