2020 57th ACM/IEEE Design Automation Conference (DAC)
DOI: 10.1109/dac18072.2020.9218751
Hardware Acceleration of Graph Neural Networks

Cited by 90 publications (55 citation statements)
References 11 publications
“…Even though advances in sparse/irregular tensor processing [34] and graph processing [63,154] may prove useful in accelerating GNNs, addressing their unique computing challenges requires more specialized proposals. Some attempts have been made from a software perspective, i.e., adapting the GNN operations to better match the capabilities of CPUs or GPUs [106,144,155]; and from a hardware perspective, i.e., designing custom processors tailored to the demands of GNNs [7,53,103,164]. However, recent surveys and reviews [11,16,19,66,91,160,181,185] lack a comprehensive analysis of such advances.…”
Section: Deep Learning On Graphs
confidence: 99%
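
The software-side adaptation this excerpt describes commonly recasts a GNN layer's neighborhood aggregation as a sparse-dense matrix multiplication (SpMM), a primitive that CPUs and GPUs already execute well. The following is a minimal sketch of that idea in Python with SciPy; the function name and toy graph are our own illustration, not taken from the cited works.

```python
import numpy as np
from scipy.sparse import csr_matrix

def aggregate(adj: csr_matrix, features: np.ndarray) -> np.ndarray:
    """One GNN aggregation step expressed as SpMM: each node's new
    feature vector is the sum of its neighbors' features (adj @ features)."""
    return adj @ features

# Toy undirected graph: 3 nodes with edges 0-1, 1-2, 2-0.
rows = np.array([0, 1, 1, 2, 2, 0])
cols = np.array([1, 0, 2, 1, 0, 2])
adj = csr_matrix((np.ones(6), (rows, cols)), shape=(3, 3))

h = np.random.rand(3, 4)   # 4-dimensional node features
h_agg = aggregate(adj, h)  # sparse-dense product, shape (3, 4)
```
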
“…Ideally, developing a method to parallelize feature extraction across subareas (as they are independent of one another) may offer a significant improvement in inference time. Future work on accelerating graph processing at the hardware level may also significantly improve inference time [41].…”
Section: Discussion
confidence: 99%
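
Because the excerpt notes that the subareas are independent of one another, the extraction step is embarrassingly parallel. Below is a minimal sketch using Python's multiprocessing; extract_features and the subarea inputs are hypothetical stand-ins, not the cited authors' code.

```python
from multiprocessing import Pool

def extract_features(subarea):
    # Hypothetical per-subarea feature-extraction routine;
    # a simple sum stands in for the real computation.
    return sum(subarea)

if __name__ == "__main__":
    subareas = [[1, 2], [3, 4], [5, 6]]  # hypothetical independent inputs
    with Pool() as pool:
        # Independent subareas map cleanly onto worker processes.
        features = pool.map(extract_features, subareas)
    print(features)
```
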
“…Outside of HEP, hardware and firmware acceleration of GNN inference, and graph processing in general, has been an active area of study in recent years, motivated by the intrinsic inefficiencies of CPUs and GPUs when dealing with graph data (Besta et al., 2019; Gui et al., 2019). Nurvitadhi et al., 2014; Ozdal et al., 2016; Auten et al., 2020; Geng et al., 2020; Kiningham et al., 2020; Yan et al., 2020; and Zeng and Prasanna, 2020 describe examples of GNN acceleration architectures.…”
Section: Related Work
confidence: 99%
“…Nurvitadhi et al., 2014; Ozdal et al., 2016; Auten et al., 2020; Geng et al., 2020; Kiningham et al., 2020; Yan et al., 2020; and Zeng and Prasanna, 2020 describe examples of GNN acceleration architectures. Auten et al., 2020; Geng et al., 2020; Yan et al., 2020; and Zeng and Prasanna, 2020 are specific to the graph convolutional network (GCN) (Kipf and Welling, 2017), while the graph inference processor (GRIP) architecture in Kiningham et al. (2020) is efficient across a wide range of GNN models.…”
Section: Related Work
confidence: 99%
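
For context on the GCN that several of these accelerators target, Kipf and Welling (2017) define the layer update H' = σ(D̂^(-1/2) Â D̂^(-1/2) H W), where Â = A + I adds self-loops to the adjacency matrix and D̂ is the corresponding degree matrix. A minimal NumPy sketch of one such layer follows; it is our own illustration, assuming ReLU as the activation.

```python
import numpy as np

def gcn_layer(adj: np.ndarray, h: np.ndarray, w: np.ndarray) -> np.ndarray:
    """One GCN layer: H' = relu(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # diagonal of D^-1/2
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ h @ w, 0.0)         # ReLU activation

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)  # toy path graph on 3 nodes
h = np.random.rand(3, 4)       # input node features
w = np.random.rand(4, 2)       # layer weight matrix
h_next = gcn_layer(adj, h, w)  # output features, shape (3, 2)
```
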