2020
DOI: 10.48550/arxiv.2010.00130
Preprint
Computing Graph Neural Networks: A Survey from Algorithms to Accelerators

Cited by 5 publications (10 citation statements, published 2021-2022); references 0 publications.
“…How to effectively compute GNNs in order to realise their full potential will be a key research topic in the coming years. Several hardware accelerators have been developed to cope with GNNs' high density and alternating computing requirements, but there is no clear proposal applicable to multiple GNN variants [17]. On the software side, current deep learning frameworks, including extensions of popular libraries such as TensorFlow and PyTorch, have limitations when implementing dynamic computation graphs along with specialized tensor operations [17].…”
Section: B. Challenges In Adapting Graph-based Deep Learning Methods F...
confidence: 99%
“…Several hardware accelerators have been developed to cope with GNNs' high density and alternating computing requirements, but there is no clear proposal applicable to multiple GNN variants [17]. On the software side, current deep learning frameworks, including extensions of popular libraries such as TensorFlow and PyTorch, have limitations when implementing dynamic computation graphs along with specialized tensor operations [17]. Thus, there is a need to further develop libraries such as DGL [246], which can efficiently handle the sparsity of GNN operations as well as complex tensor operations in CUDA with GPU acceleration.…”
Section: B. Challenges In Adapting Graph-based Deep Learning Methods F...
confidence: 99%
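
As a rough illustration of the sparsity-aware message passing that DGL exposes, the following Python sketch aggregates neighbor features with a single fused sparse kernel; the toy graph, feature sizes, and variable names are illustrative assumptions, not taken from the survey or the citing paper.

    import dgl
    import dgl.function as fn
    import torch

    # Toy 3-node directed cycle; real workloads are far larger and sparser.
    src, dst = torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])
    g = dgl.graph((src, dst), num_nodes=3)
    g.ndata['h'] = torch.randn(3, 4)  # per-node feature vectors

    # update_all fuses the gather step (copy each neighbor's features) and
    # the reduce step (sum) into one sparse operation, avoiding the dense
    # intermediate tensors a generic framework would materialize.
    g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_sum'))
    print(g.ndata['h_sum'].shape)  # torch.Size([3, 4])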
“…predictions) for either a node, an edge, or the entire graph. The GNN inference process is generally divided into two computation stages, namely, aggregation and combination [5]. First, the data from nodes or edges is loaded from the memory hierarchy to the Processing Elements (PEs).…”
Section: Accelerators Algorithms Supported
confidence: 99%
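
To make the two stages concrete, here is a minimal NumPy sketch of one GNN layer; the sum aggregator, the ReLU combination, and all names are illustrative assumptions rather than the cited paper's exact formulation.

    import numpy as np

    def gnn_layer(adj, feats, weights):
        # Aggregation: every node gathers (here, sums) its neighbors' features.
        aggregated = adj @ feats                      # (N, F_in)
        # Combination: a learned per-node transform plus a nonlinearity.
        return np.maximum(aggregated @ weights, 0.0)  # ReLU, (N, F_out)

    # Toy usage: a 3-node path graph, 2-d input and 4-d output features.
    adj = np.array([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
    out = gnn_layer(adj, np.random.rand(3, 2), np.random.rand(2, 4))
    print(out.shape)  # (3, 4)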
“…Representation of data-sets as graph-based data structures, given the opportunities they provide to understand the complex relationships embedded in them, has become increasingly popular [1]-[5]. These graph representations can range from very small (chemistry) to extremely large (recommendation systems) graphs [6], [7].…”
Section: Introduction
confidence: 99%