2020
DOI: 10.1007/978-981-15-8135-9_6

GNN-PIM: A Processing-in-Memory Architecture for Graph Neural Networks

Cited by 9 publications (4 citation statements)
References 26 publications
“…On the software side, several libraries have been proposed to improve support for GNNs and handle their multiple variants, with extensions of popular libraries such as PyTorch or TensorFlow (TF) [47], [162], [163] being clear examples. On the hardware side, new accelerator architectures have been surfacing recently [40], [42], [117], [164] that attempt to deal with the flexibility and scalability challenges of GNNs. In the next subsections, we provide an exhaustive overview of existing techniques.…”
Section: Algorithm Aggregation (A)
confidence: 99%
“…The high density and complexity of GCNs make on-chip communication even more critical for IMC-based accelerators. The authors in [30, 31] proposed IMC-based accelerators for GCNs. However, these techniques do not address the on-chip communication performance of GCN accelerators.…”
Section: Related Work
confidence: 99%
“…Recently, many GCN/GNN accelerators [21]-[29], [80] have been presented for efficient GCN inference. As far as we know, GraphACT [54] is the only GCN training accelerator.…”
Section: Related Work: GCN Acceleration
confidence: 99%
“…Various GCN accelerators have been proposed to accelerate GCN inference [21]-[29]. However, existing accelerators and GPU/CPU platforms do not easily support GCN training at scale, because GCN training poses many distinctive challenges.…”
Section: Introduction
confidence: 99%