2021 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date51398.2021.9473949
ReGraphX: NoC-enabled 3D Heterogeneous ReRAM Architecture for Training Graph Neural Networks

Abstract: Graph Neural Network (GNN) is a variant of Deep Neural Networks (DNNs) operating on graphs. However, GNNs are more complex compared to traditional DNNs as they simultaneously exhibit features of both DNN and graph applications. As a result, architectures specifically optimized for either DNNs or graph applications are not suited for GNN training. In this work, we propose a 3D heterogeneous manycore architecture for on-chip GNN training to address this problem. The proposed architecture, ReGraphX, involves hete…

Cited by 19 publications (5 citation statements)
References 16 publications
“…Subsequently, Spara presented a novel vertex mapping strategy to address this challenge [52]. There are also ReRAM-based architectures for graph processing focusing on sparsity [53,54], three-dimensional architecture [55,56], regularization, redundant computation [57], etc. Transformers, one of the most advanced models in current natural language processing (NLP), present several challenges to ReRAM-based neuromorphic computing [93].…”
Section: ReRAM-based Accelerators for Various DNNs and Applications
confidence: 99%
“…These accelerators demonstrated improved speed and energy efficiency compared to traditional computing platforms such as CPUs and GPUs. With the continuous advancement of ReRAM technology, ReRAM-based neuromorphic engines are being applied in broader domains [48][49][50][51][52][53][54][55][56][57][58][59][60][61].…”
Section: Introduction
confidence: 99%
“…However, these techniques do not address the issue of the on-chip communication performance of GCN accelerators. Recently, ReGraphX, an RRAM-based 3D NoC-enabled accelerator for GNN training, was proposed [16]. The authors show that the proposed architecture is more energy-efficient than conventional GPUs.…”
Section: Related Work
confidence: 99%
“…This section compares the performance of our proposed COIN architecture with two state-of-the-art GCN accelerators, ReGraphX [16] and AWB-GCN [15]. Comparison with ReGraphX [16]: The architecture proposed in ReGraphX is composed of multiple processing elements (PEs; similar to computing elements in COIN). Some of the PEs (V-PEs) store the weights and are responsible for the feature extraction operation at GCN nodes (or vertices).…”
Section: H Comparison With State-of-the-art GCN Accelerators
confidence: 99%
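The citation statement above describes V-PEs that hold weights and perform per-vertex feature extraction in a GCN layer. As a rough illustration of that operation (a minimal NumPy sketch, not ReGraphX's actual dataflow; the function name, matrix sizes, and ReLU choice are assumptions for the example):

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One simplified GCN layer: aggregate features over graph edges,
    then apply the stored weight matrix (the V-PE-style transform)."""
    aggregated = adj @ features                    # neighbor aggregation
    return np.maximum(aggregated @ weights, 0.0)   # weight transform + ReLU

# Toy graph: 4 vertices, 3-dim input features, 2-dim output features.
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=float)
features = np.random.rand(4, 3)
weights = np.random.rand(3, 2)

out = gcn_layer(adj, features, weights)
print(out.shape)  # prints (4, 2)
```

The two matrix products correspond to the two workload characters the abstract mentions: the aggregation step is irregular and graph-like, while the weight transform is a dense DNN-style operation, which is why an architecture tuned for only one of the two is a poor fit for GNN training.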