2021 IEEE/ACM International Conference on Computer Aided Design (ICCAD)
DOI: 10.1109/iccad51958.2021.9643465
Crossbar based Processing in Memory Accelerator Architecture for Graph Convolutional Networks

Cited by 12 publications (5 citation statements) · References 30 publications
“…• Fixed Function Accelerators, specialized for specific tasks, maximize energy efficiency when targeting specialized kernels [57]. While excelling in performance and energy, they lose efficiency when generalized for multiple computations.…”
Section: B. Background on Domain-Specific Architectures, 1) Overview of...
Citation type: mentioning, confidence: 99%
“…The high density and complexity of GCNs make on-chip communication for IMC-based accelerators even more critical. Authors in [30,31] proposed IMC-based accelerators for GCN. However, these techniques do not address the on-chip communication performance of GCN accelerators.…”
Section: Related Work
Citation type: mentioning, confidence: 99%
“…PIM-GCN. PIM-GCN [72] is the first in-memory accelerator for GCN and demonstrates the mapping of GCN inference onto the ReRAM crossbar architecture. The compute and memory access characteristics of GCN differ from those of graph analytics and convolutional neural networks.…”
Section: PIM-based Graph Learning Accelerators
Citation type: mentioning, confidence: 99%
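To make concrete why the citing authors note that GCN inference mixes graph-analytics-style and CNN-style behavior, the following is a minimal, hypothetical sketch (not the dataflow of the cited PIM-GCN design): a GCN layer H_out = ReLU(A_hat · H · W) splits into a sparse aggregation phase and a dense transformation phase, and only the dense phase maps naturally onto tiled crossbar matrix-vector operations. The tile sizes and function names are illustrative assumptions.

import numpy as np

def gcn_layer_on_crossbars(A_hat, H, W, xbar_rows=128, xbar_cols=128):
    # One GCN layer: H_out = ReLU(A_hat @ H @ W).
    # Phase 1: sparse neighbor aggregation (irregular, graph-analytics-like access).
    Z = A_hat @ H
    # Phase 2: dense feature transformation, tiled into crossbar-sized MVM blocks.
    n, k = Z.shape
    _, m = W.shape
    out = np.zeros((n, m))
    for r0 in range(0, k, xbar_rows):        # row tiles of W map to crossbar wordlines
        for c0 in range(0, m, xbar_cols):    # column tiles of W map to crossbar bitlines
            tile = W[r0:r0 + xbar_rows, c0:c0 + xbar_cols]
            out[:, c0:c0 + tile.shape[1]] += Z[:, r0:r0 + tile.shape[0]] @ tile
    return np.maximum(out, 0.0)              # ReLU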
“…The selection of which data is mapped onto the ReRAM significantly influences the data and write movement overhead. Unlike the earlier PIM-GCN [72], which fixes the static data on the ReRAM, TARe [75] proposes a task-adaptive selection algorithm that chooses the static data according to the task, together with a ReRAM in-situ accelerator that supports weight-static, data-static, and hybrid execution modes. For a graph learning workload, the task-adaptive selection algorithm first selects the static data and then decides between sparse and dense mapping modes.…”
Section: PIM-based Graph Learning Accelerators
Citation type: mentioning, confidence: 99%
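As an illustration of the task-adaptive idea described in the TARe citation above, the heuristic below is hypothetical (it is not TARe's actual algorithm): the operand with higher expected reuse is kept resident ("static") in ReRAM to amortize the write cost, and the mapping mode follows the density of the static operand; the reuse metrics and density threshold are assumptions.

def select_execution_mode(weight_reuse, data_reuse, static_operand_density,
                          density_threshold=0.1):
    # Keep the operand with higher reuse static in ReRAM, since it
    # amortizes the expensive crossbar writes over more MVMs.
    if weight_reuse > data_reuse:
        mode = "weight-static"
    elif data_reuse > weight_reuse:
        mode = "data-static"
    else:
        mode = "hybrid"
    # Sparse static operands (e.g. an adjacency matrix) get a sparse mapping mode.
    mapping = "dense" if static_operand_density >= density_threshold else "sparse"
    return mode, mapping

# Example: weights reused across many inputs -> ("weight-static", "dense").
# select_execution_mode(weight_reuse=1000, data_reuse=1, static_operand_density=0.9)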