2017 IEEE International Symposium on High Performance Computer Architecture (HPCA)
DOI: 10.1109/hpca.2017.55
PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning

Cited by 645 publications (400 citation statements). References 49 publications.
“…[118] Furthermore, it removes the high-cost ADC/DAC components and replaces them with spiking-based read/output circuits.…”
Section: RRAM-Based In-Memory Computing Microarchitecture
confidence: 99%
“…Shafiee et al. [116]: microarchitecture, DAC/ADC interface, processor/macro scope, DNN/CNN, ImageNet [139]. Song et al. [118]: microarchitecture, IFC interface, processor scope, DNN/CNN, image recognition and NN training.…”
Section: Work
confidence: 99%
“…By leveraging the ReRAM structure, various CNN accelerators have been proposed for inference or training [5,8]. The kernel in a convolution layer is a 4-dimensional tensor, and mapping it onto a crossbar requires a complicated design [8,9]. Fig.…”
Section: A ReRAM-Based CNN Accelerator
confidence: 99%
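The 4-D kernel mapping mentioned in the statement above can be sketched as follows. This is a minimal illustration, not the mapping scheme of the cited papers: the im2col-style flattening, the layer shape, and the function name `kernel_to_crossbar` are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumed, not from the cited work): a 4-D conv
# kernel of shape (C_out, C_in, K_h, K_w) is flattened into a 2-D
# matrix so that each crossbar column stores one output channel and
# each row corresponds to one element of the unrolled receptive field.
def kernel_to_crossbar(kernel):
    c_out, c_in, kh, kw = kernel.shape
    # (C_out, C_in*K_h*K_w) -> transpose so rows = inputs, cols = outputs.
    return kernel.reshape(c_out, c_in * kh * kw).T

kernel = np.random.rand(64, 3, 3, 3)   # hypothetical 3x3 conv, 3->64 channels
xbar_weights = kernel_to_crossbar(kernel)
print(xbar_weights.shape)              # (27, 64): 27 crossbar rows, 64 columns
```

With the kernel laid out this way, one analog matrix-vector multiplication over the crossbar computes all 64 output channels for a single spatial position at once.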
“…Once a round of computation in sub-crossbars is completed, we add the output from corresponding SCs to obtain the final deconvolution results. Thanks to the vertical sum-up design in the existing ReRAM-based accelerators [8,12], no extra circuitry is needed to realize the addition operations in pixelwise mapping.…”
Section: B RED Architecture
confidence: 99%
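The vertical sum-up idea described above can be sketched digitally as follows. This is an assumed model, not the circuit of the cited accelerators: when a weight matrix is taller than one crossbar, its rows are split across sub-crossbars (SCs), each SC produces a partial dot product, and the partials are added to recover the full result. The helper names and the 128-row crossbar size are hypothetical.

```python
import numpy as np

# Assumed sketch of "vertical sum-up": rows of a large weight matrix
# are partitioned across fixed-height sub-crossbars, and their partial
# analog outputs are summed to form the final result.
def split_rows(weights, xbar_rows):
    return [weights[i:i + xbar_rows] for i in range(0, weights.shape[0], xbar_rows)]

def crossbar_mvm(sub_w, sub_x):
    # Stand-in for the analog matrix-vector multiply of one sub-crossbar.
    return sub_x @ sub_w

weights = np.random.rand(256, 64)      # 256 input lines exceed a 128-row crossbar
x = np.random.rand(256)

sub_ws = split_rows(weights, 128)
sub_xs = np.split(x, [128])
partials = [crossbar_mvm(w, xc) for w, xc in zip(sub_ws, sub_xs)]
result = sum(partials)                 # the vertical sum-up step

assert np.allclose(result, x @ weights)
```

Because the addition is just an accumulation of per-SC outputs, it matches the statement that no extra circuitry beyond the existing sum-up path is needed.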