2019 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date.2019.8715103

RED: A ReRAM-based Deconvolution Accelerator

Abstract: Deconvolution is widely used in neural networks. For example, it is essential for performing unsupervised learning in generative adversarial networks and for constructing fully convolutional networks for semantic segmentation. Resistive RAM (ReRAM)-based processing-in-memory architectures have been widely explored for accelerating convolutional computation and demonstrate good performance. Performing deconvolution on existing ReRAM-based accelerator designs, however, suffers from long latency and high energy con…
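The abstract refers to deconvolution (transposed convolution) as the key operation the accelerator targets. As a minimal illustration of the operation itself, not of the paper's hardware design, a 1-D transposed convolution can be sketched as zero-insertion upsampling followed by an ordinary convolution; the function name and strategy below are illustrative assumptions, not from the paper:

```python
import numpy as np

def transposed_conv1d(x, w, stride=2):
    """Sketch of 1-D transposed convolution (deconvolution):
    insert (stride - 1) zeros between input samples, then apply
    an ordinary 'full' convolution with the kernel."""
    # Upsample: place inputs at every `stride`-th position, zeros elsewhere.
    up = np.zeros(len(x) * stride - (stride - 1))
    up[::stride] = x
    # Ordinary convolution over the zero-stuffed signal.
    return np.convolve(up, w, mode="full")

x = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 1.0])
y = transposed_conv1d(x, w, stride=2)  # -> [1. 1. 2. 2. 3. 3.]
```

The zero-insertion view also makes the inefficiency the paper addresses visible: most multiply-accumulate operands in the upsampled signal are zeros, which naive ReRAM crossbar mapping still spends cycles and energy on.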


Cited by 17 publications (4 citation statements)
References 15 publications (25 reference statements)
“…Beyond the convolutional computing engine, a number of works utilize ReRAM crossbars to support different computations and applications [24, 55–57]. For instance, Bojnordi et al. [24] implement the restricted Boltzmann machine with ReRAM arrays.…”
Section: Matrix-Vector Multiplications (ISAAC)
confidence: 99%
“…al. [26] proposed to accelerate TCONV on FPGA. FCN-engine [23] and RED [5] further realize a fully convolutional accelerator that can handle both CONV and TCONV operations using unified processing elements (PEs). These works focus only on the inference task and do not support training.…”
Section: Related Work and Motivation
confidence: 99%
“…[1]–[3], the data shuffling and the energy consumption while training GANs can be significantly reduced by performing in-memory VMM operations exploiting cross-point arrays of emerging non-volatile memories. Recently, several innovative deep convolutional (DC) GAN architectures were proposed, including layer-wise pipelined computations [4], an efficient deconvolution operation [5], a computational deformation technique to facilitate efficient utilization of computational resources in transpose convolution [6], and a ternary GAN [7] utilizing in-memory VMM engines based on RRAMs (with binary and 2-bit storage capability) and SOT-MRAMs. Moreover, a hybrid CMOS-analog RRAM-based implementation of DCGAN (without the pooling layer), including digital error propagation and weight update units, was also proposed [8].…”
Section: Introduction
confidence: 99%
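The citation statement above rests on the principle that a ReRAM cross-point array computes a vector-matrix multiply (VMM) in place: word-line voltages drive cells whose conductances encode weights, and each bit-line current sums the products by Kirchhoff's current law. A minimal idealized model (function name and ideal-device assumption are mine, not from any cited paper):

```python
import numpy as np

def crossbar_vmm(G, v):
    """Idealized ReRAM crossbar VMM: G[i, j] is the conductance of the
    cell at word-line i / bit-line j, v[i] is the voltage on word-line i.
    Bit-line j collects current i_j = sum_i v[i] * G[i, j], i.e. G^T v.
    Ignores wire resistance, sneak paths, and device non-linearity."""
    return G.T @ v

# Conductance matrix encoding a 2x2 weight matrix, driven by unit voltages.
G = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([1.0, 1.0])
currents = crossbar_vmm(G, v)  # -> [4. 6.]
```

In this model the multiply-accumulate happens in a single analog read step, which is why the cited works map both convolution and deconvolution kernels onto crossbar conductances.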