2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2015.7178127
An energy-efficient memory-based high-throughput VLSI architecture for convolutional networks

Abstract: In this paper, an energy-efficient, memory-intensive, and high-throughput VLSI architecture is proposed for convolutional networks (C-Net) by employing compute memory (CM) [1], where computation is deeply embedded into the memory (SRAM). Behavioral models incorporating CM's circuit non-idealities and energy models in 45 nm SOI CMOS are presented. System-level simulations using these models demonstrate that a probability of handwritten digit recognition Pr > 0.99 can be achieved using the MNIST database [2], …
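The abstract's central claim is that recognition probability can stay high even when the compute fabric is noisy. A minimal Monte Carlo sketch of that idea, using a hypothetical template-matching classifier whose dot-product readout is perturbed by additive Gaussian noise (all names, dimensions, and noise levels here are illustrative assumptions, not the paper's models):

```python
import numpy as np

# Illustrative stand-in for circuit non-idealities: additive Gaussian noise
# on the dot-product scores of a template-matching classifier.
rng = np.random.default_rng(0)

def make_templates(n_classes=10, dim=64):
    """Random class templates standing in for stored filter weights."""
    return rng.standard_normal((n_classes, dim))

def recognition_probability(templates, noise_std, n_trials=2000):
    """Fraction of trials where the noisy readout still picks the
    correct class (a stand-in for the abstract's Pr)."""
    n_classes, dim = templates.shape
    correct = 0
    for _ in range(n_trials):
        label = int(rng.integers(n_classes))
        x = templates[label] + 0.1 * rng.standard_normal(dim)  # clean input
        scores = templates @ x                                 # ideal compute
        scores += noise_std * rng.standard_normal(n_classes)   # analog noise
        correct += int(np.argmax(scores) == label)
    return correct / n_trials

templates = make_templates()
p_clean = recognition_probability(templates, noise_std=0.0)
p_noisy = recognition_probability(templates, noise_std=5.0)
print(p_clean, p_noisy)
```

The point of such a behavioral model is that inference kernels with large decision margins tolerate substantial readout noise, which is what allows computation in a low-SNR analog domain in the first place.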

Cited by 33 publications (29 citation statements)
References 5 publications
“…Setting c = MPC yields MPC (dB) = 10 log10(MPC^2), indicating that c is a decreasing function of MPC. Thus, (14) has the same form as (1) with an additional (last-term) clipping noise factor.…”
Section: The Minimum Precision Criterion (MPC)
confidence: 99%
“…To enable sparse distributed memory, compute memory has been proposed as a viable implementation architecture [46]. Compute memory [44], [45] is an in-memory processing architecture that implements both memory and processing in a single architecture in order to completely eliminate the processor-memory interface. The compute memory architecture implements inference algorithms in the periphery of the memory array, and does not modify the core bit-cell array, thus maintaining the storage density.…”
Section: Secure Microarchitectures
confidence: 99%
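The statement above describes the essence of compute memory: the whole dot product is formed inside the memory array in one access, instead of shipping stored words across a processor-memory interface. A hypothetical behavioral sketch of one such bitline readout, where the summed analog value is observed through low-SNR Gaussian noise (the names and noise model are illustrative assumptions, not the paper's circuit design):

```python
import numpy as np

# Behavioral model of an in-memory dot product: weights stay in the array,
# each cell contributes w_i * x_i worth of charge to a shared bitline, and
# the summed value is read out with multiplicative Gaussian noise.
rng = np.random.default_rng(1)

def in_memory_dot(weights_col, x, sigma=0.02):
    """Model one bitline: ideal dot product plus low-SNR readout noise."""
    ideal = float(np.dot(weights_col, x))
    return ideal + sigma * abs(ideal) * rng.standard_normal()

W = rng.integers(0, 2, size=(64, 16)).astype(float)  # 16 columns of stored bits
x = rng.random(64)                                   # wordline activations

# One "access" per column yields a full matrix-vector product.
y = np.array([in_memory_dot(W[:, j], x) for j in range(W.shape[1])])
exact = W.T @ x
print(np.max(np.abs(y - exact) / np.abs(exact)))  # small relative error
```

Because the bit-cell array itself is unmodified and only the periphery computes, storage density is preserved while the per-operand data movement of a conventional read path disappears.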
“…Recently, we proposed Compute Memory (CM) [2], [3], an in-memory computing architecture where both computation and storage are implemented in a low-swing/low signal-to-noise ratio (SNR) domain, thereby eliminating the processor-memory interface completely and providing a 5.0× energy reduction and 4.9× throughput enhancement for a pattern recognition application in a 45 nm CMOS process. CM preserves the storage density and the conventional SRAM's read/write functionality, and is well-suited for inference kernels such as SDM which can compensate for non-deterministic hardware operations.…”
Section: Introduction
confidence: 99%