2017
DOI: 10.1109/tvlsi.2017.2654298

VLSI Implementation of Deep Neural Network Using Integral Stochastic Computing

Abstract: The hardware implementation of deep neural networks (DNNs) has recently received tremendous attention: many applications in fact require high-speed operations that suit a hardware implementation. However, numerous elements and complex interconnections are usually required, leading to a large area occupation and copious power consumption. Stochastic computing has shown promising results for low-power area-efficient hardware implementations, even though existing stochastic algorithms require long streams that ca…

Cited by 208 publications (109 citation statements)
References 34 publications
“…The origins of stochastic computing lie in the observation that time series data of stochastic spike trains in the brain could be modeled by stochastic jumps from ground to V dd in a logic circuit [58][59][60]. It is no surprise, then, that neural network structures have been implemented successfully and energy efficiently in recent stochastic computing work [61][62][63][64]. Rather than carrying out high level arithmetic and logic operations to "theoretically predict" a neural network's output, stochastic computing implements neuromorphic models of the network in CMOS circuitry.…”
Section: Application to Neural Network
confidence: 99%
“…[figure residue: input indices x[1]–x[4] for CONV and FC layers] For CONV layers, the number of inputs for the inner-product computation is smaller than that for FC layers.…”
Section: Proposed SC-based DNN
confidence: 99%
“…SC is inherently an approximate computing technique [15], [52], and there have been disputes about the suitability of SC for precise computing applications [3]. On the other hand, the DNN inference engine is essentially an approximate computing application.…”
Section: Introduction
confidence: 99%
“…Notice that for either the unipolar or the bipolar coding format, the represented number lies in [0, 1] or [−1, 1], respectively. To represent a number beyond this range, a pre-scaling operation [21] or an integer bit-stream based representation [22] can be used to relax this constraint.…”
Section: B. Stochastic Computing
confidence: 99%
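The coding constraint described above can be illustrated in software. Below is a minimal Python sketch (not from the paper; the function names `unipolar_encode` and `integer_encode` are hypothetical) of unipolar stochastic streams, multiplication via a bitwise AND, and an integer (integral) stream that represents a value outside [0, 1] by summing several unipolar streams, in the spirit of the integer bit-stream representation cited as [22]:

```python
import random

random.seed(0)

def unipolar_encode(x, length):
    """Encode x in [0, 1] as a Bernoulli bit-stream with P(bit = 1) = x."""
    return [1 if random.random() < x else 0 for _ in range(length)]

def unipolar_decode(stream):
    """Estimate the encoded value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

def sc_multiply(a_stream, b_stream):
    """Multiply two independent unipolar streams with a bitwise AND gate."""
    return [a & b for a, b in zip(a_stream, b_stream)]

def integer_encode(x, m, length):
    """Integer stream for x in [0, m]: sum of m independent unipolar
    streams, each encoding x/m. Elements lie in {0, ..., m}."""
    parts = [unipolar_encode(x / m, length) for _ in range(m)]
    return [sum(bits) for bits in zip(*parts)]

N = 4096
a, b = 0.5, 0.25
prod = unipolar_decode(sc_multiply(unipolar_encode(a, N),
                                   unipolar_encode(b, N)))
# prod approximates a * b = 0.125, within stochastic noise

x = 1.7  # outside [0, 1]; an integer stream with m = 2 covers [0, 2]
est = sum(integer_encode(x, 2, N)) / N
# est approximates 1.7, within stochastic noise
```

Note the accuracy–latency trade-off the abstract alludes to: the estimation error shrinks only as 1/sqrt(N), which is why conventional stochastic algorithms need long streams.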