2021
DOI: 10.1109/jssc.2020.3029235
An Energy-Efficient Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access

Cited by 13 publications (9 citation statements) · References 27 publications
“…[119] proposes a similar strategy, but the sign estimation is done either after representing the weights in ternary format or after applying a sign function, which simplifies the computations while maintaining prediction accuracy. Minkyu Kim et al. [61] exploit the max-pooling layers and adopt a precision-cascading scheme to predict and compute only the maximum value of a convolution operation. This technique, combined with a zero-skipping scheme, efficiently avoids redundant computations without affecting the neuron synapses that contribute most to classification accuracy.…”
Section: Skipping
confidence: 99%
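To make the precision-cascading idea attributed to [61] concrete, below is a minimal Python sketch, not the authors' circuit: the 2x2 window, the 4-bit coarse pass, and all function names are illustrative assumptions. A cheap MSB-only pass predicts which of the four convolution outputs in a max-pooling window will win, and only the predicted winner is recomputed at full precision, with multiplies against zero activations skipped.

```python
import numpy as np

def msb_only(x, bits, full_bits=16):
    # Keep only the top `bits` of a `full_bits` fixed-point value
    # (arithmetic shift preserves the sign).
    shift = full_bits - bits
    return (x >> shift) << shift

def pooled_conv_with_prediction(patches, kernel, msb_bits=4):
    # Coarse pass: MSB-only dot products for all four pooled positions.
    coarse = [int(np.dot(msb_only(p, msb_bits).astype(np.int64),
                         msb_only(kernel, msb_bits).astype(np.int64)))
              for p in patches]
    winner = int(np.argmax(coarse))

    # Fine pass: full precision only for the predicted maximum,
    # with zero-skipping on the activations.
    a = patches[winner].astype(np.int64)
    w = kernel.astype(np.int64)
    nz = a != 0
    return winner, int(np.dot(a[nz], w[nz]))

rng = np.random.default_rng(0)
patches = [rng.integers(-128, 128, 9, dtype=np.int16) for _ in range(4)]
kernel = rng.integers(-128, 128, 9, dtype=np.int16)
print(pooled_conv_with_prediction(patches, kernel))
```

Since the coarse pass can occasionally mispredict the winner, designs of this kind trade a small accuracy loss against the three skipped full-precision convolutions per window.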
“…With respect to the employed precision, the Skipping approximation techniques mainly use high precision (i.e., 16-bit and 32-bit). However, [61] and [94] use lower precision, i.e., 12-bit and 8-bit, respectively. Again, when considering simpler evaluation cases (e.g., LeNet, MNIST, and/or 32-bit precision), Skipping approximation delivers minimal accuracy loss and very high energy gains.…”
Section: Performance Analysis
confidence: 99%
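As a rough illustration of how reduced precision interacts with zero-skipping, here is a small Python sketch; the uniform quantizer, the layer size, and the MAC accounting are assumptions for illustration, not details of the cited 12-bit and 8-bit designs. Coarser quantization flushes more small post-ReLU activations to exactly zero, which exposes additional skip opportunities.

```python
import numpy as np

def quantize(x, bits):
    # Uniform symmetric quantization of float activations to `bits` bits.
    scale = (2 ** (bits - 1) - 1) / max(np.max(np.abs(x)), 1e-12)
    return np.round(x * scale).astype(np.int32)

def zero_skip_mac(acts, weights, bits):
    # Dot product that skips multiplies whose quantized activation is zero.
    q = quantize(acts, bits)
    nz = q != 0
    macs_done = int(nz.sum())          # multiplies actually performed
    macs_skipped = q.size - macs_done  # multiplies avoided by zero-skipping
    acc = int(np.dot(q[nz].astype(np.int64), weights[nz]))
    return acc, macs_done, macs_skipped

rng = np.random.default_rng(1)
acts = np.maximum(rng.normal(size=1024), 0.0)  # sparse post-ReLU activations
weights = rng.integers(-128, 128, size=1024, dtype=np.int64)
for bits in (12, 8):
    _, done, skipped = zero_skip_mac(acts, weights, bits)
    print(f"{bits}-bit: {done} MACs done, {skipped} skipped")
```

At 8 bits, the band of values that rounds to zero is wider than at 12 bits, so the skip count grows; this is one way lower precision can compound the energy gains of skipping.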
“…Many energy-efficient hardware accelerators have been proposed in recent years to reduce the power consumption and improve the speed of DCNN computing. Accelerators based on application-specific integrated circuits (ASICs) [7,8,9,10,11,12,13,14,15,16,17,18,19] and field-programmable gate arrays (FPGAs) [20,21,22,23,24,25,26] have achieved low latency and high efficiency in CNN computing. Two classic CNNs, AlexNet and VGG, were demonstrated with excellent performance by earlier accelerators, including UNPU [7], DSIP [12], Eyeriss [13], and DNPU [18].…”
Section: Introduction
confidence: 99%
“…Two classic CNNs, AlexNet and VGG, were demonstrated with excellent performance by earlier accelerators, including UNPU [7], DSIP [12], Eyeriss [13], and DNPU [18]. Moreover, some works improved accelerator performance by reducing off-chip memory access [8,27,28].…”
Section: Introduction
confidence: 99%
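To see why reducing off-chip access matters, a back-of-the-envelope traffic model helps; the layer dimensions, the 16-bit datatype, and the no-reuse baseline below are hypothetical, not taken from [8,27,28]. External DRAM accesses cost far more energy than on-chip SRAM reads, so keeping weights and feature-map tiles resident on chip directly shrinks the dominant energy term.

```python
# Back-of-the-envelope DRAM traffic for one conv layer, with and
# without on-chip buffering.  All dimensions are hypothetical.
def dram_bytes(H, W, C_in, C_out, K, elem_bytes=2, on_chip_reuse=False):
    ifmap = H * W * C_in * elem_bytes            # input feature map
    ofmap = H * W * C_out * elem_bytes           # output feature map
    weights = K * K * C_in * C_out * elem_bytes  # kernel weights
    if on_chip_reuse:
        # Each tensor crosses the external interface exactly once.
        return ifmap + ofmap + weights
    # Naive baseline: weights are re-fetched for every output row.
    return ifmap + ofmap + weights * H

layer = dict(H=56, W=56, C_in=128, C_out=128, K=3)
print(f"no reuse : {dram_bytes(**layer) / 1e6:.1f} MB")
print(f"buffered : {dram_bytes(**layer, on_chip_reuse=True) / 1e6:.1f} MB")
```

Even this crude model shows an order-of-magnitude gap in external traffic, which is the motivation behind low-external-memory-access accelerator designs.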