2018
DOI: 10.1631/fitee.1700789
Recent advances in efficient computation of deep convolutional neural networks

Abstract: Deep neural networks have evolved remarkably over the past few years and they are currently the fundamental tools of many intelligent systems. At the same time, the computational complexity and resource consumption of these networks also continue to increase. This will pose a significant challenge to the deployment of such networks, especially in real-time applications or on resource-limited devices. Thus, network acceleration has become a hot topic within the deep learning community. As for hardware implement…

Cited by 210 publications (98 citation statements)
References 85 publications (108 reference statements)
“…3) The Key Novelty of our Paper Related to NN Study: FPGAs are attractive devices for accelerating NNs, since they represent an intermediate point between the power and performance efficiency of ASICs and the programmability of CPUs and GPUs [67], [68], [69], [70]. One of the key components of FPGAs that directly impacts the performance of FPGA-based NNs is the built-in BRAMs, owing to the high demand of NN computations for parallel data access, as described in detail for recent FPGA-based accelerators in this survey paper [14].…”
Section: B. Recent Related Studies on NNs
confidence: 99%
“…Cheng et al [25], Guo et al [49], Cheng et al [24] and Sze et al [136] surveyed algorithms for DNN compression and acceleration. Of these, Cheng et al [24] briefly evaluated system-level designs for FPGA implementation. Guo et al only surveyed quantisation methods; weight reduction was not mentioned.…”
Section: Introduction
confidence: 99%
“…This paper uses the CNN algorithm; as one of the deep learning models, its multilevel structure of local perception and weight sharing can handle high-dimensional data more accurately and quickly. Owing to its self-learning ability, its results are particularly stable in image and signal recognition, and no additional feature engineering is required [33].…”
Section: Introduction
confidence: 99%
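The local perception and weight sharing mentioned in the statement above can be illustrated with a minimal NumPy sketch (not taken from the cited paper): a single 3×3 kernel, nine parameters in total, is reused at every spatial position, so the layer's parameter count is independent of the input size.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared kernel over every spatial position ('valid' padding).

    The same nine kernel weights are reused at each position -- this is
    the weight sharing that keeps a conv layer's parameter count small.
    Each output value also depends only on a local 3x3 window of the
    input, which is the local perception property.
    """
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.empty((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

# Apply a horizontal-gradient (Sobel-style) kernel to an 8x8 image with a
# vertical edge: the shared kernel responds wherever its window straddles
# the edge, using the same 9 parameters at all 36 output positions.
image = np.zeros((8, 8))
image[:, 4:] = 1.0  # columns 4..7 are bright, giving an edge at column 4
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
response = conv2d_valid(image, sobel_x)
```

With 'valid' padding the 8×8 input yields a 6×6 response map; only output columns whose 3×3 window straddles the edge are nonzero.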