2019
DOI: 10.1016/j.neucom.2018.09.038

Recent advances in convolutional neural network acceleration

Abstract: In recent years, convolutional neural networks (CNNs) have shown great performance in various fields such as image classification, pattern recognition, and multimedia compression. Two of their key properties, local connectivity and weight sharing, reduce the number of parameters and increase processing speed during training and inference. However, as the dimension of the data becomes higher and CNN architectures become more complicated, the end-to-end approach or the combined manner of CNN is computation…
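A minimal sketch of the parameter saving that local connectivity and weight sharing provide, using assumed layer sizes (a 32×32 RGB input and 64 output channels, not figures from the paper): a 3×3 convolution needs only a few thousand shared weights, whereas a fully connected mapping to the same output would need hundreds of millions.

```python
# Illustrative parameter count only; the layer sizes below are assumptions,
# not taken from the surveyed paper.
in_ch, out_ch, k = 3, 64, 3   # input channels, output channels, kernel size
h = w = 32                    # spatial resolution of the feature map

# Local connectivity + weight sharing: one k x k kernel per (in, out) channel pair.
conv_params = out_ch * in_ch * k * k + out_ch

# Full connectivity: every input pixel connects to every output activation.
dense_params = (in_ch * h * w) * (out_ch * h * w) + out_ch * h * w

print(f"conv layer : {conv_params:,} parameters")    # 1,792
print(f"dense layer: {dense_params:,} parameters")   # 201,392,128
```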

Cited by 348 publications (165 citation statements)
References 106 publications (113 reference statements)
“…Availability of large training data and hardware advancements are the factors that contributed to the advancement of CNN research. But the main driving forces that have accelerated the research and given rise to CNN-based image classification tasks are parameter optimization strategies and new architectural ideas [41], [44], [100]. The main breakthrough in CNN performance was brought by AlexNet, which showed exemplary performance [26] in the 2012 ILSVRC (reducing the error rate from 25.8 to 16.4) compared to conventional CV techniques [26].…”
Section: Rise of CNN: 2012-2014
confidence: 99%
“…In particular, in general-purpose solutions based on a microcontroller, the limited available memory constrains the complexity of the network, with a possible impact on the accuracy of the system [7]. Likewise, microcontroller-based systems feature the worst trade-off between power consumption and timing performance [8].…”
Section: Introduction
confidence: 99%
“…In particular, field-programmable gate arrays (FPGAs) represent an interesting trade-off between cost, flexibility, and performance [11], especially for applications whose architectures have been changing too rapidly to rely on application-specific integrated circuits (ASICs) and whose production volumes might not be sufficient. At the same time, FPGAs offer high flexibility, which permits the implementation of different models with a high degree of parallelism [8] and the possibility of customizing the architecture for a specific application. The aim of this paper is to investigate the use of custom FPGA-based hardware accelerators to realize a CNN-based KWS system, analysing their performance in terms of power consumption, number of hardware resources, accuracy, and timing.…”
Section: Introduction
confidence: 99%
“…For instance, the acceleration of deep neural networks, alongside advanced parallel computing, faster algorithms, cloud computing, and distributed deep learning systems, presents an opportunity for 5G to build intelligence into its systems to deliver high throughput and ultra-low latency. There have been some recent efforts in deep neural network acceleration [43]. Deep neural network acceleration can take place at three levels: the architecture level, the computation level, and the implementation level.…”
Section: Future Research Directions
confidence: 99%