2021
DOI: 10.1109/tcsi.2021.3059882
Hybrid Convolution Architecture for Energy-Efficient Deep Neural Network Processing

Cited by 5 publications (3 citation statements), published in 2021 and 2024.
References 19 publications.
“…DNN accelerators have been developed with various design approaches [7]–[26]. Due to the data-centric property of recent ASIC-based DNN accelerators, in which a significantly large amount of data must be processed and transferred in and out of the accelerator chips, memory plays an important role.…”
Section: DNN Accelerators (citation type: mentioning; confidence: 99%)
“…Accelerating deep neural network processing in edge computing using energy-efficient platforms is an important goal [12–27]. Currently, most object detection and classification models run on graphics processing units.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…Therefore, many lightweight approaches with low power consumption and low computational complexity have emerged recently. A few dedicated neural network accelerators have been implemented on FPGA hardware platforms [12, 14, 17, 21, 23], while several authors proposed ASIC-based neural network accelerators [13, 15, 16, 18, 19, 22]. Samimi et al. [20] proposed a technique based on the residue number system to improve the energy efficiency of deep neural network processing.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
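The residue number system (RNS) named in the excerpt above replaces one wide, carry-propagating datapath with several narrow, independent residue channels, which is why it is attractive for the multiply-accumulate-heavy workloads of DNN accelerators. The sketch below shows generic RNS encode, per-channel MAC, and Chinese-Remainder-Theorem decode in Python; the moduli and the dot-product example are illustrative assumptions, not details taken from Samimi et al. [20].

```python
# Minimal sketch of residue-number-system (RNS) arithmetic, the general
# technique named above. The moduli and the example are illustrative
# assumptions, not details from Samimi et al. [20].

MODULI = (251, 253, 255)  # pairwise coprime; dynamic range = 251*253*255

def to_rns(x, moduli=MODULI):
    # Encode an integer as its residue in each channel.
    return tuple(x % m for m in moduli)

def rns_mac(acc, a, b, moduli=MODULI):
    # Multiply-accumulate performed independently per channel:
    # no carries propagate between the narrow channels.
    return tuple((r + p * q) % m for r, p, q, m in zip(acc, a, b, moduli))

def from_rns(res, moduli=MODULI):
    # Decode with the Chinese Remainder Theorem.
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(res, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return x % M

# Usage: a tiny dot product computed channel-wise in RNS.
weights, acts = [3, 5, 7], [2, 4, 6]
acc = to_rns(0)
for w, a in zip(weights, acts):
    acc = rns_mac(acc, to_rns(w), to_rns(a))
assert from_rns(acc) == sum(w * a for w, a in zip(weights, acts))  # 68
```

Because each channel stays narrow (about 8 bits with these moduli), the adders and multipliers are small and the carry chains short, which is the usual source of the energy savings attributed to RNS datapaths.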