2019
DOI: 10.1109/tcsi.2019.2921714
A Real-Time 17-Scale Object Detection Accelerator With Adaptive 2000-Stage Classification in 65 nm CMOS

Cited by 5 publications (2 citation statements)
References 21 publications
“…Accelerating deep neural network processing in edge computing using energy-efficient platforms is an important goal [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. Currently, most object detection and classification models are carried out in graphics processing units.…”
Section: Introduction (mentioning)
confidence: 99%
“…Therefore, many lightweight approaches with low power consumption and low computational cost have emerged recently. A few dedicated neural network accelerators have been implemented on FPGA hardware platforms [12, 14, 17, 21, 23], while several authors proposed ASIC-based neural network accelerators [13, 15, 16, 18, 19, 22]. Samimi et al. [20] proposed a technique based on the residue number system to improve the energy efficiency of deep neural network processing.…”
Section: Introduction (mentioning)
confidence: 99%