2021 3rd Novel Intelligent and Leading Emerging Sciences Conference (NILES) 2021
DOI: 10.1109/niles53778.2021.9600555
FPGA Design of High-Speed Convolutional Neural Network Hardware Accelerator

Cited by 2 publications (5 citation statements)
References 10 publications
“…Table 3 presents a performance comparison with related works. The hardware resource utilization, particularly BRAM and DSP, in [35] is quite high compared with that of [37], leading to high power consumption, as reflected by the total on-chip power in [38]. The BRAM and DSP consumed approximately 60% of the total power.…”
Section: Results
confidence: 99%
“…To deploy deep learning hardware accelerators on edge devices, the hardware must be small and lightweight owing to limited resource constraints. However, the hardware footprint of [36] and [38] is too large, running to hundreds of thousands of elements: 324.7 k LUTs and 315.4 k FFs for [36], and 135.5 k FFs for [38] on the VC709. Their BRAM and DSP utilization is also high, so their power consumption is correspondingly high: 27.7 W for [36] and 8.9 W for [38].…”
Section: Results
confidence: 99%