2020
DOI: 10.3390/electronics9122200

CNN2Gate: An Implementation of Convolutional Neural Networks Inference on FPGAs with Automated Design Space Exploration

Abstract: Convolutional Neural Networks (CNNs) have a major impact on our society because of the numerous services they provide. These services include, but are not limited to, image classification, video analysis, and speech recognition. Recently, the number of research works that utilize FPGAs to implement CNNs has been increasing rapidly. This is due to the lower power consumption and easy reconfigurability offered by these platforms. Because of the research efforts put into topics, such as architecture, synthesis, a…

Cited by 23 publications (11 citation statements)
References 29 publications
“…The Proposed DNN achieved an accuracy rate of 99.6 percent, with a 1.75 percent increase over other similar projects with hardware implementation of the DNN using the XSG. The error rate implementation using (8,6) scenario was 7e-4 with a difference of 28 percent less than similar projects and using 306 fewer hardware components, which represents 30 percent of the FPGA device space. In conclusion, the proposed DNN design successfully reduced the hardware space utilization on FPGA devices while achieving a higher accuracy rate.…”
Section: Discussion (mentioning)
confidence: 92%
“…The facilitation of the hardware implementation of deep-learning-based algorithms is an active research area. Recent efforts include fpga-ConvNet, Caffeine, and CNN2Gate that were developed by Venieris and Bouganis [111], Zhang et al [112], and Ghaffari and Savaria [113], respectively. These frameworks facilitated FPGA prototypes of deep neural networks that were designed using well-known libraries, such as PyTorch and Caffe.…”
Section: Real-time Processing (mentioning)
confidence: 99%
“…In [12], the authors used reinforcement learning to explore optimization of deep neural networks on the ARM-Cortex-A CPUs. Likewise, in [13], the authors used a time-limited reinforcement learning [14] to execute design space exploration for deeply pipelined OpenCL kernels of the convolutional neural networks.…”
Section: Related Work (mentioning)
confidence: 99%
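The statement above refers to reinforcement-learning-driven design space exploration of pipelined CNN kernels. As a rough illustration of that idea, and not CNN2Gate's actual implementation, the sketch below runs a toy epsilon-greedy search over two hypothetical accelerator parameters (processing-element count and SIMD width), scored by an assumed analytical latency/DSP model; the constants LAYER_MACS and DSP_BUDGET are invented for the example.

```python
import random

# Illustrative, simplified analytical model of one CNN layer mapped to an FPGA:
# latency ~ MAC count / parallelism, DSP usage ~ parallelism.
LAYER_MACS = 118_013_952   # assumed MAC count of one conv layer (illustrative)
DSP_BUDGET = 900           # assumed DSP budget of the target device

def evaluate(pe_count, simd_width):
    """Return (latency_cycles, dsps_used) for one candidate design point."""
    dsps = pe_count * simd_width
    latency = LAYER_MACS / dsps
    return latency, dsps

def reward(latency, dsps):
    """Higher is better: fast designs that fit within the DSP budget."""
    if dsps > DSP_BUDGET:
        return -1.0                      # infeasible design point
    return 1.0 / latency

def explore(steps=200, epsilon=0.3, seed=0):
    """Epsilon-greedy search over (pe_count, simd_width), a stand-in for
    the time-limited RL agent described in the citation statement."""
    rng = random.Random(seed)
    pe_choices = [1, 2, 4, 8, 16, 32, 64]
    simd_choices = [1, 2, 4, 8, 16]
    q, counts, best = {}, {}, None
    for _ in range(steps):
        if not q or rng.random() < epsilon:
            point = (rng.choice(pe_choices), rng.choice(simd_choices))
        else:
            point = max(q, key=q.get)    # exploit the best point seen so far
        lat, dsps = evaluate(*point)
        r = reward(lat, dsps)
        counts[point] = counts.get(point, 0) + 1
        q[point] = q.get(point, 0.0) + (r - q.get(point, 0.0)) / counts[point]
        if r > 0 and (best is None or lat < best[1]):
            best = (point, lat, dsps)
    return best

if __name__ == "__main__":
    (pe, simd), lat, dsps = explore()
    print(f"best point: PE={pe}, SIMD={simd}, ~{lat:.0f} cycles, {dsps} DSPs")
```

Real frameworks replace the analytical model with synthesis or performance estimates from the OpenCL/HLS toolchain and explore a far larger parameter space, but the loop structure (propose a design point, score it against resource and latency constraints, bias future proposals toward high-reward points) is the same.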