2019 Conference on Design and Architectures for Signal and Image Processing (DASIP) 2019
DOI: 10.1109/dasip48288.2019.9049213
CNN hardware acceleration on a low-power and low-cost APSoC

Cited by 8 publications (2 citation statements); references 5 publications.
“…This is the smallest and most inexpensive device in the Zynq-7000 SoC family. In [24], Paolo Meloni et al. presented a CNN inference accelerator for compact and cost-optimized devices. This implementation uses fixed-point arithmetic to process lightweight CNN architectures, achieving a power efficiency between 2.49 and 2.98 GOPS/W.…”
Section: Low-power
confidence: 99%
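The citation statement above highlights fixed-point processing as the key to the accelerator's power efficiency. As a rough illustration of what fixed-point quantization of CNN weights involves, here is a minimal sketch; the Q8.8 format (16-bit words, 8 fractional bits) is an illustrative assumption, not the configuration reported in the cited paper.

```python
# Sketch of fixed-point quantization, as commonly used in low-power
# CNN accelerators. The word width and fractional bits are assumptions
# for illustration only.

def to_fixed(x: float, frac_bits: int = 8, word_bits: int = 16) -> int:
    """Convert a float to a saturating two's-complement fixed-point value."""
    scaled = round(x * (1 << frac_bits))
    lo = -(1 << (word_bits - 1))          # most negative representable value
    hi = (1 << (word_bits - 1)) - 1       # most positive representable value
    return max(lo, min(hi, scaled))       # saturate on overflow

def to_float(q: int, frac_bits: int = 8) -> float:
    """Recover the approximate real value of a fixed-point number."""
    return q / (1 << frac_bits)

# Example: a convolution weight of 0.7231 stored in Q8.8
q = to_fixed(0.7231)
print(q, to_float(q))  # the reconstruction differs only by quantization error
```

Hardware multipliers then operate on these small integers instead of floating-point units, which is what makes such designs attractive on compact, cost-optimized SoCs.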
“…Traditional CPU designs are challenged in meeting the computational requirements of big data analytics applications. Accelerators for such applications on Field-Programmable Gate Arrays (FPGAs) have become an attractive solution due to their massive parallelism, low power consumption, and cost-efficiency [1] [2]. Unfortunately, effective external DRAM memory bandwidth and access latency have become the bottleneck in such accelerators [3] [4].…”
Section: Introduction
confidence: 99%