2020 IEEE 14th Dallas Circuits and Systems Conference (DCAS)
DOI: 10.1109/dcas51144.2020.9330667
A Framework for Modeling, Optimizing, and Implementing DNNs on FPGA Using HLS

Cited by 4 publications (1 citation statement). References 12 publications.
“…Embedded Devices: We compare the power efficiency of COSMO with the state-of-the-art implementations of DNN inference on NVIDIA Tegra Jetson X1 GPU [65], FPGA [78], and Edge-TPU [3]. For the inference task on AlexNet, Tegra X1 (FPGA) achieves a power efficiency of 45 images/s/W (16 images/s/W), while COSMO achieves 506 images/s/W.…”
Section: COSMO vs. Previous Accelerators
Confidence: 99%
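The quoted figures imply sizable efficiency gaps between COSMO and the two embedded baselines. A quick sanity-check sketch of the improvement factors, using only the numbers stated in the citation above:

```python
# Power-efficiency figures (images/s/W) quoted in the citation statement.
tegra_x1 = 45   # NVIDIA Tegra Jetson X1 GPU
fpga = 16       # FPGA implementation
cosmo = 506     # COSMO

# Improvement factor of COSMO over each baseline.
vs_gpu = cosmo / tegra_x1
vs_fpga = cosmo / fpga
print(f"COSMO vs Tegra X1: {vs_gpu:.1f}x")  # ~11.2x
print(f"COSMO vs FPGA:     {vs_fpga:.1f}x")  # ~31.6x
```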