Proceedings of the Neuromorphic Computing Symposium 2017
DOI: 10.1145/3183584.3183619
FPGA based cellular neural network optimization

Cited by 2 publications (2 citation statements)
References 14 publications
“…The shared data are transferred from one computation unit to the next in a chain mode. Liu et al 27 adopted a similar design. The above accelerators can only accelerate specific neural networks, which lack versatility.…”
Section: Related Work
confidence: 99%
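The chain-mode data sharing described in the statement above can be sketched in a few lines: one shared input is handed from unit to unit so it is fetched from memory only once, while each unit produces its own partial result. This is a minimal illustrative model, not the cited accelerator's implementation; the function and unit names are hypothetical.

```python
# Minimal sketch of chain-mode data sharing between computation units.
# Each unit computes on its own weights, then the shared input is handed
# to the next unit in the chain, so it is loaded from memory only once.
# All names here are illustrative, not from the cited accelerators.

def chain_mode(shared_data, units):
    """Pass shared_data down a chain of computation units.

    units: list of callables; each returns a partial result.
    """
    results = []
    for unit in units:
        results.append(unit(shared_data))
        # In hardware, the data word would be latched into the next
        # unit's register here; in software the reference is reused.
    return results

# Usage: three units scaling the same shared input by different weights.
outs = chain_mode(10, [lambda x, w=w: x * w for w in (1, 2, 3)])
# outs == [10, 20, 30]
```

The design point the quoted statement makes is that this chaining is baked into the datapath for one specific network topology, which is why such accelerators do not generalize.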
“…Studies 20–22 design computation units with different data precisions, and works 23,24 improve the frequency of Digital Signal Processing (DSP) units. Researchers 25–27 accelerate the inference of Convolutional Neural Networks (CNNs) by improving the parallelism of computation and dataflow. Studies 28–31 explore deploying DNN accelerators on one or more FPGAs with system-level optimization.…”
Section: Introduction
confidence: 99%
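The "parallelism of computation and dataflow" mentioned above usually means unrolling a convolution loop so that several multiply-accumulate lanes consume the same input patch in one cycle. The sketch below models output-channel unrolling by a factor P in plain NumPy; the function name, shapes, and P are illustrative assumptions, not drawn from refs 25–27.

```python
# Sketch of output-channel parallelism for CNN inference: P MAC lanes
# process P output channels per "cycle", all reading the same input
# patch (shared dataflow). Illustrative only, not a cited design.
import numpy as np

def conv2d_parallel(x, w, P=4):
    """2-D valid convolution with the output-channel loop unrolled by P.

    x: (H, W) input feature map; w: (C_out, K, K) kernels.
    """
    C_out, K, _ = w.shape
    H, W = x.shape
    out = np.zeros((C_out, H - K + 1, W - K + 1))
    for c0 in range(0, C_out, P):          # P channels per "cycle"
        lanes = w[c0:c0 + P]               # weights for the parallel lanes
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                patch = x[i:i + K, j:j + K]
                # All P lanes consume the same patch simultaneously.
                out[c0:c0 + P, i, j] = (lanes * patch).sum(axis=(1, 2))
    return out
```

On an FPGA, the inner broadcast would map to P hardwired MAC units fed from one line buffer; larger P trades DSP and routing resources for throughput.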