2019
DOI: 10.2197/ipsjtsldm.12.22
Parallelism-flexible Convolution Core for Sparse Convolutional Neural Networks on FPGA

Abstract: Recent CNN accelerators fall short of their peak performance because they fail to maximize parallel computation in every convolutional layer; the exploitable parallelism varies throughout the CNN. Furthermore, exploiting multiple forms of parallelism may reduce the ability to skip calculations on zero weights. This paper proposes a convolution core for sparse CNNs that efficiently leverages multiple types of parallelism and weight sparsity to achieve high performance. It alternates dataflow and scheduling of parallel c…
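The calculation-skip idea in the abstract — avoiding multiply-accumulates for zero weights in a sparse kernel — can be sketched in software. The following is a minimal illustrative example, not the paper's hardware design; the function name and structure are assumptions for clarity.

```python
import numpy as np

def sparse_conv2d(ifmap, weights):
    """Naive valid-mode 2-D convolution that skips zero weights.

    Illustrative software analogue of hardware calculation-skip:
    only nonzero weight positions generate multiply-accumulates.
    ifmap:   (H, W) input feature map
    weights: (K, K) kernel, assumed sparse (many zeros)
    """
    H, W = ifmap.shape
    K, _ = weights.shape
    out = np.zeros((H - K + 1, W - K + 1))
    # Pre-extract nonzero weight positions once; zero weights
    # then cost no work in the inner loop below.
    nz = [(r, c, weights[r, c])
          for r in range(K) for c in range(K)
          if weights[r, c] != 0]
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            acc = 0.0
            for r, c, w in nz:  # calculation-skip: nonzeros only
                acc += w * ifmap[y + r, x + c]
            out[y, x] = acc
    return out
```

With a 3×3 kernel that has a single nonzero weight, the inner loop performs one multiply-accumulate per output pixel instead of nine — the kind of saving a sparse convolution core exploits, while the paper's contribution is sustaining this skip across multiple parallelism modes.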

Cited by 3 publications (1 citation statement). References 25 publications.
“…The convolution layer is the core part of the whole convolutional neural network [15]. The shallow convolution layer can only extract low-level features such as edges, lines, and angles.…”
Section: Convolution Layer (mentioning)
confidence: 99%