2020
DOI: 10.48550/arxiv.2005.03775
Preprint

Optimizing Temporal Convolutional Network inference on FPGA-based accelerators

Abstract: Convolutional Neural Networks are extensively used in a wide range of applications, commonly including computer vision tasks such as image and video classification, recognition, and segmentation. Recent research results demonstrate that multilayer (deep) networks involving mono-dimensional convolutions and dilation can be used effectively for time-series and sequence classification and segmentation, as well as for sequence-modelling tasks. These structures, commonly referred to as Temporal Convolutional Networks…
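The building block the abstract refers to is the dilated mono-dimensional (1D) convolution, whose receptive field grows exponentially when layers with increasing dilation are stacked. The sketch below is a minimal NumPy illustration of a dilated causal 1D convolution over a single channel; it is not the paper's FPGA accelerator or its data layout, and the function name dilated_causal_conv1d is ours for illustration only.

```python
import numpy as np

def dilated_causal_conv1d(x, weights, dilation=1):
    """Dilated causal 1D convolution over a single channel.

    x        : (T,) input sequence
    weights  : (K,) filter taps
    dilation : spacing between taps; receptive field is (K - 1) * dilation + 1
    Output y[t] depends only on x[t], x[t - d], ..., x[t - (K - 1) * d] (causal).
    """
    T, K = len(x), len(weights)
    pad = (K - 1) * dilation                       # left-pad so output length == input length
    x_pad = np.concatenate([np.zeros(pad, dtype=x.dtype), x])
    y = np.zeros(T, dtype=np.result_type(x, weights))
    for t in range(T):
        for k in range(K):
            y[t] += weights[k] * x_pad[pad + t - k * dilation]
    return y

# Stacking layers with dilations 1, 2, 4, ... makes the receptive field grow
# exponentially with depth, which is what TCNs exploit on long sequences.
x = np.random.randn(16).astype(np.float32)
w = np.array([0.25, 0.5, 0.25], dtype=np.float32)
y = dilated_causal_conv1d(x, w, dilation=2)
print(y.shape)  # (16,)
```

In an actual TCN, several such convolutions are applied per layer across multiple channels and wrapped in residual blocks; the per-output-sample independence visible in the inner loop is what makes the operation amenable to the kind of parallel FPGA mapping the paper targets.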

Cited by 1 publication (1 citation statement)
References 28 publications (49 reference statements)
“…For sparse and hybrid DNNs, Huang et al [89] propose a configurable inference engine capable of processing different sizes of sparse DNNs, while HybridDNN [90] has a hybrid architecture composed of spatial/Winograd convolution processing elements. Additionally, Carreras et al [91] present an enriched architectural template supporting efficient TCNs, together with an algorithm for optimal execution/scheduling of data transfers to boost the implementation performance.…”
Section: FPGA-based (mentioning)
Confidence: 99%