2018 28th International Conference on Field Programmable Logic and Applications (FPL)
DOI: 10.1109/fpl.2018.00024

FINN-L: Library Extensions and Design Trade-Off Analysis for Variable Precision LSTM Networks on FPGAs

Abstract: It is well known that many types of artificial neural networks, including recurrent networks, can achieve a high classification accuracy even with low-precision weights and activations. The reduction in precision generally yields much more efficient hardware implementations in regards to hardware cost, memory requirements, energy, and achievable throughput. In this paper, we present the first systematic exploration of this design space as a function of precision for Bidirectional Long Short-Term Memory (BiLSTM…
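To make the abstract's quantization claim concrete, here is a minimal sketch of symmetric uniform quantization of a weight tensor to a configurable bit width. This is an illustrative stand-in, not the training-time quantization flow FINN-L actually uses; the function name and clipping scheme are assumptions.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric uniform quantization of a float tensor to `bits` bits.

    Illustrative only: quantized-training frameworks like the one FINN-L
    builds on quantize during training, not post hoc as done here.
    """
    levels = 2 ** (bits - 1) - 1          # e.g. bits=2 -> integers in {-1, 0, 1}
    scale = np.max(np.abs(w)) / levels    # map the largest weight to +levels
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale                      # de-quantized values used at inference

w = np.random.randn(128, 128).astype(np.float32)
w2 = quantize_uniform(w, bits=2)
print(np.unique(w2).size)                 # at most 3 distinct values
```

At 2 bits the weights occupy 16x less memory than float32, which is what makes fully on-chip storage of BiLSTM parameters feasible on an FPGA.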

Cited by 57 publications (63 citation statements)
References 37 publications
“…These types of architectures are well suited for constrained-resource designs where high throughput is not needed. The feedback nature of layer processors also makes them well suited for RNNs, as seen in [69].…”
Section: Architectures
confidence: 99%
“…This type of workflow is called High Level Synthesis (HLS). HLS has been a major component of the research done with BNNs on FPGAs [35, 38-40, 54, 56, 63, 68, 69, 72].…”
Section: High Level Synthesis
confidence: 99%
“…Finally, other works, such as [10], [11], extended FINN in the past. The former proposes an extension to the original version of the framework with support for arbitrary precision and more flexibility in the end architecture and target platforms, including hardware cost estimation for given devices.…”
Section: Related Work
confidence: 99%
“…There has been previous work [7, 8, 9, 10] with FPGA-based implementations in which all the weights are stored in on-chip memory, but this is expensive and limits the size of models that can be deployed. When the RNN model is so large that the weights need to be stored in external DRAM, it is not efficient, because the fetched weights are typically used only once for each output computation.…”
Section: Introduction
confidence: 99%
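The trade-off in the last statement can be made concrete with back-of-the-envelope arithmetic: one (Bi)LSTM layer holds roughly 4((I+H)H + H) weights for input size I and hidden size H, so the chosen bit width directly decides whether the model fits in on-chip memory. The helper below is a hypothetical sketch of that calculation, not code from the paper.

```python
def lstm_weight_bits(input_size, hidden_size, bits, bidirectional=True):
    """Rough weight footprint of one (Bi)LSTM layer at a given precision.

    Counts the four gate matrices (input, forget, cell, output) acting on
    the concatenated [input, hidden] vector, plus biases. Illustrative
    arithmetic only; real FPGA mappings add padding and control state.
    """
    params = 4 * ((input_size + hidden_size) * hidden_size + hidden_size)
    if bidirectional:
        params *= 2  # separate forward and backward passes
    return params * bits

# Example: a BiLSTM layer with 128 inputs and 256 hidden units.
for bits in (32, 8, 2):
    kbits = lstm_weight_bits(128, 256, bits) / 1024
    print(f"{bits:2d}-bit weights: {kbits:,.0f} Kbit")
```

For this example layer, float32 weights need about 24 Mbit of storage, while 2-bit weights need roughly 1.5 Mbit, comfortably within the block RAM of mid-range FPGAs and small enough to avoid the once-per-use DRAM fetches the quoted statement describes.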