2023
DOI: 10.1145/3547141
On the RTL Implementation of FINN Matrix Vector Unit

Abstract: FPGA-based accelerators are becoming increasingly popular for deep neural network inference due to their ability to scale performance with an increasing degree of specialization, whether through dataflow architectures or custom data-type precision. To lower the barrier for software engineers and data scientists to adopt FPGAs, C++- and OpenCL-based design entries with high-level synthesis (HLS) have been introduced. These provide a higher level of abstraction than register-transfer level (RTL) design. HLS offers f…

Cited by 4 publications (2 citation statements)
References 34 publications
“…or third-party libraries such as Larq (“Larq | Binarized Neural Network development,” 2022) and FINN (Alam et al., 2022), respectively.…”
Section: Quantization / BNNs
Confidence: 99%
“…Such an approach is not completely novel [6,7]. However, previous work mostly focused on using high-level synthesis (HLS) tools, which can produce less optimal results [8]. We present an alternative approach of developing such hardware in the Chisel Hardware Construction Language (HCL) [9].…”
Section: Introduction
Confidence: 99%