Proceedings of the 2022 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays
DOI: 10.1145/3490422.3502360

Logic Shrinkage

Abstract: FPGA-specific DNN architectures using the native LUTs as independently trainable inference operators have been shown to achieve favorable area-accuracy and energy-accuracy tradeoffs. The first work in this area, LUTNet, exhibited state-of-the-art performance for standard DNN benchmarks. In this paper, we propose the learned optimization of such LUT-based topologies, resulting in higher-efficiency designs than via the direct use of off-the-shelf, hand-designed networks. Existing implementations of this class of…
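To make the idea concrete, here is a minimal, illustrative Python sketch (not the paper's code) of a K-input LUT treated as an independently trainable inference operator, together with a toy shrinkage step that prunes one input and halves the truth table. The class name TrainableLUT, the sign-hardening at inference, and the averaging rule used to collapse table entries are assumptions made for illustration, not the paper's actual training or pruning procedure.

```python
import numpy as np

class TrainableLUT:
    """Toy K-input LUT: one free parameter per truth-table entry."""

    def __init__(self, k, rng=None):
        rng = rng or np.random.default_rng(0)
        self.k = k
        # 2^k real-valued parameters; sign() hardens them to {-1, +1}
        # at inference time, mimicking a trained LUT's truth table.
        self.table = rng.standard_normal(2 ** k)

    def forward(self, x_bits):
        # x_bits: k values in {0, 1}; the bits index the truth table.
        idx = int("".join(str(int(b)) for b in x_bits), 2)
        return np.sign(self.table[idx])

    def shrink(self, drop_input):
        # Prune one input: merge each pair of entries that differ only in
        # that input's bit, collapsing 2^k entries into 2^(k-1).
        pos = self.k - 1 - drop_input  # bit position of the dropped input
        merged = []
        for idx in range(2 ** self.k):
            if not (idx >> pos) & 1:
                partner = idx | (1 << pos)
                merged.append(0.5 * (self.table[idx] + self.table[partner]))
        self.k -= 1
        self.table = np.array(merged)

lut = TrainableLUT(k=4)
print(lut.forward([1, 0, 1, 1]))   # evaluate the 4-input LUT
lut.shrink(drop_input=2)           # prune the third input
print(lut.forward([1, 0, 1]))      # now a 3-input LUT with half the table
```

In the paper's framing, which inputs each LUT keeps is learned rather than fixed by hand; the sketch only shows the mechanical effect of dropping one connection.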


Cited by 5 publications (7 citation statements); references 14 publications.

Citation statements (ordered by relevance):
“…Previous works on soft-logic computing frequently lack the scalability to support the model sizes necessary for ImageNet classification [2,28]. Among the existing solutions, only LUTNet [30] and Logic Shrinkage [31] have reported results on this dataset, albeit with certain constraints.…”
Section: Results (mentioning); confidence: 99%
“…While Binary Neural Network (1b) [30,31] customises and constrains the training process to achieve LUT compatibility, TLMAC derives optimised LUT initialisations from state-of-the-art quantised models directly. … function as both a compute engine and traditional memory [3,7,32].…”
Section: Routing Optimisation (mentioning); confidence: 99%
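The contrast drawn in the statement above, constraining training so the result maps onto LUTs versus deriving LUT contents from an already-quantised model, can be sketched in a few lines. The following is a hypothetical illustration rather than the actual TLMAC flow: the function name, the four-input grouping, and the {0,1} to {-1,+1} activation mapping are assumptions. It simply enumerates a small quantised dot product over every binary input pattern and stores the results as a truth table.

```python
import itertools
import numpy as np

def lut_from_quantised_weights(weights, k=4):
    """Enumerate a k-input binary dot product into a 2^k-entry truth table."""
    assert len(weights) == k
    table = np.empty(2 ** k, dtype=np.int32)
    for idx, bits in enumerate(itertools.product((0, 1), repeat=k)):
        x = np.array(bits) * 2 - 1            # map {0,1} activations to {-1,+1}
        table[idx] = int(np.dot(weights, x))  # quantised (integer) dot product
    return table

# Example: four already-trained integer weights become a 16-entry LUT.
print(lut_from_quantised_weights(np.array([2, -1, 3, 1])))
```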
“…FPGAs are potential solutions to accelerate LLM inference and explore the benefits brought by model compression, which has been proven in previous deep learning models [21,22,39,43,55]. However, efficient LLM inference on FPGAs needs to solve the following challenges….”
Section: Cost (mentioning); confidence: 99%