2020 IEEE 28th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)
DOI: 10.1109/fccm48280.2020.00071
High-Throughput DNN Inference with LogicNets

Cited by 3 publications (2 citation statements)
References 2 publications
“…Therefore the depth of the network remains unchanged. Other task-specific neural network frameworks optimized for FPGA mapping include LUTnet [11] and LogicNets [12]. These frameworks advance the concept of using the FPGA LUTs to implement 2-input XNORs between weights and activations, as used in binary networks, in order to exploit the capabilities of the multi-input LUTs.…”
Section: Model-specific Neural Architectures
confidence: 99%
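The citation statement above refers to the XNOR trick from binarized networks that LUTNet and LogicNets build upon: with weights and activations restricted to {-1, +1} and encoded as single bits, each multiply becomes a 2-input XNOR, and a dot product reduces to a popcount. A minimal sketch (an illustrative assumption, not code from the cited papers):

```python
def binary_dot(w_bits: int, a_bits: int, n: int) -> int:
    """Dot product of two n-element {-1,+1} vectors packed as bits.

    Encoding: bit 1 represents +1, bit 0 represents -1.
    XNOR yields 1 exactly where the signs agree, so the dot product
    is (+1 per match) + (-1 per mismatch) = 2*matches - n.
    """
    xnor = ~(w_bits ^ a_bits) & ((1 << n) - 1)  # 1 where w and a agree
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n

# w = [+1, -1, +1] -> 0b101, a = [+1, +1, +1] -> 0b111
print(binary_dot(0b101, 0b111, 3))  # two matches, one mismatch -> 1
```

Each XNOR-plus-popcount unit maps naturally onto a single multi-input FPGA LUT, which is the capability these frameworks exploit.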
“…DNNs designed and trained in this manner result in fast and efficient FPGA implementations that can fulfill the performance requirements for extreme-throughput applications. Extending on our abstract in [8], this paper makes the following contributions:…”
Section: Introduction
confidence: 97%