2023
DOI: 10.21203/rs.3.rs-2679691/v1
Preprint
Design optimization for high-performance computing using FPGA

Abstract: Reconfigurable architectures like Field Programmable Gate Arrays (FPGAs) have been used for accelerating computations in several domains because of their unique combination of flexibility, performance, and power efficiency. However, FPGAs have not been widely used for high-performance computing, primarily because of their programming complexity and difficulties in optimizing performance. We optimize Tensil AI's open-source inference accelerator for maximum performance using ResNet20 trained on CIFAR in this pa…
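The abstract concerns tuning an FPGA inference accelerator for ResNet-20 on CIFAR. As a generic illustration only (not the paper's method or Tensil's API), a first step when sizing such an accelerator is back-of-the-envelope workload accounting; the hypothetical helper below counts multiply-accumulate (MAC) operations for one convolution layer.

```python
def conv2d_macs(h, w, c_in, c_out, k, stride=1, padding=1):
    """Multiply-accumulate count for a single 2-D convolution layer.

    h, w      -- input feature-map height and width
    c_in/out  -- input and output channel counts
    k         -- square kernel size
    """
    out_h = (h + 2 * padding - k) // stride + 1
    out_w = (w + 2 * padding - k) // stride + 1
    # One MAC per kernel element, per input channel, per output position/channel.
    return out_h * out_w * c_out * c_in * k * k

# First layer of ResNet-20 on CIFAR-10: 32x32x3 input, sixteen 3x3 filters.
print(conv2d_macs(32, 32, 3, 16, 3))  # → 442368
```

Summing this over all layers gives the total MACs per inference, which, divided by the accelerator's sustained MACs per second, bounds achievable throughput.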

Cited by 2 publications (1 citation statement)
References 14 publications
“…One major difference between Tensil AI and Nengo is their focus. Tensil AI aims to accelerate machine learning inference, while Nengo is geared towards building large-scale neural networks [22] [7] [9] [13].…”
Section: Open-source ML Inference Accelerators
Confidence: 99%