18th International Parallel and Distributed Processing Symposium, 2004. Proceedings.
DOI: 10.1109/ipdps.2004.1303135
Analysis of high-performance floating-point arithmetic on FPGAs


Cited by 98 publications (74 citation statements)
References 2 publications
“…Hosting such applications with high precision requirements on FPGAs is an active research area, e.g. [15]. Nevertheless, integer-based arithmetic requires significantly less area to implement and runs significantly faster than IEEE formats [13], and some FPGA-targeting compilers do not support floating-point operations.…”
Section: Computational Transformation (mentioning)
confidence: 99%
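
The transformation this excerpt alludes to, replacing IEEE floating point with integer (fixed-point) arithmetic, can be shown with a minimal sketch. The Q16.16 format and helper names below are my own illustration, not taken from the cited papers:

/* A minimal fixed-point sketch (illustration only, not from the cited
 * papers) of the integer-based arithmetic the excerpt has in mind.
 * A Q16.16 value stores x as the 32-bit integer round(x * 2^16), so
 * addition is a single integer add and multiplication is one integer
 * multiply plus a shift -- far cheaper on an FPGA than an IEEE unit. */
#include <stdint.h>
#include <stdio.h>

typedef int32_t q16_16;   /* 16 integer bits, 16 fractional bits */

static q16_16 q_from_double(double x) { return (q16_16)(x * 65536.0 + (x >= 0 ? 0.5 : -0.5)); }
static double q_to_double(q16_16 a)   { return (double)a / 65536.0; }

static q16_16 q_add(q16_16 a, q16_16 b) { return a + b; }   /* one integer adder */

static q16_16 q_mul(q16_16 a, q16_16 b)                     /* one multiplier plus a shift;   */
{                                                           /* assumes arithmetic right shift */
    return (q16_16)(((int64_t)a * (int64_t)b) >> 16);
}

int main(void)
{
    q16_16 a = q_from_double(3.25), b = q_from_double(-1.5);
    printf("a + b = %f\n", q_to_double(q_add(a, b)));   /* 1.750000  */
    printf("a * b = %f\n", q_to_double(q_mul(a, b)));   /* -4.875000 */
    return 0;
}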
“…An example of the simplest double-precision FPU from the Tensilica design library requires on the order of 150,000 gates [36]. Even though FPUs support multiplication as well as addition, and would therefore be partially unused in a distributed summation, floating-point multiplication is less complicated to implement than addition [37]. Verification is non-trivial for even a circuit of this size and complexity, and this implementation is among the simplest available.…”
Section: B. Applicability to Network Operations (mentioning)
confidence: 99%
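
Why multiplication is the cheaper of the two operations can be seen from the data path each one needs. The toy format below is purely illustrative (a simplification of my own, not the Tensilica FPU or the design analysed in the paper): multiplication only multiplies the mantissas, adds the exponents, and renormalizes by at most one bit, whereas addition also needs an exponent comparison, a variable alignment shift, and renormalization.

/* Toy, unpacked floating-point format -- illustration only -- showing
 * why multiplication needs less hardware than addition.  Mantissas
 * carry an explicit leading 1 in bit 23 (a normalized 24-bit
 * significand); rounding, subnormals and the subtraction/cancellation
 * path are deliberately omitted. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { int sign; int exp; uint32_t mant; } toyfp;

/* Multiply: XOR signs, add exponents, one 24x24 multiplier,
 * renormalize by at most one bit position. */
static toyfp toy_mul(toyfp a, toyfp b)
{
    toyfp r;
    r.sign = a.sign ^ b.sign;
    r.exp  = a.exp + b.exp;
    uint64_t p = (uint64_t)a.mant * b.mant;        /* 48-bit product */
    if (p & (1ULL << 47)) { p >>= 1; r.exp++; }
    r.mant = (uint32_t)(p >> 23);
    return r;
}

/* Add (same sign only): compare exponents, variable alignment shift
 * (a barrel shifter in hardware), wide add, renormalize. */
static toyfp toy_add(toyfp a, toyfp b)
{
    toyfp r;
    if (a.exp < b.exp) { toyfp t = a; a = b; b = t; }
    uint32_t aligned = b.mant >> (a.exp - b.exp);
    uint64_t s = (uint64_t)a.mant + aligned;
    r.sign = a.sign;
    r.exp  = a.exp;
    if (s & (1ULL << 24)) { s >>= 1; r.exp++; }
    r.mant = (uint32_t)s;
    return r;
}

static double toy_to_double(toyfp x)
{
    double v = ldexp((double)x.mant / (1 << 23), x.exp);
    return x.sign ? -v : v;
}

int main(void)
{
    toyfp a = { 0,  0, 0xC00000 };   /* 1.5  */
    toyfp b = { 0, -1, 0xC00000 };   /* 0.75 */
    printf("1.5 * 0.75 = %g\n", toy_to_double(toy_mul(a, b)));  /* 1.125 */
    printf("1.5 + 0.75 = %g\n", toy_to_double(toy_add(a, b)));  /* 2.25  */
    return 0;
}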
“…Moreover, multiple tradeoffs between latency and area can be exploited, which has already been studied extensively for floating-point formats [7,17,21]. There also exist parameterized IP cores which offer particularly efficient implementations for a given architecture.…”
Section: Floating-Point Numbers on FPGAs (mentioning)
confidence: 99%
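
The parameter space such cores expose is essentially the pair (exponent width, fraction width). The small sketch below is my own illustration; the names WE and WF follow common usage for parameterized FPGA float formats, and the chosen widths are example picks, not taken from the cited cores. It prints how storage cost, precision, and dynamic range move as the parameters change, which is the area/precision tradeoff the excerpt mentions.

/* Hedged sketch: WE = exponent width, WF = fraction width.  The widths
 * below are example choices, not parameters of any specific IP core. */
#include <stdio.h>

static void describe_format(int we, int wf)
{
    int total_bits = 1 + we + wf;            /* sign + exponent + fraction */
    int bias       = (1 << (we - 1)) - 1;
    printf("WE=%2d WF=%2d : %2d stored bits, exponent bias %4d, "
           "relative precision ~2^-%d, overflow near 2^%d\n",
           we, wf, total_bits, bias, wf, bias + 1);
}

int main(void)
{
    describe_format(5, 10);    /* half-precision-like: small adders/multipliers, low latency */
    describe_format(8, 17);    /* a custom point between half and single precision           */
    describe_format(8, 23);    /* IEEE-single-like                                           */
    describe_format(11, 52);   /* IEEE-double-like: largest area and deepest pipelines       */
    return 0;
}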