2010
DOI: 10.1109/tvlsi.2009.2025167

Variable-Latency Floating-Point Multipliers for Low-Power Applications

Cited by 15 publications (14 citation statements)
References 10 publications
“…[38] proposed a variable-latency floating-point multiplier architecture which was compliant with IEEE 754-1985 and was deemed suitable for low-power applications (see Figure XII). The multiplier architecture of [39] splits the mantissa multiplier into upper and lower components, and predicts the sticky bit, carry bit, and mantissa product from the upper part. In the event that the prediction is correct, the computation of the lower part is disabled and the rounding operation is simplified, hence allowing the system to consume less power [38]. The system of [39] computed the biased exponent by summing both biased exponents of the inputs using binary adders, followed by subtraction of the bias.…”
Section: Double Precision Floating-Point Multiplication
confidence: 99%
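A minimal C sketch of the upper/lower mantissa-multiplier split described in the statement above. The mantissa width, the deliberately conservative carry-prediction rule, and all names are illustrative assumptions; the cited design's actual predictor and sticky-bit handling are not reproduced here.

```c
#include <stdint.h>
#include <stdbool.h>

#define N     24u                       /* mantissa width incl. hidden bit (single precision assumed) */
#define H     (N / 2u)                  /* split point between upper and lower halves */
#define MASKH ((1u << H) - 1u)          /* mask for the low H bits */

/* Returns the upper N bits of the 2N-bit mantissa product and reports
 * whether the lower (al*bl) multiplier could stay disabled. */
uint32_t mantissa_mul_upper(uint32_t a, uint32_t b, bool *lower_skipped)
{
    uint32_t ah = a >> H, al = a & MASKH;
    uint32_t bh = b >> H, bl = b & MASKH;

    uint64_t hh = (uint64_t)ah * bh;    /* partial products that are always computed */
    uint64_t hl = (uint64_t)ah * bl;
    uint64_t lh = (uint64_t)al * bh;

    /* Product with the al*bl term left out. */
    uint64_t partial = (hh << N) + ((hl + lh) << H);

    /* al*bl is at most (2^H - 1)^2; if the low 2H bits of the partial
     * sum leave at least that much headroom, al*bl cannot carry into
     * the upper N bits, so the lower multiplier can be skipped.
     * (Sticky-bit prediction is omitted for brevity.) */
    uint64_t low_bits = partial & ((1ull << N) - 1u);
    uint64_t max_ll   = (uint64_t)MASKH * MASKH;
    *lower_skipped = (low_bits + max_ll) < (1ull << N);

    if (*lower_skipped)
        return (uint32_t)(partial >> N);

    /* Prediction unsafe: compute the exact product, at the cost of an
     * extra step -- hence the variable latency. */
    uint64_t exact = (uint64_t)a * (uint64_t)b;
    return (uint32_t)(exact >> N);
}
```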
“…The multiplier architecture of [39] splits the mantissa multiplier into upper and lower components, and predicts the sticky bit, carry bit, and mantissa product from the upper part. In the event that the prediction is correct, the computation of the lower part is disabled and the rounding operation is simplified, hence allowing the system to consume less power [38]. The system of [39] computed the biased exponent by summing both biased exponents of the inputs using binary adders, followed by subtraction of the bias (source: data from [39]). The proposed multiplier was implemented in Verilog and synthesized using Synopsys Design Compiler with a TSMC CMOS standard cell library.…”
Section: Double Precision Floating-Point Multiplication
confidence: 99%
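A brief sketch of the biased-exponent step mentioned above, assuming double-precision operands; the function name is illustrative and normalization/overflow handling is omitted.

```c
#include <stdint.h>

#define DP_BIAS 1023u   /* IEEE 754 double-precision exponent bias */

/* Stored exponents carry the bias, so adding two of them counts the
 * bias twice; subtracting it once yields the (still biased) exponent
 * of the product.  Normalization may add 1 later. */
static inline uint32_t product_biased_exponent(uint32_t ea_biased, uint32_t eb_biased)
{
    return ea_biased + eb_biased - DP_BIAS;
}
```

For example, multiplying 2^3 by 2^5 gives (3 + 1023) + (5 + 1023) - 1023 = 8 + 1023, the biased exponent of 2^8, as expected.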
“…In the IEEE-754 floating-point format [1], two precisions are available: single precision and double precision. Single precision uses 32 bits, while double precision uses 64 bits.…”
Section: IEEE-754 Floating-Point Format
confidence: 99%
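As a quick illustration of the 32-bit single-precision layout mentioned above (1 sign bit, 8 exponent bits biased by 127, 23 fraction bits; double precision analogously uses 1/11/52 with bias 1023), a small C snippet that unpacks the fields. The example value and variable names are arbitrary.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = -6.25f;                          /* arbitrary example value */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* reinterpret the 32-bit pattern safely */

    uint32_t sign     =  bits >> 31;           /* 1 bit  */
    uint32_t exponent = (bits >> 23) & 0xFFu;  /* 8 bits, biased by 127 */
    uint32_t fraction =  bits & 0x7FFFFFu;     /* 23 bits, hidden leading 1 not stored */

    /* Prints sign=1 exponent=129 fraction=0x480000, i.e. -1.5625 * 2^(129-127). */
    printf("sign=%u exponent=%u fraction=0x%06X\n",
           (unsigned)sign, (unsigned)exponent, (unsigned)fraction);
    return 0;
}
```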
“…A few previous works leverage output quality as a design knob for reducing power consumption in floating-point multipliers [5][6][8][9][10]. Most of these works use intuitive bit truncation and voltage scaling, which yield relatively large power/energy savings.…”
Section: Related Work
confidence: 99%
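A rough sketch of the bit-truncation idea referenced above: zeroing the k least-significant mantissa bits of each operand shrinks the effective multiplier array (and hence power) at the cost of a bounded error. The width, the parameter k, and the function name are assumptions for illustration, not taken from the cited works.

```c
#include <stdint.h>

/* Truncate the k low bits of each 24-bit mantissa before multiplying.
 * In hardware the truncated partial-product columns are simply not
 * built, which is where the power saving comes from; here the effect
 * is only emulated arithmetically. */
static inline uint64_t truncated_mantissa_mul(uint32_t ma, uint32_t mb, unsigned k)
{
    uint32_t ta = (ma >> k) << k;
    uint32_t tb = (mb >> k) << k;
    return (uint64_t)ta * tb;
}
```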