2021
DOI: 10.1109/les.2020.3045165

DeBAM: Decoder-Based Approximate Multiplier for Low Power Applications

Cited by 12 publications (2 citation statements)
References 12 publications
“…Approximate computing consists in relaxing the constraint of exact computation in order to trade the quality of the result for speed, area, and power consumption [1,2]. As fundamental arithmetic blocks in signal processing, approximate multipliers have been widely explored in the last few years [3][4][5][6][7][8][9][10][11][12][13][14][15]. Several approximate techniques have been proposed, such as column truncation [5,6], approximate compressors [7,8], the use of error-tolerant adders [9], input truncation [10], vertical and horizontal cut [12], and input encoding [13,14].…”
Section: Introduction
confidence: 99%
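To make the column-truncation idea mentioned in this statement concrete, the following is a minimal sketch, not the DeBAM design or any of the cited circuits: the function name, the bit-width n, and the truncation depth k are illustrative assumptions. The k least-significant columns of the partial-product matrix are simply never summed, which is what removes adder cells (and hence area and power) in a hardware implementation.

```python
def truncated_multiply(a: int, b: int, n: int = 8, k: int = 4) -> int:
    """Approximate n-bit unsigned multiply with the k lowest partial-product
    columns dropped (column truncation). Illustrative sketch only."""
    mask = ~((1 << k) - 1)            # zero out columns below bit position k
    result = 0
    for i in range(n):                # one partial product per bit of b
        if (b >> i) & 1:
            result += (a << i) & mask
    return result

if __name__ == "__main__":
    a, b = 173, 89
    exact, approx = a * b, truncated_multiply(a, b)
    print(exact, approx, exact - approx)   # error stems only from the dropped columns
```

Because every dropped column only ever subtracts from the exact product, the error of this toy version is always non-negative, which is what makes a fixed compensation constant (discussed in the next statement) effective.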
“…As fundamental arithmetic blocks in signal processing, approximate multipliers have been widely explored in the last few years [3][4][5][6][7][8][9][10][11][12][13][14][15]. Several approximate techniques have been proposed, such as column truncation [5,6], approximate compressors [7,8], the use of error-tolerant adders [9], input truncation [10], vertical and horizontal cut [12], and input encoding [13,14]. Generally, all these techniques exploit a simple error-correction step, such as adding an error compensation constant to the approximate result in order to increase the accuracy [15].…”
Section: Introduction
confidence: 99%
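As a companion to the error-compensation remark in this statement, here is a small self-contained sketch, again purely illustrative Python: the choice of the mean truncation error as the constant and all function names are assumptions, not the specific method of [15] or of DeBAM. A constant equal to the average error is computed offline and added back, recentring the error distribution around zero.

```python
def truncated_multiply(a: int, b: int, n: int = 8, k: int = 4) -> int:
    """Same column-truncated multiplier as in the previous sketch."""
    mask = ~((1 << k) - 1)
    return sum((a << i) & mask for i in range(n) if (b >> i) & 1)

def mean_truncation_error(n: int = 8, k: int = 4) -> int:
    """Average error over all operand pairs; computed offline, it would be a
    hard-wired constant in an actual circuit."""
    errors = [a * b - truncated_multiply(a, b, n, k)
              for a in range(1 << n) for b in range(1 << n)]
    return round(sum(errors) / len(errors))

COMPENSATION = mean_truncation_error()   # the error-compensation constant

def compensated_multiply(a: int, b: int) -> int:
    """Approximate product with the fixed compensation constant added back."""
    return truncated_multiply(a, b) + COMPENSATION
```

Since the correction is a single constant, it costs essentially nothing in hardware while reducing the mean error, which is why the quoted introductions describe it as the typical low-cost accuracy improvement.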