2020
DOI: 10.3390/electronics9040557
Survey on Approximate Computing and Its Intrinsic Fault Tolerance

Abstract: This work is a survey on approximate computing and its impact on fault tolerance, especially for safety-critical applications. It presents a multitude of approximation methodologies, which are typically applied at software, architecture, and circuit level. Those methodologies are discussed and compared on all their possible levels of implementations (some techniques are applied at more than one level). Approximation is also presented as a means to provide fault tolerance and high reliability: Traditional error…

Cited by 27 publications (15 citation statements)
References 33 publications (37 reference statements)

Citation statements (ordered by relevance)
“…The memory footprint can be directly reduced by changing the data representation of the parameters (e.g., weights, activations) of an ANN implementation. Reducing memory footprint can lead to a reduction in the energy consumption of the implementation since there is a decrease in the amount of data transferred from/to the memory [9]. Methods of reducing the floating-point precision, or even the bit-width used for data representation, can significantly reduce the energy consumption with a cost of degradation in the outcomes of an application [10].…”
Section: B. Data Precision Reduction
Citation type: mentioning (confidence: 99%)
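The precision-reduction technique described in the excerpt above can be illustrated with a short sketch. The Python snippet below is not taken from the surveyed paper; the function names, the per-tensor symmetric scaling, and the 8-bit target are assumptions chosen for the example. It quantizes a float32 weight tensor to int8, shrinking the memory footprint by 4x at the cost of a bounded quantization error.

```python
# Illustrative sketch of data-precision reduction for ANN weights:
# uniform symmetric 8-bit quantization (names/parameters are assumptions).
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a single per-tensor scale factor."""
    scale = np.max(np.abs(weights)) / 127.0            # symmetric range
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for inference."""
    return q.astype(np.float32) * scale

w = np.random.randn(1024).astype(np.float32)           # stand-in weight tensor
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print("memory: %d B -> %d B" % (w.nbytes, q.nbytes))   # 4x smaller footprint
print("max abs error:", np.max(np.abs(w - w_hat)))     # bounded rounding error
```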
“…Approximate computing techniques can be applied at different levels, where the approach can target software, hardware, or architectural level [11], [12]. Among all the possible techniques, in this work, we consider the data precision reduction, a technique that can be implemented both at the software and architectural levels.…”
Section: Context
Citation type: mentioning (confidence: 99%)
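As a rough illustration of precision reduction applied purely at the software level, as contrasted with the architectural level in the excerpt above, the sketch below runs the same dot product in float64 and float16 and reports the output degradation and the reduction in bytes per operand. All sizes and names are assumptions for the example, not anything from the cited works.

```python
# Minimal sketch: software-level precision reduction of a dot product.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(10_000)
w = rng.random(10_000)

exact = np.dot(x, w)                                    # float64 reference
x16, w16 = x.astype(np.float16), w.astype(np.float16)   # half-precision operands
approx = np.sum(x16 * w16, dtype=np.float16)            # accumulate in float16 too

print("relative error:", abs(exact - float(approx)) / abs(exact))
print("bytes/operand :", x.itemsize, "->", x16.itemsize)  # 8 B -> 2 B
```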
“…Adequately trained models are more error resilient with a lessened need for the accuracy of the results and computation, making them perfect candidates for approximate computing. Approximate computing is a new paradigm where an acceptable error is induced in the computing to achieve more energy-efficient processing [28,29,30,31,32,33]. It has been introduced at different system levels [34,35,36,37,38,39,40,41,42,43,44,45], and a large number of approximate arithmetic circuits have been designed to save chip area and energy [35,38,46,47,48,49,50,51].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
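One concrete example of the approximate arithmetic circuits mentioned in the excerpt is the lower-part OR adder (LOA), in which the carry chain of the low-order bits is dropped. The behavioral Python model below is a generic sketch of that circuit class, not a design from the surveyed paper; the 16-bit width and the 4-bit approximate lower part are assumptions.

```python
# Behavioral model of a lower-part OR adder (LOA), a classic approximate adder:
# the k low-order bits are combined with a bitwise OR (no carry chain),
# only the upper bits use an exact adder. Widths are illustrative assumptions.
def loa_add(a: int, b: int, width: int = 16, k: int = 4) -> int:
    """Approximate addition of two unsigned `width`-bit integers."""
    lo_mask = (1 << k) - 1
    low = (a & lo_mask) | (b & lo_mask)        # approximate low part: OR, no carries
    high = ((a >> k) + (b >> k)) << k          # exact high part, carry-in dropped
    return (high | low) & ((1 << width) - 1)   # truncate to the adder width

# The error is bounded: the OR can only underestimate the sum by less than 2**k.
for a, b in [(100, 27), (255, 1), (1023, 511)]:
    print(a, "+", b, "=", a + b, "~=", loa_add(a, b))
```

Dropping the low-order carry chain is what saves area and energy in such designs: the critical path and the number of gates shrink, while the worst-case error stays below 2**k and can be tuned by choosing k.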