From Variability Tolerance to Approximate Computing in Parallel Integrated Architectures and Accelerators, 2017
DOI: 10.1007/978-3-319-53768-9_10
An Approximation Workflow for Exploiting Data-Level Parallelism in FPGA Acceleration

Cited by 4 publications (2 citation statements)
References 4 publications
“…In [16], a Bayesian network learns the cost and error models of an optimization problem offline and determines the quality-control knob by solving that optimization problem. Rahimi et al. [17] selectively reduce and dynamically tune precision, subject to a statistical quality knob; consequently, an approximator within a fixed area budget can accommodate more parallel approximate kernels. These works strive to maximize the invocation of a single approximation scheme.…”
Section: A Model-Based Quality Control for Approximate Computing
Confidence: 99%
“…These works then advocate a rollback or online-tuning mechanism to guarantee computation reliability. On the other hand, a proactive strategy is to predict whether a load is safe to approximate using statistical and learning models [12], [14]–[17].…”
Section: Introduction
Confidence: 99%