2016
DOI: 10.1016/j.suscom.2015.10.001

A fast, hybrid, power-efficient high-precision solver for large linear systems based on low-precision hardware

Cited by 9 publications (10 citation statements)
References 7 publications
“…However MEMs still have many properties that can be exploited for better and faster emulation, which should not be ignored only due to the early state of the method. These known advantages include the suitability for parallelization, and speedups and energy saving via approximated computing [Angerer et al, 2015]. Table 3: Mean emulation errors corresponding to the Adliswil catchment dataset. MEM refers to an emulator in Machac et al [2016a], while MEM-fit to an emulator with the same proxy structure but parameters fitted to the data.…”
Section: Discussion (mentioning)
confidence: 99%
“…The iteration in iv) avoids the ill-conditioned covariance matrices [Hansen, 1998] involved in GP when sampling rates are high [Steinke and Schölkopf, 2008, Reichert et al, 2011] and it is faster than direct matrix inversion in a serial implementation. The GP approach i) is better suited for parallelization, speedups and energy saving via approximated computing [Angerer et al, 2015].…”
Section: Introduction (mentioning)
confidence: 99%
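
The statement above contrasts an iterative treatment of the Gaussian-process equations with direct inversion of a near-singular covariance matrix. As a generic illustration of that point only, and not the emulation scheme of the cited works, the sketch below solves a regularized GP system with a conjugate-gradient iteration instead of forming an explicit inverse; the kernel, data, and parameter values are invented for the example.

```python
import numpy as np
from scipy.sparse.linalg import cg

def rbf_kernel(x1, x2, lengthscale=0.5, variance=1.0):
    # Squared-exponential covariance; with densely sampled inputs the
    # resulting kernel matrix is numerically close to singular.
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 2000)                        # high sampling rate
y = np.sin(2.0 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

noise_var = 0.05 ** 2                                  # nugget keeps the system well posed
A = rbf_kernel(x, x) + noise_var * np.eye(x.size)

# Iterative solve of A @ alpha = y: no explicit inverse is formed, and the
# dominant cost (matrix-vector products) parallelizes naturally.
alpha, info = cg(A, y, maxiter=2000)
if info != 0:
    raise RuntimeError("CG did not converge")

x_new = np.linspace(0.0, 1.0, 5)
posterior_mean = rbf_kernel(x_new, x) @ alpha          # GP posterior mean at new inputs
print(posterior_mean)
```
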
“…Reducing the internal precision of numerical computation for a same final resolution is becoming an attractive solution to the processing of data whose quantity is increasing at exploding rates [8] and implying heavy expenses in power consumption. Recently proposed linear solvers have opted for such a reduction in the execution of standard algebraic methods.…”
Section: Context and Goal (mentioning)
confidence: 99%
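
The passage above refers to solvers that lower their internal working precision while still delivering a high-precision result. A common pattern behind such solvers is mixed-precision iterative refinement: factorize and solve in low precision, then compute residuals and corrections in high precision. The sketch below is a minimal NumPy/SciPy illustration of that pattern, not the solver proposed in the cited paper (which targets low-precision hardware); the test matrix, tolerance, and function name are chosen for the example.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refine_solve(A, b, tol=1e-12, max_iter=50):
    """Mixed-precision iterative refinement (illustrative sketch).

    The expensive O(n^3) factorization is done once in float32; the cheap
    residual/correction loop runs in float64 to recover high accuracy.
    """
    A64 = np.asarray(A, dtype=np.float64)
    b64 = np.asarray(b, dtype=np.float64)

    lu, piv = lu_factor(A64.astype(np.float32))           # low-precision factorization
    x = lu_solve((lu, piv), b64.astype(np.float32)).astype(np.float64)

    for _ in range(max_iter):
        r = b64 - A64 @ x                                  # residual in high precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b64):
            break
        d = lu_solve((lu, piv), r.astype(np.float32))      # correction in low precision
        x += d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)            # well-conditioned test matrix
b = rng.standard_normal(n)
x = refine_solve(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))       # near double-precision residual
```
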
“…Although many scientific applications use double-precision floating-point by default, this accuracy is not always required. Instead, low- and mixed-precision arithmetic has been very effective for the computation of inverse matrix roots [182], or solving systems of linear equations [392][393][394][395]. Driven by the growing popularity of artificial neural networks that can be evaluated and trained with reduced precision, hardware accelerators have gained improved low-precision computing support.…”
Section: B Approximate Computing (mentioning)
confidence: 99%
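
As a concrete, deliberately simplified example of the kind of reduced-precision kernel mentioned above, the sketch below runs a coupled Newton-Schulz iteration for the inverse square root of a symmetric positive definite matrix entirely in float32. It only illustrates how such iterations tolerate low precision; the works surveyed in the quoted passage may rely on different algorithms, precisions, and hardware.

```python
import numpy as np

def inv_sqrt_low_precision(A, iters=25, dtype=np.float32):
    """Coupled Newton-Schulz iteration for A^(-1/2), run in reduced precision.

    Requires A symmetric positive definite; A is scaled so that the spectrum
    of A/c lies in (0, 1] and the iteration converges.  Illustrative sketch only.
    """
    A = np.asarray(A, dtype=np.float64)
    n = A.shape[0]
    c = np.linalg.norm(A, "fro")                   # scaling constant
    Y = (A / c).astype(dtype)                      # Y_k -> (A/c)^(1/2)
    Z = np.eye(n, dtype=dtype)                     # Z_k -> (A/c)^(-1/2)
    for _ in range(iters):
        T = 0.5 * (3.0 * np.eye(n, dtype=dtype) - Z @ Y)
        Y = Y @ T
        Z = T @ Z
    return Z.astype(np.float64) / np.sqrt(c)       # undo the scaling

rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200.0 * np.eye(200)                  # SPD test matrix
X = inv_sqrt_low_precision(A)
print(np.linalg.norm(X @ A @ X - np.eye(200)))     # small, at roughly float32 level
```
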