A Survey of Numerical Methods Utilizing Mixed Precision Arithmetic
Preprint, 2020. DOI: 10.48550/arxiv.2007.06674

Cited by 12 publications (20 citation statements); references 0 publications.
“…However, this may well change: first, when high accuracy is needed, n may need to be large enough to enter the regime E_N ≫ E_D. Second, and more nontrivially, the breakeven point where E_D ≈ E_N = O(u·κ(A)) depends on the working precision u, and a compelling line of recent research is to use low-precision arithmetic for efficiency [1] in scientific computing and data science applications. In such situations, E_N would start dominating for a modest discretization size, making FOP an important technique to retain good solutions.…”
Section: Mathematical Basics, 2.1 Numerical Solution of Operator Equations
confidence: 99%
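The breakeven behavior this excerpt describes, with the numerical error E_N growing like the working precision u times the condition number κ(A), can be illustrated in a few lines. Everything below is an assumption for the demonstration, not from the surveyed work: the `f32` helper emulates single-precision rounding via `struct`, and the 2×2 system and its condition number are made up.

```python
import struct

def f32(x):
    """Round a Python float (binary64) to the nearest IEEE binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def solve2x2(a11, a12, a21, a22, b1, b2, rnd=lambda v: v):
    """Solve a 2x2 system by Cramer's rule, applying `rnd` after every
    operation to emulate a chosen working precision."""
    a11, a12, a21, a22 = rnd(a11), rnd(a12), rnd(a21), rnd(a22)
    b1, b2 = rnd(b1), rnd(b2)
    det = rnd(rnd(a11 * a22) - rnd(a12 * a21))
    x1 = rnd(rnd(rnd(b1 * a22) - rnd(b2 * a12)) / det)
    x2 = rnd(rnd(rnd(a11 * b2) - rnd(a21 * b1)) / det)
    return x1, x2

# Mildly ill-conditioned system with exact solution (1, 1); kappa(A) ~ 4e4.
A = (1.0, 1.0, 1.0, 1.0001)
b = (2.0, 2.0001)

x_dbl = solve2x2(*A, *b)            # working precision u ~ 1.1e-16
x_sgl = solve2x2(*A, *b, rnd=f32)   # working precision u ~ 6.0e-8

err_dbl = max(abs(x - 1.0) for x in x_dbl)
err_sgl = max(abs(x - 1.0) for x in x_sgl)
print(err_dbl, err_sgl)  # each error roughly tracks u * kappa(A)
```

With κ(A) ≈ 4e4 here, the double-precision error stays many orders of magnitude below the emulated single-precision one, roughly in proportion to the ratio of the two unit roundoffs, which is the u·κ(A) scaling in the excerpt.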
“…While modern GPUs provide higher memory bandwidth than FPGA accelerator cards (from the 549 GB/s of the Nvidia P100 to the 1555 GB/s of the Nvidia A100), existing SpMV implementations are often unable to fully utilize the available bandwidth [11]. Moreover, optimizations for reduced-precision data types are currently limited to half-precision floating point, with no support for reduced-precision fixed-point arithmetic [12], [13], although recent work has explored heuristics for mixing single- and double-precision floating-point arithmetic [14].…”
Section: Related Work
confidence: 99%
“…The computational cost of the function evaluations F(y^(j)) can be considerable, especially in cases where they necessitate an implicit solve. A mixed-precision approach, which has already been applied to other numerical methods [1,5], seems promising here. Lowering the precision of these computations, either by storing F(y^(j)) as a single-precision variable rather than a double-precision one, or by raising the tolerance of the implicit solver, can speed up the computation significantly.…”
Section: Introduction: Consider the Ordinary Differential Equation (ODE)
confidence: 99%
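The trade-off in this excerpt can be sketched with a toy integrator: a second-order explicit scheme (Heun's method, chosen here for illustration and not necessarily the citing paper's method) run in double precision, with the evaluations F(y) optionally rounded to single precision through an emulated `f32` cast. On the test problem y' = -y the time-discretization error dominates, so storing F(y) in single precision barely changes the answer.

```python
import math
import struct

def f32(x):
    """Round to the nearest IEEE binary32 value (emulated single cast)."""
    return struct.unpack('f', struct.pack('f', x))[0]

def heun(f, y0, t0, t1, n, store=lambda v: v):
    """Heun's method; `store` models the precision in which the
    evaluations F(y) are kept (identity = double, f32 = single)."""
    h = (t1 - t0) / n
    y = y0
    for _ in range(n):
        k1 = store(f(y))              # F(y), possibly kept in low precision
        k2 = store(f(y + h * k1))
        y = y + 0.5 * h * (k1 + k2)   # the step update stays in double
    return y

f = lambda y: -y                      # test problem y' = -y, y(0) = 1
exact = math.exp(-1.0)

y_dbl = heun(f, 1.0, 0.0, 1.0, 100)             # all double
y_mix = heun(f, 1.0, 0.0, 1.0, 100, store=f32)  # F(y) stored in single
print(abs(y_dbl - exact), abs(y_mix - exact))
```

Both variants land within the O(h²) discretization error of the exact value, and they agree with each other far more closely than that, which is the regime where lowering the precision of the F evaluations is essentially free.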