Proceedings of the 19th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming 2014
DOI: 10.1145/2555243.2555265
Efficient search for inputs causing high floating-point errors

Abstract: Tools for floating-point error estimation are fundamental to program understanding and optimization. In this paper, we focus on tools for determining the input settings to a floating-point routine that maximize its result error. Such tools can help support activities such as precision allocation, performance optimization, and auto-tuning. We benchmark current abstraction-based precision analysis methods, and show that they often do not work at scale, or generate highly pessimistic error estimates, often cause…

Cited by 63 publications (36 citation statements)
References 35 publications
“…8 A program's error at a point is then the difference between the exactly computed floating-point prefix and the answer computed using floating-point semantics. Programs are compared by their average bits of error over all valid inputs.…”
Section: Sampling Points
confidence: 99%
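The bits-of-error metric quoted above can be made concrete. A minimal Python sketch, assuming float64 arithmetic and using the standard `decimal` module as the high-precision oracle; the ULP-counting helper and the cancellation-prone example expression are illustrative, not taken from the paper:

```python
import math
import struct
from decimal import Decimal, getcontext

def to_ordinal(x: float) -> int:
    """Map a float64 to an integer so that adjacent floats differ by 1."""
    n = struct.unpack('<q', struct.pack('<d', x))[0]
    return n if n >= 0 else -(n & 0x7FFFFFFFFFFFFFFF)

def bits_of_error(computed: float, exact: float) -> float:
    """log2 of the ULP distance: roughly how many low-order bits are wrong."""
    return math.log2(1 + abs(to_ordinal(computed) - to_ordinal(exact)))

# Illustrative kernel with catastrophic cancellation for large x.
getcontext().prec = 50                            # high-precision reference
x = 1e15
computed = math.sqrt(x + 1) - math.sqrt(x)        # float64 semantics
exact = float((Decimal(x) + 1).sqrt() - Decimal(x).sqrt())
print(bits_of_error(computed, exact))             # tens of bits lost here
```

Averaging this quantity over sampled valid inputs gives the per-program comparison described in the quoted statement.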
“…Tools like Rosa could be used to prove that Herbie's output meets an application-specific accuracy specification. Several analysis tools have also been developed: Fluctuat uses abstract interpretation to statically track the error of a floating-point program [18], FPDebug uses a dynamic execution with shadow variables in higher precision [5], and CGRS [8] uses evolutionary search to find inputs that cause high floating-point error.…”
Section: Verification Of Numerical Code
confidence: 99%
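The shadow-variable idea behind FPDebug can be illustrated in miniature. A hedged toy sketch, assuming Python and `decimal` shadows; the `Shadow` class is my own construction for exposition, not the real tool's binary instrumentation:

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 60  # precision of the shadow computation

class Shadow:
    """A float64 value paired with a high-precision shadow value."""
    def __init__(self, value, shadow=None):
        self.f = float(value)
        self.d = Decimal(value) if shadow is None else shadow

    def _coerce(self, other):
        return other if isinstance(other, Shadow) else Shadow(other)

    def __add__(self, other):
        o = self._coerce(other)
        return Shadow(self.f + o.f, self.d + o.d)

    def __sub__(self, other):
        o = self._coerce(other)
        return Shadow(self.f - o.f, self.d - o.d)

    def sqrt(self):
        return Shadow(math.sqrt(self.f), self.d.sqrt())

    def rel_error(self) -> float:
        """Relative error of the float64 value against its shadow."""
        return float(abs(Decimal(self.f) - self.d) / abs(self.d))

# Cancellation example: the float64 result loses most of its accuracy,
# and comparing against the shadow makes the loss visible at runtime.
x = Shadow(1e15)
y = (x + 1).sqrt() - x.sqrt()
print(y.f, y.rel_error())
```

Every operation updates both the float64 value and the shadow, so the error of any intermediate can be queried at any point in the execution.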
“…Our group has started working on floating-point round-off error analysis through heuristic search [8] as well as through rigorous analysis using a new approach based on Symbolic Taylor Forms [43]. Another recent focus has been on floating-point divergence detection, where we are attempting to determine whether (and when) a piece of code can branch differently following precision reallocation.…”
Section: Looming Issues
confidence: 99%
“…A search procedure similar to simulated annealing for producing inputs that maximize the relative error between two floating-point kernels appears in [5]. A brute-force approach to replacing double-precision instructions with their single-precision equivalents appears in [21], and a randomized technique for producing floating-point narrowing conversions at the source code level is discussed in [26].…”
Section: Related Work
confidence: 99%
“…With respect to correctness, STOKE uses test case data to generate customized optimizations which are specialized to user-specified input ranges. These optimizations may be incorrect in general, but perfectly acceptable given the constraints on inputs described by the user or generated by a technique such as [5].…”
Section: Related Work
confidence: 99%