2009
DOI: 10.1007/bf03186531

Algorithms for accurate, validated and fast polynomial evaluation

Abstract: We survey a class of algorithms to evaluate polynomials with floating point coefficients and for computation performed with IEEE-754 floating point arithmetic. The principle is to apply, once or recursively, an error-free transformation of the polynomial evaluation with the Horner algorithm and to accurately sum the final decomposition. These compensated algorithms are as accurate as the Horner algorithm performed in K times the working precision, for K an arbitrary positive integer. We prove this accuracy pro…
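As a concrete illustration, the once-compensated case (K = 2) can be sketched as below, assuming IEEE-754 doubles and a correctly rounded fma; two_sum, two_prod_fma and comp_horner are illustrative names, and this is a sketch of the general compensated-Horner idea rather than the paper's exact algorithm.

    #include <math.h>   /* for fma() */

    /* Error-free transformation of a sum (Knuth): a + b = s + e exactly. */
    void two_sum(double a, double b, double *s, double *e) {
        *s = a + b;
        double z = *s - a;
        *e = (a - (*s - z)) + (b - z);
    }

    /* Error-free transformation of a product: a * b = p + e exactly,
       assuming a correctly rounded fused multiply-add. */
    void two_prod_fma(double a, double b, double *p, double *e) {
        *p = a * b;
        *e = fma(a, b, -*p);
    }

    /* Once-compensated Horner evaluation of p(x) = a[0] + a[1]*x + ... + a[n]*x^n.
       The rounding error of each Horner step is recovered exactly by the
       error-free transformations, evaluated with a plain Horner recurrence,
       and added back as a single correction. */
    double comp_horner(const double *a, int n, double x) {
        double s = a[n];   /* value computed as in the classic Horner loop */
        double c = 0.0;    /* running Horner evaluation of the error terms */
        for (int i = n - 1; i >= 0; i--) {
            double p, pi, sigma;
            two_prod_fma(s, x, &p, &pi);    /* s*x      = p + pi     exactly */
            two_sum(p, a[i], &s, &sigma);   /* p + a[i] = s + sigma  exactly */
            c = c * x + (pi + sigma);       /* accumulate the local errors */
        }
        return s + c;  /* roughly as accurate as Horner at twice the precision */
    }

Applying the same error-free transformations recursively to the error terms, instead of folding them into the single correction c, is what yields the K-fold variants mentioned in the abstract.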

Cited by 41 publications (38 citation statements)
References 10 publications

“…, x^n} because the recursion that defines one monomial from another is trivial and the associated monomial basis is just the absolute value of the monomials. Therefore, with cond(p, x) we may obtain bounds similar to the ones obtained for the compensated Horner algorithm [12] and the compensated dot product [19]. In the case of Chebyshev polynomials, the difference between the two condition numbers is that with cond(p, x) we consider not only the summation process but also the recurrence that generates the polynomial basis T, whereas with S_T(p(x)) we only consider the final evaluation process.…”
Section: Analogous Chebyshev Polynomial (supporting)
confidence: 57%
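For context, the monomial-basis condition number cond(p, x) referred to above is usually taken to be the classical ratio below (a standard definition, not quoted from the citing paper):

    \operatorname{cond}(p, x) \;=\; \frac{\sum_{i=0}^{n} |a_i|\,|x|^{i}}{\bigl|\sum_{i=0}^{n} a_i x^{i}\bigr|}
                              \;=\; \frac{\tilde{p}(|x|)}{|p(x)|},
    \qquad p(x) = \sum_{i=0}^{n} a_i x^{i}.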
“…In order to increase the accuracy, Graillat, Langlois and Louvet [12][13][14] proposed a compensated Horner algorithm to evaluate polynomials in the monomial basis. Graillat also presented accurate floating-point product and exponentiation algorithms in [15].…”
Section: Introduction (mentioning)
confidence: 99%
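A compensated product in the spirit of the algorithms cited above can be sketched as follows; comp_prod is an illustrative name, the code reuses two_prod_fma from the compensated Horner sketch, and it should be read as an example of the error-compensation idea rather than as the exact algorithm of [15].

    /* Compensated product of a[0..n-1] (sketch). The exact error of each
       floating-point multiplication is recovered with two_prod_fma (defined
       in the compensated Horner sketch above), propagated through the
       remaining factors, and added back at the end. */
    double comp_prod(const double *a, int n) {
        double p = a[0];   /* running floating-point product */
        double e = 0.0;    /* compensation term */
        for (int i = 1; i < n; i++) {
            double pi;
            two_prod_fma(p, a[i], &p, &pi);  /* old p * a[i] = new p + pi exactly */
            e = e * a[i] + pi;               /* propagate old errors, add the new one */
        }
        return p + e;      /* compensated result */
    }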
“…This illustrates that these improvement techniques are not well enough known outside the floating-point arithmetic community, or not sufficiently automated to be applied more systematically. For example, the programmer has to modify the source code by overloading floating-point types with double-double arithmetic [8] or, less easily, by compensating the floating-point operations with error-free transformations (EFT) [7]. The latter transformations are difficult to implement without a preliminary manual step to define the modified algorithm.…”
Section: Introduction (mentioning)
confidence: 99%
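As an illustration of the first alternative, a minimal double-double accumulation step might look like the sketch below, built from the usual TwoSum/FastTwoSum blocks (two_sum is defined in the compensated Horner sketch above); dd and dd_add_d are illustrative names, not the API of the library cited as [8].

    /* A double-double number stores a value as the unevaluated sum hi + lo,
       giving roughly twice the precision of one double. */
    typedef struct { double hi, lo; } dd;

    /* Error-free sum when |a| >= |b| is known (Dekker's FastTwoSum). */
    void fast_two_sum(double a, double b, double *s, double *e) {
        *s = a + b;
        *e = b - (*s - a);
    }

    /* Add an ordinary double b to a double-double x (illustrative sketch). */
    dd dd_add_d(dd x, double b) {
        double s, e;
        two_sum(x.hi, b, &s, &e);          /* x.hi + b = s + e exactly */
        e += x.lo;                         /* fold in the low-order word */
        dd r;
        fast_two_sum(s, e, &r.hi, &r.lo);  /* renormalize the pair */
        return r;
    }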
“…Several techniques have been introduced to improve the accuracy of numerical algorithms, such as expansions [4], [23], compensations [7], [10], differential methods [14], or extended-precision arithmetic using multiple-precision libraries [5], [8]. Nevertheless, bugs caused by numerical failures are numerous and well known [2], [18].…”
Section: Introduction (mentioning)
confidence: 99%
“…We present as Algorithm 11 a compensated algorithm for the Horner scheme. A more detailed description of the compensated Horner scheme can be found in [9,10].…”
Section: A Compensated Horner Scheme With Rounding To Nearest (mentioning)
confidence: 99%
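The accuracy result typically proved for such a once-compensated Horner scheme, stated here from the standard compensated-Horner literature rather than quoted from [9,10], is

    \frac{\bigl|\operatorname{CompHorner}(p, x) - p(x)\bigr|}{|p(x)|}
        \;\le\; u \;+\; \gamma_{2n}^{2}\,\operatorname{cond}(p, x),
    \qquad \gamma_{2n} = \frac{2nu}{1 - 2nu},

where u denotes the unit roundoff of the working precision and cond(p, x) is the condition number recalled earlier.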