1995
DOI: 10.1006/jcom.1995.1001
Explicit Cost Bounds of Algorithms for Multivariate Tensor Product Problems

Abstract: We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an ε-approximation to the solution. The cost bounds are of the form

$$(c(d) + 2)\,\beta_1 \left(\beta_2 + \beta_3\,\frac{\ln(1/\varepsilon)}{d-1}\right)^{\beta_4 (d-1)} \left(\frac{1}{\varepsilon}\right)^{\beta_5}.$$

Here c(d) is the cost of one function evaluation (or one linear functional evaluation), and the βᵢ's do not depend on d; they are determined by the properties of the problem…

Cited by 295 publications (241 citation statements)
References 28 publications
“…for f ∈ W^r_d [38]. We see that the convergence rate depends only weakly on the dimension but strongly on the smoothness r. However, the conventional sparse grid method treats all dimensions equally (because this is also true for the unit simplex) and thus the dependence of the quadrature error on the dimension in its logarithmic term will cause problems for high-dimensional integrands.…”
Section: Sparse Grids
confidence: 94%
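The rate alluded to in the quote above is the classical sparse-grid quadrature bound. A schematic statement (constants and norms written here generically, not taken from the cited paper) for a function f with bounded mixed derivatives of order r, approximated from N function evaluations, is:

```latex
% Classical Smolyak quadrature error for f in W^r_d, using N evaluations;
% C_{r,d} is a constant independent of N:
\left| I(f) - Q_N(f) \right|
  \;\le\; C_{r,d}\, N^{-r} \left( \log N \right)^{(d-1)(r+1)} \, \| f \|_{W^r_d}
```

The algebraic factor N^{-r} carries the smoothness dependence, while the logarithmic factor carries the d-dependence the quote warns about for high-dimensional integrands.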
“…2.30 can be written as [82]

$$\mathcal{A}(w, n) \;=\; \sum_{w+1 \,\le\, |\mathbf{i}| \,\le\, w+n} (-1)^{\,w+n-|\mathbf{i}|} \binom{n-1}{\,w+n-|\mathbf{i}|\,} \left( \mathcal{U}^{i_1} \otimes \cdots \otimes \mathcal{U}^{i_n} \right).$$

Nonlinear growth rules are used for fully nested rules (e.g., Clenshaw-Curtis is closed fully nested and Gauss-Patterson is open fully nested), and linear growth rules are best for standard Gauss rules that take advantage of, at most, “weak” nesting (e.g., reuse of the center point).…”
Section: Smolyak Sparse Grids
confidence: 99%
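The combination formula quoted above lends itself to a direct enumeration. A minimal Python sketch (the function name `smolyak_coefficients` and its interface are illustrative, not from the cited paper) that computes the nonzero combination coefficients of A(w, n):

```python
from itertools import product
from math import comb

def smolyak_coefficients(w, n):
    """Combination-technique coefficients for Smolyak's algorithm A(w, n):
    c_i = (-1)^(w+n-|i|) * C(n-1, w+n-|i|) for multi-indices i (each i_j >= 1)
    with w+1 <= |i| <= w+n; all other coefficients vanish."""
    coeffs = {}
    # Each component is at most w+1: if some i_j > w+1, then
    # |i| > (w+1) + (n-1) = w+n and the coefficient is zero.
    for i in product(range(1, w + 2), repeat=n):
        s = sum(i)
        if w + 1 <= s <= w + n:
            coeffs[i] = (-1) ** (w + n - s) * comb(n - 1, w + n - s)
    return coeffs

# The coefficients sum to 1, so A(w, n) reproduces anything that each
# tensor-product rule U^{i_1} x ... x U^{i_n} reproduces (e.g. constants).
coeffs = smolyak_coefficients(w=2, n=3)
assert sum(coeffs.values()) == 1
```

Only the thin "simplex" shell of levels w+1 ≤ |i| ≤ w+n contributes, which is what makes the sparse construction far cheaper than the full tensor product.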
“…, d as its components. It is known that the error of Smolyak's algorithm is of order n^{-q_d} times a logarithmic factor log n raised to a power that is linear in d − 1; see, e.g., [27, Remark 2] and the literature cited therein. (Further details on various aspects of Smolyak's algorithm can be found in [6,7,11,12,21].)…”
Section: Nonuniversal Algorithms
confidence: 99%