2020
DOI: 10.1137/19m1279459

Near-Optimal Sampling Strategies for Multivariate Function Approximation on General Domains

Abstract: Many problems arising in computational science and engineering can be described in terms of approximating a smooth function of d variables, defined over an unknown domain of interest Ω ⊂ R^d, from sample data. Here both the underlying dimensionality of the problem (in the case d ≫ 1) and the lack of domain knowledge (with Ω potentially irregular and/or disconnected) are confounding factors for sampling-based methods. Naïve approaches to such problems often lead to wasted samples and inefficient approximat…
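The setting described in the abstract can be illustrated with a minimal one-dimensional sketch. Everything below is a hypothetical stand-in, not the paper's method: `in_omega` is an arbitrary disconnected domain, `f` an arbitrary smooth target, and the fit is a plain polynomial least-squares restricted to samples that happen to land inside Ω.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical irregular domain: a disconnected subset of [-1, 1]
def in_omega(x):
    return ((-0.9 < x) & (x < -0.2)) | ((0.3 < x) & (x < 0.8))

f = lambda x: np.exp(x) * np.sin(3 * x)      # smooth target function

# Draw candidate samples uniformly; keep only those inside Omega
cand = rng.uniform(-1, 1, 500)
xs = cand[in_omega(cand)]
ys = f(xs)

# Least-squares fit in a polynomial basis, restricted to the samples
deg = 8
coeffs = np.polynomial.polynomial.polyfit(xs, ys, deg)
resid = np.polynomial.polynomial.polyval(xs, coeffs) - ys
print(np.max(np.abs(resid)))                 # small on the sampled domain
```

Even this toy version shows the pathology the paper addresses: the rejection step wastes every candidate that falls outside Ω, and nothing about the uniform proposal adapts to the domain's shape.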

Cited by 15 publications (29 citation statements) · References 39 publications
“…and µ is the measure given by (3.2). Moreover, the coefficients c of f_{Υ,Λ} satisfy (3.5), and the absolute (ℓ², L²)-condition number of the reconstruction operator L_{Υ,Λ}: …”
Section: Theorem 2.2 (Accuracy and Conditioning): There Exists a Cons…
confidence: 99%
“…One solution to this problem is to employ discrete measures supported over a fine grid that suitably fills Ω. This strategy, which uses ideas of [25], has recently been developed in [5,36]. Yet this procedure requires the domain Ω to be known in advance, and requires a fine grid to be generated first.…”
Section: Conclusion and Challenges
confidence: 99%
“…, L_n), either by performing a first discretization of D with a large number of points, or by using a hierarchical method on a sequence of nested spaces [1,3,5,8,15,16]. These additional steps have complexities O(K_n n²) and O(n⁴) respectively, where K_n is the maximal value of the inverse Christoffel function Σ_{j=1}^{n} |L_j|², which might grow more than linearly with n for certain choices of spaces V_n.…”
Section: Computational Aspects
confidence: 99%
“…As a final remark, let us emphasize that although the results presented in our paper are mainly theoretical and not practically satisfactory, owing both to the computational complexity of the sparsification and to the large values of the numerical constants C and K in Theorem 1, they provide some intuitive justification for the boosted least-squares methods presented in [8], which consist in removing points from the initial sample as long as the corresponding Gram matrix G_X remains well conditioned. For instance, Lemma 4 allows one to keep splitting the sample even after L steps, provided one still has a framing (1/2) I ⪯ G_X ⪯ (3/2) I and a sufficiently large ratio |X|/n. Nevertheless, it would be of much interest to find a randomized version of [4] giving a bound of the form (50), since this would provide algorithmic tractability, smaller values for C and K, and the possibility of balancing these constants in Theorem 1.…”
Section: Computational Aspects
confidence: 99%