Abstract. We consider the problem of reconstructing an unknown bounded function u defined on a domain X ⊂ R^d from noiseless or noisy samples of u at n points (x_i)_{i=1,...,n}. We measure the reconstruction error in a norm L^2(X, dρ) for some given probability measure dρ. Given a linear space V_m with dim(V_m) = m ≤ n, we study in general terms the weighted least-squares approximations from the spaces V_m based on independent random samples. It is well known that least-squares approximations can be inaccurate and unstable when m is too close to n, even in the noiseless case. Recent results from [6,7] have shown the interest of using weighted least squares for reducing the number n of samples that is needed to achieve an accuracy comparable to that of best approximation in V_m, compared to standard least squares as studied in [4]. The contribution of the present paper is twofold. From the theoretical perspective, we establish results in expectation and in probability for weighted least squares in general approximation spaces V_m. These results show that for an optimal choice of sampling measure dµ and weight w, which depends on the space V_m and on the measure dρ, stability and optimal accuracy are achieved under the mild condition that n scales linearly with m up to an additional logarithmic factor. In contrast to [4], the present analysis covers cases where the function u and its approximants from V_m are unbounded, which might occur for instance in the relevant case where X = R^d and dρ is the Gaussian measure. From the numerical perspective, we propose a sampling method which allows one to generate independent and identically distributed samples from the optimal measure dµ. This method becomes of interest in the multivariate setting, where dµ is generally not of tensor product type.
We illustrate this for particular examples of approximation spaces V_m of polynomial type, where the domain X is allowed to be unbounded and high or even infinite dimensional, motivated by certain applications to parametric and stochastic PDEs.

Math. classification. 41A10, 41A25, 41A65, 62E17, 93E24.
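As a concrete illustration of the setting above (not the paper's own sampling algorithm), the sketch below fits a weighted least-squares approximation in a Legendre polynomial space V_m on X = [−1, 1], with dρ the uniform measure. A natural candidate for the optimal measure dµ has density k_m/m with respect to dρ, where k_m is the inverse Christoffel function; here it is sampled by crude discretization on a grid. The target function u(x) = e^x, the grid resolution, and the seed are arbitrary choices for the demonstration.

```python
import numpy as np
from numpy.polynomial import legendre

# Orthonormal Legendre basis w.r.t. the uniform measure drho = dx/2 on [-1, 1]:
# the integral of L_j^2 drho is 1/(2j+1), so P_j = sqrt(2j+1) L_j is orthonormal.
def basis(x, m):
    return np.stack([np.sqrt(2*j + 1) * legendre.Legendre.basis(j)(x)
                     for j in range(m)], axis=1)

m, n = 10, 200
rng = np.random.default_rng(0)

# Density of the candidate optimal measure dmu w.r.t. drho: k_m(x)/m, where
# k_m(x) = sum_j P_j(x)^2 is the inverse Christoffel function. We sample it by
# discretizing on a fine grid (a crude substitute for an exact sampler).
grid = np.linspace(-1, 1, 20001)
km = (basis(grid, m)**2).sum(axis=1)
x = rng.choice(grid, size=n, p=km / km.sum())

# Weight w(x) = m / k_m(x), so that w dmu = drho.
B = basis(x, m)
w = m / (B**2).sum(axis=1)

# Weighted least squares for u(x) = exp(x):
# minimize sum_i w_i (u(x_i) - v(x_i))^2 over v in V_m.
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * B, sw * np.exp(x), rcond=None)

# Stability check: the weighted empirical Gram matrix should be close to the
# identity when n >~ m log m, which is what makes the fit well conditioned.
G = (B * w[:, None]).T @ B / n
print(np.linalg.norm(G - np.eye(m), 2))
```

The weight w = dρ/dµ restores orthonormality of the basis in expectation under the sampling measure, which is the mechanism behind the linear-in-m sample complexity discussed above.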
We analyze the problem of approximating a multivariate function by discrete least-squares projection on a polynomial space, starting from random, noise-free observations. An area of possible application of such a technique is uncertainty quantification for computational models. We prove an optimal convergence estimate, up to a logarithmic factor, in the univariate case, when the observation points are sampled in a bounded domain from a probability density function bounded away from zero and bounded from above, provided the number of samples scales quadratically with the dimension of the polynomial space. Optimality is meant in the sense that the weighted L^2 norm of the error committed by the random discrete projection is bounded with high probability from above by the best L^∞ error achievable in the given polynomial space, up to logarithmic factors. Several numerical tests are presented in both the univariate and multivariate cases, confirming our theoretical estimates. The numerical tests also clarify how the convergence rate depends on the number of sampling points, on the polynomial degree, and on the smoothness of the target function.

Keywords. Approximation theory • Error analysis • Multivariate polynomial approximation • Nonparametric regression • Noise-free data • Generalized polynomial chaos • Point collocation

Communicated by Albert Cohen.
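The quadratic-scaling regime described above can be checked with a minimal numerical sketch (not taken from the paper's experiments): the target function (Runge's function), degrees, and trial counts are arbitrary choices, and `chebfit` performs an ordinary least-squares fit in the Chebyshev basis.

```python
import numpy as np

# Discrete least-squares projection of the Runge function onto polynomials
# of degree < m, from n random points drawn uniformly on [-1, 1].
def runge(t):
    return 1.0 / (1.0 + 25.0 * t**2)

def ls_error(m, n, rng, trials=20):
    # Median discrete-L2 error of the random projection over several draws.
    grid = np.linspace(-1, 1, 2001)
    errs = []
    for _ in range(trials):
        x = rng.uniform(-1, 1, n)
        c = np.polynomial.chebyshev.chebfit(x, runge(x), m - 1)  # LS fit
        fit = np.polynomial.chebyshev.chebval(grid, c)
        errs.append(np.sqrt(np.mean((fit - runge(grid))**2)))
    return np.median(errs)

rng = np.random.default_rng(1)
# With the quadratic scaling n = m^2, the projection stays stable and the
# error decreases as the polynomial dimension m grows.
for m in (5, 10, 20):
    print(m, ls_error(m, m * m, rng))
```

Repeating the experiment with n = m instead of n = m² puts the fit in the interpolation regime, where the projection becomes unstable, consistent with the discussion above.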
We consider the linear elliptic equation −div(a∇u) = f on some bounded domain D, where a has the form a = exp(b), with b a random function defined as b(y) = Σ_{j≥1} y_j ψ_j, where y = (y_j) ∈ R^N are i.i.d. standard scalar Gaussian variables and (ψ_j)_{j≥1} is a given sequence of functions in L^∞(D). We study the summability properties of Hermite-type expansions of the solution map y → u(y) ∈ V := H^1_0(D), that is, expansions of the form u(y) = Σ_{ν∈F} u_ν H_ν(y), where H_ν(y) = Π_{j≥1} H_{ν_j}(y_j) are the tensorized Hermite polynomials indexed by the set F of finitely supported sequences of nonnegative integers. Previous results [19] have demonstrated that, for any 0 < p ≤ 1, the ℓ^p summability of the sequence (j ‖ψ_j‖_{L^∞})_{j≥1} implies ℓ^p summability of the sequence (‖u_ν‖_V)_{ν∈F}. Such results ensure convergence rates n^{−s} with s = 1/p − 1/2 of polynomial approximations obtained by best n-term truncation of Hermite series, where the error is measured in the mean-square sense, that is, in L^2(R^N, V, γ), where γ is the infinite-dimensional Gaussian measure. In this paper we considerably improve these results by providing sufficient conditions for the ℓ^p summability of (‖u_ν‖_V)_{ν∈F} expressed in terms of the pointwise summability properties of the sequence (|ψ_j|)_{j≥1}. This leads to a refined analysis which takes into account the amount of overlap between the supports of the ψ_j. For instance, in the case of disjoint supports, our results imply that, for all 0 < p < 2, the ℓ^p summability of (‖u_ν‖_V)_{ν∈F} follows from the weaker assumption that (‖ψ_j‖_{L^∞})_{j≥1} is ℓ^q summable for q := 2p/(2−p) > p. In the case of arbitrary supports, our results imply that the ℓ^p summability of (‖u_ν‖_V)_{ν∈F} follows from the ℓ^p summability of (j^β ‖ψ_j‖_{L^∞})_{j≥1} for some β > 1/2, which still represents an improvement over the condition in [19]. We also explore intermediate cases of functions with local yet overlapping supports, such as wavelet bases.
One interesting observation following from our analysis is that for certain relevant examples, the use of the Karhunen-Loève basis for the representation of b might be suboptimal compared to other representations, in terms of the resulting summability properties of (‖u_ν‖_V)_{ν∈F}. While we focus on the diffusion equation, our analysis applies to other types of linear PDEs with similar lognormal dependence in the coefficients.
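The tensorized Hermite polynomials H_ν appearing in the expansion above can be sketched numerically; the snippet below (with arbitrary example multi-indices, not taken from the paper) checks their orthonormality under the Gaussian measure by Monte Carlo.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e

# Probabilists' Hermite polynomial He_k, normalized to be orthonormal w.r.t.
# the standard Gaussian: E[He_k(Y)^2] = k!, so divide by sqrt(k!).
def h(k, t):
    return hermite_e.HermiteE.basis(k)(t) / math.sqrt(math.factorial(k))

# Tensorized polynomial H_nu(y) = prod_j h(nu_j, y_j) for a finitely
# supported multi-index nu, represented here as a dict {coordinate: degree}.
def H(nu, Y):
    out = np.ones(Y.shape[0])
    for j, k in nu.items():
        out *= h(k, Y[:, j])
    return out

# Monte Carlo check of orthonormality: E[H_nu H_mu] = 1 if nu == mu, else 0.
rng = np.random.default_rng(2)
Y = rng.standard_normal((200000, 3))
nu, mu = {0: 2, 2: 1}, {1: 1}
a, b = H(nu, Y), H(mu, Y)
print((a * a).mean())  # close to 1
print((a * b).mean())  # close to 0
```

The finite support of each ν is what makes the infinite product over j well defined: all but finitely many factors equal h(0, ·) = 1.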