We consider the approximate recovery of multivariate periodic functions from a discrete set of function values taken on a rank-$s$ integration lattice. The main result is that any (non-)linear reconstruction algorithm taking function values on a rank-$s$ lattice of size $M$ has a dimension-independent lower bound of $2^{-(\alpha+1)/2} M^{-\alpha/2}$ for the optimal worst-case error with respect to function spaces of (hybrid) mixed smoothness $\alpha > 0$ on the $d$-torus. We complement this lower bound with upper bounds that coincide up to logarithmic terms. These upper bounds are obtained by a detailed analysis of a rank-1 lattice sampling strategy, where the rank-1 lattices are constructed by a component-by-component (CBC) method. This improves on earlier results obtained in [25] and [27]. The lattice (group) structure allows for an efficient approximation of the underlying function from its sampled values using a single one-dimensional fast Fourier transform. This is one reason why these algorithms keep attracting significant interest. We compare our results to recent (almost) optimal methods based upon samples on sparse grids.

This paper deals with the reconstruction of multivariate periodic functions from a discrete set of $M$ function values along rank-1 lattices. Such lattices have been widely used for the efficient numerical integration of multivariate periodic functions since the 1950s [1,21,29,35,6] and represent a well-distributed set of points in $[0,1)^d$.
A rank-1 lattice with $M \in \mathbb{N}$ points and generating vector $z \in \mathbb{Z}^d$ is given by $\Lambda(z, M) := \{ \frac{j}{M} z \bmod 1 : j = 0, \ldots, M-1 \}$. In this paper we show that restricting the set of available discrete information to samples from a rank-$s$ lattice, cf. [35], seriously affects the rate of convergence of the corresponding worst-case error with respect to classes of functions with (hybrid) mixed smoothness $\alpha > 0$. To be more precise, for any (possibly nonlinear) reconstruction procedure from sampled values along rank-$s$ lattices we can find a function in the periodic Sobolev space $H^{\alpha}_{\mathrm{mix}}$ such that the $L_2(\mathbb{T}^d)$ error is at least $2^{-(\alpha+1)/2} M^{-\alpha/2}$. In contrast, it has been proved recently that sampling recovery from (energy-norm based) sparse grids leads to much better convergence rates, namely $M^{-\alpha}$ in the main term, see [33,41,4]. Subsequently, we study particular reconstruction algorithms, which are based on the straightforward approach of approximating the potentially "largest" Fourier coefficients (integrals) with the same rank-1 lattice rule. Despite lacking asymptotic optimality, recovery from so-called reconstructing rank-1 lattices, cf. [15,18], has some striking advantages. First, the matrix of the underlying linear system of equations has orthogonal columns due to the group structure [2] and the reconstructing property of the rank-1 lattices used. Consequently, the computation is stable, cf. [17,15]. Second, the CBC strategy [14, Tab. 3.1] provides a search method for a reconstructing rank-1 lattice which allows for the computation of the approximate Fourier coefficients belonging to frequencies lying on ...
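The rank-1 lattice rule just described can be sketched in a few lines of Python. This is an illustrative toy implementation, not code from the paper; the function names and the single-frequency test function are our own choices:

```python
import numpy as np

def rank1_lattice(z, M):
    """Nodes x_j = (j * z / M) mod 1, j = 0, ..., M-1, of a rank-1 lattice."""
    j = np.arange(M)[:, None]
    return (j * np.asarray(z)[None, :] / M) % 1.0

def lattice_rule_fourier(f, z, M, k):
    """Approximate the Fourier coefficient fhat(k) by the lattice rule
    (1/M) * sum_j f(x_j) * exp(-2*pi*i*<k, x_j>).
    Due to the group structure, all such sums over one lattice reduce
    to a single one-dimensional DFT in the index j."""
    x = rank1_lattice(z, M)
    phase = np.exp(-2j * np.pi * (x @ np.asarray(k)))
    return np.mean(f(x) * phase)

# Sanity check: f(x) = exp(2*pi*i*<k0, x>) has fhat(k0) = 1, and the
# lattice rule reproduces it exactly for this single frequency.
z, M, k0 = (1, 3), 16, (2, 1)
f = lambda x: np.exp(2j * np.pi * (x @ np.array(k0)))
print(abs(lattice_rule_fourier(f, z, M, k0) - 1.0) < 1e-12)  # prints True
```

Aliasing is the catch: the rule returns the sum of $\hat{f}(h)$ over all frequencies $h$ with $\langle h - k, z \rangle \equiv 0 \pmod M$, which is why reconstructing lattices with a suitable generating vector $z$ are needed.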
We investigate the rate of convergence of linear sampling numbers of the embedding $H^{\alpha,\beta}(\mathbb{T}^d) \hookrightarrow H^{\gamma}(\mathbb{T}^d)$. Here $\alpha$ governs the mixed smoothness and $\beta$ the isotropic smoothness in the space $H^{\alpha,\beta}(\mathbb{T}^d)$ of hybrid smoothness, whereas $H^{\gamma}(\mathbb{T}^d)$ denotes the isotropic Sobolev space. If $\gamma > \beta$ we obtain sharp polynomial decay rates for the first embedding, realized by sampling operators based on "energy-norm based sparse grids" combined with classical trigonometric interpolation. This complements earlier work by Griebel, Knapek and Dũng, Ullrich, where general linear approximations were considered. In addition, we study the embedding $H^{\alpha}_{\mathrm{mix}}(\mathbb{T}^d) \hookrightarrow H^{\gamma}_{\mathrm{mix}}(\mathbb{T}^d)$ and achieve optimality for Smolyak's algorithm applied to classical trigonometric interpolation. This can be applied to investigate the sampling numbers for the embedding $H^{\alpha}_{\mathrm{mix}}(\mathbb{T}^d) \hookrightarrow L_q(\mathbb{T}^d)$ for $2 < q \le \infty$, where again Smolyak's algorithm yields the optimal order. The precise decay rates for the sampling numbers in the mentioned situations always coincide with those for the approximation numbers, except possibly in the limiting situation $\beta = \gamma$ (including the embedding into $L_2(\mathbb{T}^d)$). The best we could prove there is a (probably) non-sharp result with a logarithmic gap between lower and upper bound.

We consider the approximation of functions from $H^{\alpha,\beta}(\mathbb{T}^d)$ in an isotropic Sobolev space $H^{\gamma}(\mathbb{T}^d)$. This is motivated by the use of Galerkin methods for the $H^1(\mathbb{T}^d)$-approximation of the solution of general elliptic variational problems, see, e.g., [1,2,11,10,12,24]. The present paper can be seen as a continuation of [9], where finite-rank approximations in the sense of approximation numbers were studied. The latter are defined as
$$a_n(T) := \inf \{ \|T - A\| : A \in \mathcal{L}(X, Y), \ \operatorname{rank} A < n \},$$
where $X, Y$ are Banach spaces and $T \in \mathcal{L}(X, Y)$, with $\mathcal{L}(X, Y)$ denoting the space of all bounded linear operators $T : X \to Y$.
In contrast, in this paper we restrict the class of admissible algorithms even further and deal with the problem of the optimal recovery of $H^{\alpha,\beta}$-functions from only a finite number of function values, where optimality in the worst-case setting is commonly measured in terms of linear sampling numbers.
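The linear sampling numbers mentioned above are commonly defined as follows (we write out the standard definition; the notation in the papers themselves may differ slightly):
$$g_n(T : X \to Y) := \inf_{x_1, \ldots, x_n} \; \inf_{\substack{\varphi : \mathbb{C}^n \to Y \\ \text{linear}}} \; \sup_{\|f\|_X \le 1} \big\| Tf - \varphi\big(f(x_1), \ldots, f(x_n)\big) \big\|_Y.$$
Compared to the approximation numbers $a_n$, the infimum here runs only over algorithms built from $n$ function values rather than arbitrary linear information, so the sampling numbers are always bounded below by the corresponding approximation numbers.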
In this paper we consider the $L_q$-approximation of multivariate periodic functions $f$ with $L_p$-bounded mixed derivative (difference). The (possibly nonlinear) reconstruction algorithm is supposed to recover the function from function values sampled on a discrete set of $n$ sampling nodes. The general performance is measured in terms of (non-)linear sampling widths $\varrho_n$. We conduct a systematic analysis of Smolyak-type interpolation algorithms in the framework of Besov-Lizorkin-Triebel spaces of dominating mixed smoothness, based on specifically tailored discrete Littlewood-Paley type characterizations. As a consequence, we provide sharp upper bounds for the asymptotic order of the (non-)linear sampling widths in various situations and close some gaps in the existing literature. For example, in case $2 \le p < q < \infty$ and $r > 1/p$ the linear sampling widths show the asymptotic behavior of the corresponding Gelfand $n$-widths, whereas in case $1 < p < q \le 2$ and $r > 1/p$ the linear sampling widths match the corresponding linear widths. In the mentioned cases linear Smolyak interpolation based on univariate classical trigonometric interpolation turns out to be optimal.
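Spaces of dominating mixed smoothness are closely tied to hyperbolic-cross frequency sets, which underlie Smolyak-type constructions. The following sketch (an illustration of the standard index set, not code from the papers) enumerates such a set and shows how much smaller it is than the full frequency grid:

```python
from itertools import product
from math import prod

def hyperbolic_cross(N, d):
    """All frequencies k in Z^d with prod_i max(1, |k_i|) <= N.
    For functions of dominating mixed smoothness, these index the
    'potentially largest' Fourier coefficients."""
    rng = range(-N, N + 1)
    return [k for k in product(rng, repeat=d)
            if prod(max(1, abs(ki)) for ki in k) <= N]

hc = hyperbolic_cross(4, 2)
full = (2 * 4 + 1) ** 2        # full grid: 81 frequencies
print(len(hc), full)           # the hyperbolic cross keeps 49 of 81
```

The cardinality of this set grows like $N \log^{d-1} N$, as opposed to $(2N+1)^d$ for the full grid, which is the combinatorial reason sparse-grid and Smolyak methods can beat full tensor-product sampling.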
We consider the order of convergence for linear and nonlinear Monte Carlo approximation of compact embeddings from Sobolev spaces of dominating mixed smoothness defined on the torus $\mathbb{T}^d$ into the space $L_\infty(\mathbb{T}^d)$ via methods that use arbitrary linear information. These cases are interesting because one can gain a speedup of up to $1/2$ in the main rate compared to worst-case approximation. In doing so we determine the rate for some cases that had been left open by Fang and Duan.