We show that independent and uniformly distributed sampling points are as good as optimal sampling points for the approximation of functions from the Sobolev space W^s_p(Ω) on bounded convex domains Ω ⊂ R^d in the L_q-norm if q < p. More generally, we characterize the quality of arbitrary sampling points P ⊂ Ω via the L_γ(Ω)-norm of the distance function dist(·, P), where γ = s(1/q − 1/p)^{-1} if q < p and γ = ∞ if q ≥ p. This improves upon previous characterizations based on the covering radius of P.
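The characterization above compares two ways of scoring a point set P: the L_γ(Ω)-norm of dist(·, P) and the covering radius (its L_∞ counterpart). A minimal numerical sketch, using iid uniform points on [0, 1]^2 and Monte Carlo estimation of the norms (the choice γ = 4, e.g. s = 2 and 1/q − 1/p = 1/2, and all sizes are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2              # dimension of the domain [0, 1]^d
n_points = 200     # size of the sampling set P
n_eval = 10_000    # Monte Carlo points used to estimate the norms

P = rng.random((n_points, d))   # iid uniform sampling points
X = rng.random((n_eval, d))     # evaluation points for the norms

# dist(x, P) = min_{p in P} |x - p|, computed for every evaluation point
dists = np.min(np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2), axis=1)

gamma = 4.0   # example exponent gamma = s(1/q - 1/p)^{-1}
l_gamma = np.mean(dists ** gamma) ** (1 / gamma)   # L_gamma norm of dist(., P)
covering_radius = dists.max()                      # ~ L_infty norm of dist(., P)

print(l_gamma, covering_radius)
```

Since the L_γ-norm is taken with respect to a probability measure, it is always bounded by the covering radius; the point of the result is that for q < p the smaller L_γ quantity is the one that matters.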
We show that the isotropic discrepancy of a lattice point set can be bounded from below and from above in terms of the spectral test of the corresponding integration lattice. From this we deduce that the isotropic discrepancy of any N-element lattice point set in [0, 1)^d is at least of order N^{-1/d}. This order of magnitude is best possible for lattice point sets in dimension d.
We study L_q-approximation and integration for functions from the Sobolev space W^s_p(Ω) and compare optimal randomized (Monte Carlo) algorithms with algorithms that can only use iid sample points, uniformly distributed on the domain. The main result is that we obtain the same optimal rate of convergence if we restrict to iid sampling, a common assumption in learning and uncertainty quantification. The only exception is when p = q = ∞, where a logarithmic loss cannot be avoided.
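The iid-sampling model in this abstract is the one behind plain Monte Carlo integration: draw uniform points and average. A small sketch with a smooth test function on [0, 1]^2 (the function and sample sizes are my own illustrative choices, standing in for a Sobolev function):

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # smooth test integrand on [0, 1]^2 (stand-in for a Sobolev function)
    return np.sin(np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1] / 2)

# exact integral: (int_0^1 sin(pi u) du) * (int_0^1 cos(pi v / 2) dv)
#               = (2 / pi) * (2 / pi)
exact = (2 / np.pi) * (2 / np.pi)

for n in (10**2, 10**3, 10**4):
    X = rng.random((n, 2))           # n iid uniform sample points
    estimate = f(X).mean()           # plain Monte Carlo estimator
    print(n, abs(estimate - exact))
```

The plain estimator converges at the rate n^{-1/2}; the abstract's point is that, for Sobolev classes, cleverer algorithms restricted to iid uniform samples still attain the optimal randomized rate (except when p = q = ∞).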
In this work the ℓ_q-norms of points chosen uniformly at random in a centered regular simplex in high dimensions are studied. Berry-Esseen bounds in the regime 1 ≤ q < ∞ are derived and complemented by a non-central limit theorem together with moderate and large deviations in the case where q = ∞. A comparison with corresponding results for ℓ_p^n-balls is carried out as well.
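The random vectors in this abstract can be simulated directly: a Dirichlet(1, …, 1) draw is uniform on the standard simplex, which can then be centered at its barycenter (this centering, and the dimensions used, are my own simplified stand-in for the paper's centered regular simplex):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500    # ambient dimension
m = 2000   # number of random points

# Dirichlet(1, ..., 1) is uniform on the standard simplex
# {x_i >= 0, sum x_i = 1}; subtract the barycenter to center it.
X = rng.dirichlet(np.ones(n), size=m) - 1.0 / n

for q in (1, 2, np.inf):
    norms = np.linalg.norm(X, ord=q, axis=1)
    print(q, norms.mean(), norms.std())   # concentration of the l_q-norm
```

The Berry-Esseen regime 1 ≤ q < ∞ shows up empirically as Gaussian-looking fluctuations of these norms around their means, while q = ∞ behaves differently, as the non-central limit theorem indicates.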