2009
DOI: 10.1007/s00365-009-9053-3

Kolmogorov Entropy for Classes of Convex Functions

Cited by 16 publications (33 citation statements: 4 supporting, 29 mentioning, 0 contrasting) · References 8 publications · Citing publications from 2014 to 2023
“…This only requires getting tight estimates of the covering number of bounded sequences in $K_n$. We show that this covering number is very similar to the one given in Dryanov (2009) (and hence has no logarithmic factors), as the set of bounded sequences in $K_n$ is not much bigger than $K_n$ itself. In this way our extra refinement step enables us to save a logarithmic factor of $n$.…”
Section: Proof Sketch for Theorem 2.1 (supporting)
confidence: 71%
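For reference, the underlying result of Dryanov (2009) in the form it is usually quoted (my paraphrase, not the paper's exact statement; $B > 0$ fixed and $1 \le p < \infty$):
\[
\log N\Bigl(\varepsilon,\ \bigl\{f : [a,b] \to \mathbb{R} \ \text{convex},\ \sup_{x} |f(x)| \le B\bigr\},\ \|\cdot\|_{L^p}\Bigr) \;\asymp\; \varepsilon^{-1/2},
\]
with no logarithmic factor in $\varepsilon$, which is exactly the sharpness the quoted proof sketch exploits.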
“…Specifically the result of Dryanov (2009) gives us tight upper bounds on the covering number of the space $\{\theta \in K_n : \max_{1 \le i \le n} |\theta_i| \le B\}$ for any constant $B$. But we require covering numbers for $K_n$ intersected with a Euclidean ball of radius $t$. This was done in Guntuboyina and Sen (2013) by applying the basic result of Dryanov (2009) in appropriate subintervals. This approach gives rise to the logarithmic factors in the risk bound that had appeared in Guntuboyina and Sen (2013).…”
Section: Proof Sketch for Theorem 2.1 (mentioning)
confidence: 99%
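Spelled out, the two covering problems being contrasted are (my rendering of the quote's notation; $K_n$ the class of convex sequences, $N(\varepsilon, \cdot)$ an $\varepsilon$-covering number):
\[
N\bigl(\varepsilon,\ \{\theta \in K_n : \max_{1 \le i \le n} |\theta_i| \le B\}\bigr)
\quad\text{versus}\quad
N\bigl(\varepsilon,\ \{\theta \in K_n : \|\theta\|_2 \le t\}\bigr).
\]
The first (a sup-norm ball) is what Dryanov (2009) controls directly; the second (a Euclidean ball) is what the risk analysis needs, and bridging the two by subinterval decomposition is where the logarithmic factors entered in Guntuboyina and Sen (2013).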
“…This result extended, in a sense, the one in [23, 17, 13] to a stronger norm, the $W^{1,1}$-norm instead of the $L^1$-norm. Motivated by the results in [24, 23, 17, 13, 3] and a possible application to Hamilton-Jacobi equations with non-strictly convex Hamiltonians, we will provide in the present paper upper and lower estimates of the $\varepsilon$-entropy for a class of uniformly bounded total variation functions in $L^1$-space in the multi-dimensional case. In particular, our result shows that the minimal number of functions needed to represent a function of bounded variation with an error $\varepsilon$ with respect to the $L^1$-distance is of the order of $\frac{1}{\varepsilon^n}$.…”
Section: Introduction (supporting)
confidence: 74%
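As a display, the quoted claim reads roughly as follows (my reconstruction; $H_\varepsilon$ denotes the Kolmogorov $\varepsilon$-entropy, i.e. the logarithm of the minimal covering number, $n$ the space dimension, and the exact class definition and constants are as in the citing paper):
\[
H_\varepsilon\bigl(\mathcal{F},\ L^1\bigr) \;\asymp\; \varepsilon^{-n},
\]
for a class $\mathcal{F}$ of uniformly bounded functions of uniformly bounded total variation on an $n$-dimensional domain.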
“…where $n$ is the dimension of the state variable. The result was previously studied for scalar state variables in [17] and for convex functions that are uniformly bounded and uniformly Lipschitz with a known Lipschitz constant in [13]. These results have direct implications in the study of rates of convergence of empirical minimization procedures (see e.g.…
Section: Introduction (mentioning)
confidence: 93%
“…Here $\lesssim$ means $\le$ up to a constant which does not depend on $\varepsilon$ (but does depend on $D$, $B$, and $p$). The $d = 1$ case (from Dryanov (2009)) was the fundamental building block in computing global rates of convergence of the univariate log-concave and $s$-concave MLEs in Doss and Wellner (2016a). In the corresponding statistical problems when $d > 1$, the domain of the functions under consideration is not restricted to be a hyperrectangle but rather may be an arbitrary convex set $D$. Thus the results of Guntuboyina and Sen (2013) are not immediately applicable, and there is a need for results on more general convex domains $D$ with a more complicated boundary and no Lipschitz constraints.…”
Section: Introduction (mentioning)
confidence: 99%
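For context, the shape of the entropy bounds at stake, as commonly quoted (my paraphrase across the cited papers, not the exact statement of any one of them): for convex functions on a $d$-dimensional domain that are uniformly bounded, with a uniform Lipschitz constraint on a hyperrectangle in Guntuboyina and Sen (2013) and without it on a general convex body $D$ in the direction pursued here,
\[
\log N\bigl(\varepsilon,\ \mathcal{C},\ L^p\bigr) \;\lesssim\; \varepsilon^{-d/2},
\]
with the $d = 1$ case, free of logarithmic factors, supplied by Dryanov (2009).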