1991
DOI: 10.1214/aos/1176348114

Geometrizing Rates of Convergence, II

Cited by 192 publications (123 citation statements); references 49 publications.
“…, $y_n)$, they use the absolute variation metric (the $L_1$ norm)$^{11}$ $d_n(\theta, \theta') = \int_y |f(y;\theta) - f(y;\theta')|\,dy$, and show that, if a $\delta_n^{-1}$-consistent estimator exists, then, for each $\theta \in \Theta$ and every $\epsilon > 0$, there exists a positive number $t_0$ such that, for any $t \geq t_0$, …”
Section: Maximal Uniform Convergence Rates (mentioning)
confidence: 99%
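
As a concrete illustration of the metric quoted above, the following minimal Python sketch approximates the absolute variation ($L_1$) distance $d(\theta, \theta') = \int |f(y;\theta) - f(y;\theta')|\,dy$ by a simple Riemann sum. The Gaussian location family, the function name l1_distance, and the grid choice are illustrative assumptions, not taken from the cited papers.

```python
# Minimal sketch, assuming a Gaussian location family f(y; theta) = N(theta, sigma^2);
# the model and helper name are illustrative, not from the cited papers.
import numpy as np

def l1_distance(theta1, theta2, sigma=1.0):
    """Approximate d(theta1, theta2) = integral of |f(y; theta1) - f(y; theta2)| dy."""
    grid = np.linspace(-20.0, 20.0, 20001)   # grid wide enough to cover both densities
    dy = grid[1] - grid[0]
    def density(y, theta):
        return np.exp(-0.5 * ((y - theta) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    # Riemann-sum approximation of the integral of |f1 - f2|
    return float(np.sum(np.abs(density(grid, theta1) - density(grid, theta2))) * dy)

print(l1_distance(0.0, 0.5))  # nearby parameters give a small distance
print(l1_distance(0.0, 5.0))  # well-separated parameters approach the maximum value 2
```
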
“…Example 2 (continued): Prakasa Rao (1968) shows that, for $\alpha = 1$ and $0 < \beta < 1/2$, the maximum likelihood estimator for the location parameter $\theta$ converges at the (inverse)… [footnote 11: Donoho and Liu (1991b) also treat some nonlinear cases: estimating the rate of decay and the mode of a density, and robust nonparametric regression.]…”
Section: Maximal Uniform Convergence Rates (mentioning)
confidence: 99%
“…Here we only reflect on “general” ideas true to the Le Cam spirit. The proof uses a rescaling argument similar to Theorem 1 of [4]. The following lower bound is along the lines of [2, 3].…”
Section: Rigor? (mentioning)
confidence: 99%
“…For various types of measurements $Y^n = (y_1; y_2; \ldots; y_n)$, problems of this form arise in statistical settings, such as nonparametric density estimation and nonparametric regression estimation; but they also arise in signal recovery and image processing. In such problems, there generally exists an “optimal rate of convergence”: the minimax risk from $n$ observations, $R(n) = \inf_T \sup_{f \in F} E(T(Y^n) - T(f))^2$, tends to zero as $R(n) \asymp n^{-r}$. There is an extensive literature on the determination of such optimal rates for a variety of functionals $T$, function classes $F$, and types of observation $Y^n$; the literature is really too extensive to list here, although we mention [7], [15], and [16].…”
Section: Introduction (mentioning)
confidence: 99%
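
To make the quoted rate statement concrete, the following minimal Monte Carlo sketch approximates the worst-case risk $\sup_{f \in F} E(T(Y^n) - T(f))^2$ for one simple case and shows it shrinking as $n$ grows. The choices here (the functional $T(f)$ is the mean, the estimator is the sample mean, and the class $F$ is stood in for by three candidate means) are illustrative assumptions, not the construction used in the cited work.

```python
# Minimal sketch, assuming T(f) is the mean, T(Y^n) is the sample mean, and the
# class F is replaced by a small finite set of candidate means; these are
# illustrative assumptions, not the cited papers' setting.
import numpy as np

rng = np.random.default_rng(0)

def worst_case_risk(n, candidate_means=(-1.0, 0.0, 1.0), reps=5000):
    """Monte Carlo estimate of the largest E(T(Y^n) - T(f))^2 over the candidates."""
    risks = []
    for mu in candidate_means:
        samples = rng.normal(loc=mu, scale=1.0, size=(reps, n))   # reps draws of Y^n
        estimates = samples.mean(axis=1)                          # T(Y^n): the sample mean
        risks.append(np.mean((estimates - mu) ** 2))              # E(T(Y^n) - T(f))^2
    return max(risks)

for n in (10, 100, 1000):
    print(n, worst_case_risk(n))  # decays roughly like 1/n, i.e. rate exponent r = 1
```

Under these assumptions the printed risks scale roughly like $1/n$, which is what the excerpt means by the minimax risk tending to zero at an algebraic rate $n^{-r}$.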