1999
DOI: 10.1214/aop/1022677459
Approximation, Metric Entropy and Small Ball Estimates for Gaussian Measures

Cited by 156 publications (105 citation statements)
References 25 publications
“…Remark 3.5 (Rescaled Gaussian Priors). While the use of Gaussian process techniques [3,15,26] in the proof of Theorem 3.2 is inspired by previous work in [42,43] and also [17] for 'direct' problems, the inverse setting poses several challenges, particularly in the nonlinear case. In our proofs we show how these challenges can be overcome by shrinking common Gaussian process priors towards the origin as in (3.4): the shrinkage enforces the necessary additional 'a priori' regularisation of the posterior distribution to permit the use of our stability estimates.…”
Section: Remarks and Discussion
confidence: 99%
“…Since our regression functions C take values in SO(n), they are uniformly bounded and the usual Hellinger distance occurring in such contraction theorems is then Lipschitz-equivalent to the standard L²-distance (see Lemma 5.14). Then Lemma 5.16 uses results of [26] to show that the key small ball condition in Theorem 5.13 can be verified for the Gaussian priors from Condition 3.1 even after they have been shrunk towards 0, if the true matrix field Θ₀ belongs to the RKHS H.…”
Section: Consistency of the Posterior Mean: Proof of Theorem 3.2
confidence: 99%
“…(1.9) below. Such small ball conditions have been extensively studied in the theory of Gaussian measures; see for instance [21,22]. An important result in that area of research shows that the small ball probability of a Gaussian measure μ is closely related to the behavior of the entropy numbers of the unit ball K_μ of a certain reproducing kernel Hilbert space H_μ associated with μ.…”
Section: Small Ball Probabilities and Gaussian Measures
confidence: 99%
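The small-ball/entropy link referred to in the quote above has a clean polynomial form. As a sketch (stating only the polynomial case, with logarithmic refinements omitted), writing φ(ε) for the small ball function of the Gaussian measure μ, the Kuelbs–Li / Li–Linde equivalence reads:

```latex
% Polynomial case of the small-ball / metric-entropy equivalence
% (logarithmic factors omitted for brevity); here \alpha > 0:
\varphi(\varepsilon) := -\log \mu\bigl(\{x : \lVert x\rVert \le \varepsilon\}\bigr)
  \asymp \varepsilon^{-\alpha}
\quad\Longleftrightarrow\quad
H(K_\mu, \varepsilon) \asymp \varepsilon^{-\frac{2\alpha}{2+\alpha}} .

% Example: Wiener measure on C[0,1] has \varphi(\varepsilon) \asymp \varepsilon^{-2}
% (so \alpha = 2), giving H(K_\mu, \varepsilon) \asymp \varepsilon^{-1},
% consistent with the known entropy of the Cameron--Martin unit ball
% K_\mu = \{ f : f(0) = 0,\ \textstyle\int_0^1 |f'(t)|^2\,dt \le 1 \}.
```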
“…To see that the questions considered in [21,22] are different from the ones studied here, note that the Gaussian measures considered in [21,22] are not supported on K_μ and furthermore that the entropy numbers of K_μ always satisfy H(K_μ, ε) ∈ o(ε⁻²) as ε → 0, a property that is in general not shared by the signal classes S = Ball(0, 1; B^τ_{p,q}(Ω; ℝ)) and S := Ball(0, 1; W^{k,p}(Ω)) that we consider. Finally, we mention that a (non-trivial) modification of our proof shows that the measure P constructed in Theorem 1 can be chosen to be (the restriction of) a suitable centered Gaussian measure.…”
Section: Small Ball Probabilities and Gaussian Measures
confidence: 99%
“…; e.g., see the books by Kolmogorov and Tihomirov (1961), Lorentz (1966), Carl and Stephani (1990), Edmunds and Triebel (1996). Among many beautiful results are the duality theorem (Tomczak-Jaegermann (1987), Artstein et al. (2004)) and the small ball probability connection (Kuelbs and Li (1993), Li and Linde (1999)), which will be used in this paper. Nevertheless, the estimate of metric entropy for specific function classes remains difficult, especially the lower bound estimate, which often requires a construction of a well-separated subset.…”
Section: Introduction
confidence: 97%
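To make the small ball probabilities appearing in these quotes concrete, the following is a minimal, self-contained sketch (Python; all function names are hypothetical, not from any cited paper) for the standard Wiener measure: it compares a Monte Carlo estimate of P(sup_{0≤t≤1} |B_t| ≤ ε) on a discretized path against the classical alternating-series formula. Note the discretized supremum misses excursions between grid points, so the simulated staying probability slightly overestimates the continuous-time one.

```python
import math
import random

def small_ball_exact(eps, terms=50):
    """P(sup_{0<=t<=1} |B_t| <= eps) for standard Brownian motion via
    the classical series (4/pi) * sum_{k>=0} (-1)^k / (2k+1)
    * exp(-(2k+1)^2 * pi^2 / (8 eps^2))."""
    s = 0.0
    for k in range(terms):
        s += ((-1) ** k) / (2 * k + 1) * math.exp(
            -((2 * k + 1) ** 2) * math.pi ** 2 / (8 * eps ** 2)
        )
    return 4.0 / math.pi * s

def small_ball_mc(eps, n_paths=5000, n_steps=500, seed=0):
    """Monte Carlo sketch: fraction of Euler-discretized Brownian paths
    on [0, 1] that stay inside the band [-eps, eps]; exits early once
    a path leaves the band."""
    rng = random.Random(seed)
    step_sd = math.sqrt(1.0 / n_steps)
    hits = 0
    for _ in range(n_paths):
        b = 0.0
        inside = True
        for _ in range(n_steps):
            b += rng.gauss(0.0, step_sd)
            if abs(b) > eps:
                inside = False
                break
        hits += inside
    return hits / n_paths

eps = 0.8
print("series:", small_ball_exact(eps))  # roughly (4/pi) exp(-pi^2/(8 eps^2))
print("monte carlo:", small_ball_mc(eps))
```

For small ε the probability decays like exp(-π²/(8ε²)), i.e. the small ball function φ(ε) ≍ ε⁻² discussed in the quotes above, which is why naive simulation becomes useless in exactly the regime where the entropy-based estimates take over.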