2015
DOI: 10.1214/15-aos1341
On adaptive posterior concentration rates

Abstract: We investigate the problem of deriving posterior concentration rates under different loss functions in nonparametric Bayes. We first provide a lower bound on posterior coverages of shrinking neighbourhoods that relates the metric or loss under which the shrinking neighbourhood is considered to an intrinsic pre-metric linked to frequentist separation rates. In the Gaussian white noise model, we construct feasible priors based on a spike-and-slab procedure reminiscent of wavelet thresholding that achieve adapt…
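As a minimal illustration of the kind of spike-and-slab construction the abstract refers to, the following sketch computes the per-coordinate posterior in the Gaussian sequence (white noise) model y_i = θ_i + σ ξ_i with prior θ_i ~ (1 − w) δ_0 + w N(0, τ²). The function name, the specific prior weights, and the toy data are assumptions for illustration, not the paper's actual prior calibration:

```python
import math

def spike_slab_posterior(y, sigma2, w, tau2):
    """Per-coordinate posterior for y_i = theta_i + noise, noise ~ N(0, sigma2),
    under the prior theta_i ~ (1 - w) * delta_0 + w * N(0, tau2).

    Returns a list of pairs (posterior inclusion probability, posterior mean).
    """
    def normal_pdf(x, var):
        return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

    out = []
    for yi in y:
        slab = w * normal_pdf(yi, tau2 + sigma2)    # marginal density under the slab
        spike = (1.0 - w) * normal_pdf(yi, sigma2)  # marginal density under the spike
        p_incl = slab / (slab + spike)              # P(theta_i != 0 | y_i)
        shrink = tau2 / (tau2 + sigma2)             # posterior-mean shrinkage factor
        out.append((p_incl, p_incl * shrink * yi))  # (inclusion prob., E[theta_i | y_i])
    return out

# Toy example with noise level sigma2 = 1/n for n = 100: coordinates near zero
# get inclusion probability close to 0 (thresholded away, as in wavelet
# thresholding), while a clearly nonzero coordinate is kept almost surely.
post = spike_slab_posterior([0.02, 0.5, -0.01], sigma2=0.01, w=0.1, tau2=1.0)
```

The thresholding analogy is visible in `p_incl`: it switches sharply from near 0 to near 1 as |y_i| crosses a level of order the noise standard deviation.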

Cited by 70 publications (115 citation statements); references 28 publications.
“…As we have mentioned at the end of Section 5.2, the current Bayesian nonparametric technique for proving posterior contraction rates only covers losses that are of the same order as the Kullback-Leibler divergence. It cannot handle other non-intrinsic losses [16]. In the Bayesian matrix estimation setting, whether we can show the following conclusion…”
Section: Relation To Matrix Estimation Under Non-Frobenius Loss
confidence: 90%
“…Under the condition of posterior concentration rate ε_n(θ) at θ, E_θ[r_n(γ)] ≲ ε_n; see [3] for details on this result. So the only thing that remains to be verified is that lim inf…”
Section: On Polished Tail Parameters, A Key Notion In This Paper Is T…
confidence: 93%
“…In [3] it is observed that when the posterior distribution has concentration rate ε_n under some loss function ℓ(·, ·), and if there exists an estimate, say,…”
confidence: 99%
“…, X_n) iid∼ N_p(0, Σ_0). In the literature, the posterior is said to achieve the minimax rate if its convergence rate is the same as the frequentist minimax rate ([36]; [21]; [29]). Since the posterior convergence rate cannot be faster than the frequentist minimax rate ([28]), it is often called the optimal rate of posterior convergence ([40]; [38]).…”
Section: Decision Theoretic Prior Selection
confidence: 99%
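The last citation statement's claim, that the posterior cannot contract faster than the frequentist minimax rate, can be written as a one-line sketch. The notation (loss ℓ, parameter set Θ, rate ε_n) is assumed standard rather than taken verbatim from the cited works:

```latex
% If the posterior contracts at rate \varepsilon_n uniformly over \Theta,
% a posterior-based point estimator inherits that rate, hence:
\sup_{\theta_0\in\Theta} E_{\theta_0}\,
  \Pi\!\left(\ell(\theta,\theta_0) > M\varepsilon_n \,\middle|\, X^{(n)}\right) \to 0
\quad\Longrightarrow\quad
\varepsilon_n \gtrsim \inf_{\hat\theta}\,\sup_{\theta_0\in\Theta}
  E_{\theta_0}\,\ell\bigl(\hat\theta,\theta_0\bigr).
```

The right-hand infimum over all estimators is exactly the frequentist minimax risk, which is why ε_n achieving it is called the optimal rate of posterior convergence.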