2018
DOI: 10.1214/18-ba1094

Optimal Bayesian Minimax Rates for Unconstrained Large Covariance Matrices

Abstract: We obtain the optimal Bayesian minimax rate for the unconstrained large covariance matrix of a multivariate normal sample with mean zero, when both the sample size, n, and the dimension, p, of the covariance matrix tend to infinity. Traditionally, the posterior convergence rate is used to compare the frequentist asymptotic performance of priors, but defining optimality with it is elusive. We propose a new decision-theoretic framework for prior selection and define the Bayesian minimax rate. Under the proposed framework, …
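For orientation, here is a schematic rendering of the decision-theoretic objects the abstract refers to; the symbols $R_n(\pi,\Sigma_0)$, $\mathcal{C}_p$, and the squared spectral-norm loss are notational assumptions made here, not quotations from the paper.

```latex
% Schematic only: R_n, \mathcal{C}_p, and the loss below are assumed notation.
% P-loss risk of a prior \pi at a true covariance \Sigma_0: the posterior
% expected loss, averaged over data X_1,...,X_n drawn under \Sigma_0.
\[
  R_n(\pi, \Sigma_0)
    = \mathbb{E}_{\Sigma_0}\!\left[\,\mathbb{E}^{\pi}\!\left(\lVert \Sigma - \Sigma_0 \rVert^2 \,\middle|\, X_1,\dots,X_n\right)\right].
\]
% The Bayesian minimax rate is the fastest rate r_n attainable by any prior:
\[
  \inf_{\pi}\, \sup_{\Sigma_0 \in \mathcal{C}_p} R_n(\pi, \Sigma_0) \;\asymp\; r_n .
\]
```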

Cited by 20 publications (31 citation statements) · References 30 publications

“…for all sufficiently large $n$ and some constant $C_\lambda > 0$, where the last inequality follows from Lemma B.7 in Lee and Lee (2018). Also note that $(n^{-1}X_n^T X_n)^{-1}_{P^{(j)}_{0,l}} - (\Sigma_{0,P^{(j)}_{0,l}}$ …”
Section: Appendix A: Proofs of Main Theorems
confidence: 91%
“…then $E^{\pi}((\Sigma_{P^{(j)}_{0,l}}$ … on the event $\tilde{N}_{nj}(C_\lambda)^c$, for some positive constants $c_1$ and $c_2$ depending only on $C_\lambda$. We note here that we are using a different parametrization for the Wishart and inverse Wishart distributions compared to Lee and Lee (2018). Moreover, by Lemma B.7 in Lee and Lee (2018) and Condition (B4), …”
Section: Appendix A: Proofs of Main Theorems
confidence: 99%
“…(iii) If $\gamma(k) = Ck^{-\alpha}$ for some constants $\alpha > 0$ and $C > 0$, then we have $\inf_{\hat{\Omega}_n} \sup_{\Omega_{0,n} \in \mathcal{U}(\varepsilon_0, \gamma)} E_{0n} \lVert \hat{\Omega}_n - \Omega_0 \rVert$ … Remark: Since a frequentist minimax lower bound is also a P-loss minimax lower bound, Theorem 3.1 implies a P-loss minimax lower bound. For the proof of this argument, see Lee and Lee (2017). To the best of our knowledge, there is no frequentist minimax lower bound result in this setting. The estimation of a precision matrix with a polynomially banded Cholesky factor under the spectral norm was studied by Bickel and Levina (2008b), but they did not consider the minimax lower bound.…”
confidence: 94%
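One standard way to see the quoted remark that a frequentist minimax lower bound is also a P-loss lower bound (a sketch under assumed notation, not the cited proof): for the convex loss $\lVert \cdot \rVert^2$, Jensen's inequality bounds the frequentist risk of the posterior mean by the P-loss, so the infimum over estimators is no larger than the infimum over priors.

```latex
% Sketch under assumed notation; see Lee and Lee (2017) for the actual proof.
\[
  \mathbb{E}_{0n}\,\bigl\lVert \hat{\Omega}_{\pi} - \Omega_0 \bigr\rVert^2
    \;\le\;
  \mathbb{E}_{0n}\!\left[\,\mathbb{E}^{\pi}\!\left(\lVert \Omega - \Omega_0 \rVert^2 \,\middle|\, X_n\right)\right],
  \qquad
  \hat{\Omega}_{\pi} := \mathbb{E}^{\pi}(\Omega \mid X_n),
\]
% by Jensen's inequality; taking inf/sup on both sides gives
\[
  \inf_{\hat{\Omega}_n} \sup_{\Omega_0} \mathbb{E}_{0n}\bigl\lVert \hat{\Omega}_n - \Omega_0 \bigr\rVert^2
    \;\le\;
  \inf_{\pi} \sup_{\Omega_0} \mathbb{E}_{0n}\!\left[\,\mathbb{E}^{\pi}\!\left(\lVert \Omega - \Omega_0 \rVert^2 \,\middle|\, X_n\right)\right],
\]
% so any lower bound on the left (frequentist) side also bounds the right (P-loss) side.
```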
“…Consider an empirical Bayes approach that uses priors (4) and (5) with $k$ instead of imposing a prior on $k$. This empirical Bayes method facilitates easy implementation when estimation of the Cholesky factor or the precision matrix is of interest. To assess the performance, we adopt the P-loss convergence rate used by Castillo (2014) and Lee and Lee (2018). Corollary .1 presents the P-loss convergence rate of the empirical Bayes approach with respect to the Cholesky factor under the matrix $\ell_\infty$-norm.…”
confidence: 99%
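For concreteness, here is a minimal numerical sketch of the matrix $\ell_\infty$-norm (maximum absolute row sum) used above to measure the Cholesky-factor error; the factor, the perturbation, and the function name below are hypothetical illustrations, not code from the cited papers.

```python
import numpy as np

def matrix_linf_norm(A):
    """Matrix l_inf (operator) norm: max_i sum_j |a_ij|, the maximum
    absolute row sum, used here to measure Cholesky-factor error."""
    return np.abs(A).sum(axis=1).max()

# Hypothetical illustration: a "true" lower-triangular Cholesky factor
# and a small perturbation standing in for an empirical Bayes estimate.
rng = np.random.default_rng(0)
p = 5
L_true = np.tril(rng.standard_normal((p, p)))
np.fill_diagonal(L_true, np.abs(np.diag(L_true)) + 1.0)  # keep diagonal positive
L_hat = L_true + 0.01 * np.tril(rng.standard_normal((p, p)))

print(matrix_linf_norm(L_hat - L_true))  # estimation error in the l_inf norm
```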