2004
DOI: 10.1016/s0047-259x(03)00096-4

A well-conditioned estimator for large-dimensional covariance matrices

Abstract: Many economic problems require a covariance matrix estimator that is not only invertible, but also well-conditioned (that is, inverting it does not amplify estimation error). For large-dimensional covariance matrices, the usual estimator (the sample covariance matrix) is typically not well-conditioned and may not even be invertible. This paper introduces an estimator that is both well-conditioned and more accurate than the sample covariance matrix asymptotically. This estimator is distribution-free and has a sim…

Cited by 2,318 publications (2,374 citation statements)
References 23 publications
“…These differ in the functions utilized and resulting properties, e.g., whether the original order of eigenvalues is maintained or whether the resulting estimates are guaranteed to be nonnegative; see Muirhead (1987) for a review of early work and Srivastava and Kubokawa (1999) for more recent references. Minimizing the squared error loss to determine an optimal amount of shrinkage, Ledoit and Wolf (2004) derived an estimator that regresses sample eigenvalues toward their mean and yields a weighted combination of the sample covariance matrix and an identity matrix. While this has seen diverse applications, including the analysis of high-dimensional genomic data (Schäfer and Strimmer 2005), Daniels and Kass (2001) reported overshrinkage of the smallest roots when eigenvalues were spread far apart and suggested shrinking the log sample eigenvalues toward their posterior mean as an alternative.…”
Section: Bias Due To Sampling Variance
mentioning
confidence: 99%
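The equivalence described in the statement above (regressing sample eigenvalues toward their mean versus taking a weighted combination of the sample covariance matrix and the identity) can be sketched in a few lines of NumPy. This is an illustration only: the shrinkage intensity `alpha` is left as a free parameter rather than the optimal data-driven weight derived by Ledoit and Wolf (2004), and the function and variable names are ours.

```python
import numpy as np

def shrink_toward_identity(S, alpha):
    """Shrink a sample covariance matrix S toward a scaled identity.

    Returns (1 - alpha) * S + alpha * mu * I, where mu = trace(S) / p is the
    mean of the sample eigenvalues. Equivalently, every sample eigenvalue
    lambda_i is regressed toward mu, i.e. replaced by
    (1 - alpha) * lambda_i + alpha * mu, while the sample eigenvectors are
    left unchanged.
    """
    p = S.shape[0]
    mu = np.trace(S) / p
    return (1.0 - alpha) * S + alpha * mu * np.eye(p)

# Illustration: the eigenvalues of the shrunk matrix are exactly the shrunk
# sample eigenvalues.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))   # n = 50 observations, p = 10 variables
S = np.cov(X, rowvar=False)
alpha = 0.3                          # illustrative intensity, not the optimal one
Sigma = shrink_toward_identity(S, alpha)

lam = np.linalg.eigvalsh(S)
assert np.allclose(np.sort(np.linalg.eigvalsh(Sigma)),
                   np.sort((1 - alpha) * lam + alpha * lam.mean()))
```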
“…The original idea is due to Stein (1956), where it was applied to the estimation of the mean vector. Applications to variance matrix estimation include Jorion (1986), Muirhead (1987) and Ledoit and Wolf (2003, 2004a, 2004b, 2012). Intuitively, the role of the shrinkage parameter is to balance the estimation error coming from the ill-conditioned variance matrix and the specification error associated with the target matrix.…”
mentioning
confidence: 99%
“…[29] uses the inverse of this matrix, and thus cannot be used in combination with standardization based on empirical estimates of the mean and variance. To overcome this problem, we regularized the SCM via the well-known Ledoit-Wolf (LW) formula [31]. Briefly, given a SCM $S$, the corresponding regularized covariance matrix is given by $\Sigma^{*} = (1-\alpha)\,S + \alpha\,\frac{\operatorname{tr}(S)}{p}\,I_p$, where $\operatorname{tr}(S)$ is the trace of $S$, $p$ is the number of variants, $I_p$ is the identity matrix, and $\alpha$ is the optimal shrinkage coefficient that minimizes the mean squared error between the estimated and true covariance matrix, whose determination is described by Ledoit and Wolf [31].…”
Section: Comparison of MAP and Posterior Mean Estimators
mentioning
confidence: 99%
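For reference, this kind of regularization is available off the shelf: scikit-learn's `LedoitWolf` estimator computes the shrinkage coefficient from the data and returns the shrunk covariance matrix. The sketch below is only an assumed, generic usage of that library, not the pipeline of the citing study; variable names and the simulated data are ours.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# Sample covariance of a short, relatively high-dimensional sample:
# invertible here (n > p) but typically badly conditioned.
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 30))        # n = 40 samples, p = 30 variables
S = np.cov(X, rowvar=False)

# Ledoit-Wolf shrinkage: scikit-learn estimates the shrinkage coefficient
# that minimizes expected squared error and returns the shrunk covariance,
# a weighted combination of the sample covariance and a scaled identity.
lw = LedoitWolf().fit(X)
Sigma = lw.covariance_

print("estimated shrinkage coefficient:", lw.shrinkage_)
print("condition number, sample covariance:", np.linalg.cond(S))
print("condition number, Ledoit-Wolf      :", np.linalg.cond(Sigma))
```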