2016
DOI: 10.48550/arxiv.1611.00798
Preprint

Cross-validation based Nonlinear Shrinkage

Abstract: Many machine learning algorithms require precise estimates of covariance matrices. The sample covariance matrix performs poorly in high-dimensional settings, which has stimulated the development of alternative methods, the majority based on factor models and shrinkage. Recent work of Ledoit and Wolf has extended the shrinkage framework to Nonlinear Shrinkage (NLS), a more powerful covariance estimator based on Random Matrix Theory. Our contribution shows that, contrary to claims in the literature, cross-valida…

Cited by 6 publications (23 citation statements)
References 12 publications (22 reference statements)
“…We compare the out-of-sample risk computed from BAHC and several other well-known methods: the classic Ledoit and Wolf linear shrinkage method (LW henceforth) [2] and the more recent nonlinear shrinkage approach based on the inversion of the QuEST function (QuEST) [7]. We also include the Cross-Validated eigenvalue shrinkage (CV) [8] and HCAL [5], denoted by <.…”
Section: Risk Minimization (mentioning, confidence: 99%)
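Such out-of-sample comparisons are usually set up by estimating the covariance on an in-sample window, building the global minimum-variance portfolio from that estimate, and measuring the portfolio's realized variance on the following window. A minimal sketch of that protocol (the 50-asset toy data, the 250/250 window split, and the plain sample-covariance filter are illustrative assumptions, not the setup of the cited study):

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance weights w proportional to inv(Sigma) @ 1, normalized to sum to one."""
    w = np.linalg.pinv(cov) @ np.ones(cov.shape[0])   # pinv guards against a singular estimate
    return w / w.sum()

def out_of_sample_risk(returns_in, returns_out, covariance_filter):
    """Estimate the covariance in-sample with any filter, realize the risk out of sample."""
    w = min_variance_weights(covariance_filter(returns_in))
    realized_cov = np.cov(returns_out, rowvar=False)
    return float(w @ realized_cov @ w)

# Illustrative use with the plain sample covariance as the filter:
rng = np.random.default_rng(0)
returns = rng.standard_normal((500, 50))              # 500 periods, 50 assets (toy data)
risk = out_of_sample_risk(returns[:250], returns[250:],
                          lambda x: np.cov(x, rowvar=False))
```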
“…The usual setting is to have n objects and t features and to compute the correlation matrix between these n objects. Recent results on Rotationally Invariant Estimators [6] propose non-linear shrinkage methods able to correct the eigenvalue spectrum of covariance matrices optimally: the inversion of the QuEST function [7], the Cross-Validated (CV) eigenvalue shrinkage [8] and the IW-regularization [1], the latter being valid only in the low dimensional regime q = n/t < 1, i.e., when there are more features than objects. Eigenvector filtering is more complex.…”
(mentioning, confidence: 99%)
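All of these eigenvalue-correcting recipes share the same skeleton: eigendecompose the sample covariance, substitute cleaned eigenvalues produced by the chosen method (QuEST, CV, IW-regularization, ...), and rebuild the matrix with the original eigenvectors. A minimal sketch of that skeleton (the no-op cleaner is a placeholder assumption; a real method would supply its own):

```python
import numpy as np

def rie_estimate(sample_cov, eigenvalue_cleaner):
    """Rotationally invariant estimator: keep the sample eigenvectors,
    replace the eigenvalues with whatever the chosen cleaner returns."""
    vals, vecs = np.linalg.eigh(sample_cov)     # ascending sample eigenvalues
    xi = eigenvalue_cleaner(vals)               # e.g. QuEST, CV, IW-regularization
    return (vecs * xi) @ vecs.T                 # U @ diag(xi) @ U.T

# Placeholder cleaner (returns the eigenvalues unchanged), just to show the interface:
identity_cleaner = lambda vals: vals
```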
“…A widely applied method is linear shrinkage [3]. More recently, several methods of non-linear shrinkage (NLS) have been introduced [4,5,6,7]. They all belong to the same family of estimators, known as Rotationally Invariant Estimators (RIEs): they filter the covariance matrix eigenvalues while keeping its eigenvectors untouched.…”
Section: Introduction (mentioning, confidence: 99%)
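Linear shrinkage is the simplest member of that family: a convex combination of the sample covariance with a scaled-identity target, which pulls every sample eigenvalue toward the average eigenvalue while leaving the eigenvectors untouched. A minimal sketch, with the shrinkage intensity left as a user-chosen parameter rather than the data-driven optimal value derived by Ledoit and Wolf:

```python
import numpy as np

def linear_shrinkage(sample_cov, intensity):
    """Shrink toward a scaled identity: (1 - rho) * S + rho * (tr(S)/n) * I, rho in [0, 1].
    Ledoit-Wolf derive rho from the data; here it is a free parameter."""
    n = sample_cov.shape[0]
    target = np.trace(sample_cov) / n * np.eye(n)
    return (1.0 - intensity) * sample_cov + intensity * target
```

Because the identity target shares the sample eigenvectors, only the eigenvalues move, which is why linear shrinkage is itself a member of the RIE family; nonlinear shrinkage simply allows each eigenvalue to move by a different amount.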
“…In a stationary setting, the RIE that uses the Oracle eigenvalues can be shown to minimise the Frobenius distance between the filtered covariance matrix and the true one, or, in a dynamical context, the realized covariance matrix. A truly remarkable result shows how to build, from data in the calibration window only, an approximation of the Oracle eigenvalues that converges to the Oracle estimator when the system is very large, stationary, and the data are not too heavy-tailed [4,5,6,7]. We denote this estimator by osRIE (optimal stationary RIE).…”
Section: Introduction (mentioning, confidence: 99%)
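The Oracle eigenvalues referred to here are, for each sample eigenvector u_i, the quadratic form u_i' C u_i taken with the true (or realized) covariance C; plugging them into the RIE construction gives the estimator closest to C in Frobenius norm among all estimators that keep the sample eigenvectors. A toy numerical check of that claim (the Gaussian data and the diagonal "true" covariance are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 50, 100
C = np.diag(np.linspace(1.0, 5.0, n))               # "true" covariance (toy choice)
X = rng.multivariate_normal(np.zeros(n), C, size=t)
S = np.cov(X, rowvar=False)                          # sample covariance

_, U = np.linalg.eigh(S)
xi_oracle = np.diag(U.T @ C @ U)                     # xi_i = u_i' C u_i
oracle_rie = (U * xi_oracle) @ U.T

# The Oracle RIE is never farther from C (in Frobenius norm) than the sample covariance:
assert np.linalg.norm(oracle_rie - C) <= np.linalg.norm(S - C)
```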
“…Remarkable recent progress has led to the proof that, provided n < t and the system is stationary, the Rotationally Invariant Estimator (RIE) (Bun et al. 2016) converges to the oracle estimator (which knows the realized correlation matrix) at fixed ratio q = t/n in the large-system limit n, t → ∞. In practice, computing the RIE is far from trivial for finite n, i.e., for sparse eigenvalue densities; several numerical methods address this problem, such as QuEST (Ledoit, Wolf et al. 2012), Inverse Wishart regularisation (Bun, Bouchaud, and Potters 2017), or the cross-validated approach (CV hereafter) (Bartz 2016). Note that these methods only modify the eigenvalues and keep the empirical eigenvectors intact.…”
Section: Introduction (mentioning, confidence: 99%)
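The cross-validated approach mentioned here (and studied in the paper above) replaces the unobservable Oracle quadratic form u_i' C u_i with an out-of-fold estimate: eigenvectors are taken from the training folds and each eigenvalue is estimated on the held-out fold, then averaged over folds. A minimal sketch of that idea (fold count, averaging scheme, and any post-processing such as eigenvalue ordering are illustrative assumptions; published variants differ in these details, so this is not the authors' exact algorithm):

```python
import numpy as np

def cv_eigenvalues(X, n_folds=10):
    """Cross-validated eigenvalue estimates: eigenvectors from the training
    folds, quadratic forms from the held-out fold, averaged over folds."""
    t, n = X.shape
    xi = np.zeros(n)
    for test in np.array_split(np.arange(t), n_folds):
        train = np.setdiff1d(np.arange(t), test)
        _, U = np.linalg.eigh(np.cov(X[train], rowvar=False))
        S_test = np.cov(X[test], rowvar=False)
        xi += np.diag(U.T @ S_test @ U)              # out-of-fold u_i' S u_i
    return xi / n_folds

def cv_shrinkage_estimator(X, n_folds=10):
    """Plug the CV eigenvalues into the eigenvectors of the full-sample covariance."""
    _, U = np.linalg.eigh(np.cov(X, rowvar=False))
    return (U * cv_eigenvalues(X, n_folds)) @ U.T

# Illustrative use on toy data:
rng = np.random.default_rng(2)
Sigma_cv = cv_shrinkage_estimator(rng.standard_normal((400, 50)))
```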