2008
DOI: 10.1214/07-aoas139

Sparse estimation of large covariance matrices via a nested Lasso penalty

Abstract: The paper proposes a new covariance estimator for large covariance matrices when the variables have a natural ordering. Using the Cholesky decomposition of the inverse, we impose a banded structure on the Cholesky factor, and select the bandwidth adaptively for each row of the Cholesky factor, using a novel penalty we call nested Lasso. This structure has more flexibility than regular banding, but, unlike regular Lasso applied to the entries of the Cholesky factor, results in a sparse estimator for the inverse…
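The construction sketched in the abstract can be made concrete. In the modified Cholesky decomposition, Sigma^{-1} = T' D^{-1} T, where T is unit lower triangular and row i of T holds the negated coefficients of regressing variable i on its predecessors in the ordering; banding T zeroes out all but the k nearest predecessors. The sketch below uses a single fixed bandwidth k for every row — the paper's contribution is instead to select the bandwidth adaptively per row via the nested Lasso penalty, which this toy code does not implement. All names and the toy setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def banded_cholesky_estimate(X, k):
    """Sketch of a banded modified-Cholesky estimate of Sigma^{-1}.

    Sigma^{-1} = T' D^{-1} T, with T unit lower triangular: row i of T
    holds the negated coefficients of regressing variable i on its
    predecessors, and D holds the residual variances.  Here every row
    is truncated to a fixed bandwidth k (the paper selects each row's
    bandwidth adaptively with the nested Lasso penalty instead).
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)              # center each variable
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Xc[:, 0].var()                # first variable: no predecessors
    for i in range(1, p):
        lo = max(0, i - k)               # keep only the k closest predecessors
        Z = Xc[:, lo:i]
        coef, *_ = np.linalg.lstsq(Z, Xc[:, i], rcond=None)
        T[i, lo:i] = -coef               # banded row of the Cholesky factor
        d[i] = (Xc[:, i] - Z @ coef).var()
    omega = T.T @ np.diag(1.0 / d) @ T   # sparse estimate of Sigma^{-1}
    return T, d, omega
```

Because T is unit lower triangular and the residual variances in D are positive, the resulting estimate of Sigma^{-1} is guaranteed positive definite; with k = 1 this reduces to fitting an AR(1)-style dependence along the variable ordering.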

Cited by 168 publications (173 citation statements). References 34 publications.
“…This is a well-known problem in statistics [14], [15], giving rise to a broad range of literature on alternative estimators, e.g., [15], [16], [17], [18], [19], [20], [21], [22], [23]. The quality of a maximum likelihood estimate may be insufficient, especially for high-dimensional spaces, see e.g.…”
Section: B. Concerning the Covariance Matrix Adaptation
confidence: 99%
“…This paper focuses on the noise-less test suite which contains 24 functions [30]. They can be divided into four classes: separable functions (function ids 1-5), functions with low/moderate conditioning (ids 6-9), functions with high conditioning (ids 10-14), and two groups of multimodal functions (ids 15-24). Among the unimodal functions with only one optimal point, there are separable functions given by the general formula…”
Section: A. Test Suite
confidence: 99%
“…There are several papers recently proposed in the literature to tackle this problem, for example, Bickel and Levina (2008), Levina et al. (2008), Fan et al. (2008), Wang and Zou (2010), Bai and Shi (2011), Fan et al. (2011), Fan et al. (2012), Fan et al. (2012a), Hautsch et al. (2009), Fan et al. (2013), and Lunde et al. (2013), among many others. Our goal is to build an econometric methodology which will be used to construct dynamic models to forecast large covariance matrices estimated elsewhere.…”
Section: Introduction
confidence: 99%
“…This unconstrained reparameterization and its statistical interpretability make it easy to incorporate covariates in covariance modeling and to cast the joint modeling of mean and covariance into the generalized linear model framework. The methodology has proved useful in the recent literature; see, for example, Pourahmadi and Daniels (2002), Pan and MacKenzie (2003), Ye and Pan (2006), Daniels (2006), Huang et al. (2006), Levina et al. (2008), Yap et al. (2009), and Lin and Wang (2009).…”
Section: Introduction
confidence: 99%