2010
DOI: 10.1007/s12532-010-0020-6
An inexact interior point method for L1-regularized sparse covariance selection

Abstract: Sparse covariance selection problems can be formulated as log-determinant (log-det) semidefinite programming (SDP) problems with large numbers of linear constraints. Standard primal-dual interior-point methods that are based on solving the Schur complement equation would encounter severe computational bottlenecks if they are applied to solve these SDPs. In this paper, we consider a customized inexact primal-dual path-following interior-point algorithm for solving large scale log-det SDP problems arising from s…
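The log-det SDP the abstract refers to is, in its most common form, the L1-regularized maximum-likelihood estimation of a sparse inverse covariance matrix. The notation below is the standard one for this problem class, not quoted verbatim from the paper:

```latex
\min_{X \succ 0} \; -\log\det X + \langle S, X \rangle + \rho \, \|X\|_1,
\qquad \|X\|_1 = \sum_{i,j} |X_{ij}|,
```

where $S$ is the sample covariance matrix and $\rho > 0$ is the sparsity-inducing penalty weight (some variants penalize only the off-diagonal entries of $X$).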

Cited by 89 publications (116 citation statements)
References 43 publications
“…Following experiments done by [14] and [24], we use the biology datasets preprocessed by [16] to compare the performance of our algorithm with EM, in the hypothetical case that values were missing from data. This is an interesting case in practice, as collecting hundreds of biological parameters for each experiment may become expensive.…”
Section: Experiments On Synthetic Datasets
confidence: 99%
“…Unfortunately, it is impossible to solve (1.2) efficiently on a common PC via IPMs when the problem dimension is large, say more than 200. As a result, many customized algorithms for solving (1.2) and related problems have been designed in the literature, e.g., the block coordinate descent method [21, 2, 61], the projected subgradient method [16], Nesterov's first-order methods [41, 42] and their variants [2, 34, 35], the alternating direction method (ADM) [60], a Newton-CG based proximal point algorithm (PPA) [53], and an inexact IPM with effective preconditioners [33]. In general, first-order algorithms (block coordinate descent, projected subgradient, ADM, Nesterov's methods and their variants) are easily implementable and fast at obtaining low/moderate accuracy solutions.…”
Section: ⪰ 0
confidence: 99%
“…Several efficient optimization techniques are available for solving this problem. Examples include GLasso (Friedman et al., 2008), PSM (Duchi et al., 2008a), IPM (Li and Toh, 2010), SINCO (Scheinberg and Rish, 2010), ADMM (Yuan, 2009), and QUIC (Hsieh et al., 2011).…”
Section: Sparse Estimation Of GGM
confidence: 99%
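The solvers listed in the statement above (GLasso, QUIC, the paper's inexact IPM, etc.) all target the same L1-regularized inverse covariance estimation problem. As a point of reference, a minimal sketch of solving that problem with scikit-learn's GraphicalLasso — a coordinate-descent solver in the same family, not the paper's interior-point method — on illustrative random data:

```python
# Sketch: L1-regularized sparse inverse covariance estimation with
# scikit-learn's GraphicalLasso. The data, dimension, and alpha value
# are illustrative choices, not taken from the paper's experiments.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# 200 samples from a 5-dimensional standard Gaussian (true precision = I).
X = rng.standard_normal((200, 5))

# alpha plays the role of the L1 penalty weight (rho in the formulation).
model = GraphicalLasso(alpha=0.2)
model.fit(X)

precision = model.precision_  # estimated sparse inverse covariance matrix
# The L1 penalty drives many off-diagonal entries of the estimate to zero.
print("nonzero off-diagonal entries:",
      np.count_nonzero(precision - np.diag(np.diag(precision))))
```

Larger alpha yields a sparser precision estimate; the customized solvers cited above differ mainly in how they scale this computation to high-dimensional problems.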
“…In several previous studies, synthetic sparse precision matrices are generated in a naive manner, that is, just adding a properly scaled identity matrix to a sparse symmetric matrix so that the resulting matrix is sparse and positive definite (Banerjee et al., 2008; Wang et al., 2009; Li and Toh, 2010). In our simulations, we take a different approach to generating a sparse precision matrix for compatibility with the next step.…”
Section: Appendix B1 Generation Of A Sparse Precision Matrix
confidence: 99%