2019
DOI: 10.1088/1361-6420/ab1af3

Sparse inverse covariance matrix estimation via the $\ell_{0}$-norm with Tikhonov regularization

Abstract: Sparse inverse covariance matrix estimation is a fundamental problem in a Gaussian network model and has attracted much attention in the last decade. A widely used approach for estimating the sparse inverse covariance matrix is regularized negative log-likelihood minimization, where the regularization term promotes the sparsity of the inverse covariance matrix. Although the $\ell_{1}$-norm is the most popular sparsity-promoting function due to its convexity, it may not produce sufficiently accurate estimates. …
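To make the framework concrete, here is a minimal sketch of the $\ell_{1}$-regularized variant of this negative log-likelihood minimization (the graphical lasso), using scikit-learn's GraphicalLasso estimator. The data, dimensions, and alpha value are illustrative assumptions; the paper's own $\ell_{0}$ penalty with Tikhonov regularization is a different, nonconvex formulation that this estimator does not implement.

```python
# Minimal sketch of regularized negative log-likelihood estimation with the
# convex l1 penalty (graphical lasso), as a stand-in for the framework the
# abstract describes. The paper's l0 + Tikhonov penalty is NOT implemented
# here; alpha, n, and p are illustrative choices.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))   # n = 200 samples, p = 10 variables

model = GraphicalLasso(alpha=0.1)    # alpha weights the l1 penalty
model.fit(X)

Theta = model.precision_             # estimated (sparse) inverse covariance
print(np.count_nonzero(np.abs(Theta) > 1e-8), "nonzero entries out of", Theta.size)
```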

Cited by 4 publications (3 citation statements)
References 37 publications (140 reference statements)
“…A review of recent methodological papers showed that the Erdős-Rényi [22-24, 32-36], Watts-Strogatz [15, 33, 35, 37, 38], and Barabási-Albert [23-25, 32, 33, 35, 37, 39-46] models were most commonly used to generate networks, so these will form the basis of comparison for the proposed generator. Block diagonal network structures were also found in some papers [47-49], but these are used for specific comparisons rather than assessing general performance.…”
Section: Previous Work
confidence: 99%
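For reference, the three random-graph families named in this excerpt can be generated with networkx; the node count and model parameters below are illustrative assumptions, not values taken from the cited papers.

```python
# Illustrative generation of the three random-graph models named above,
# using networkx; all parameter values are assumptions for demonstration.
import networkx as nx

n = 100                                       # nodes (illustrative)
er = nx.erdos_renyi_graph(n, p=0.05)          # Erdős-Rényi: independent edges
ws = nx.watts_strogatz_graph(n, k=4, p=0.1)   # Watts-Strogatz: small-world rewiring
ba = nx.barabasi_albert_graph(n, m=2)         # Barabási-Albert: preferential attachment

for name, g in (("ER", er), ("WS", ws), ("BA", ba)):
    print(name, g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```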
“…graphical Lasso (Jerome et al, 2007), or gLasso, and a recent Penalized Likelihood method with Tikhonov regularization under an $\ell_0$ constraint (Liu and Zhang, 2019), or PLT$_0$. Graphical lasso estimates $\Sigma^{-1}$ by
$$\min_{X} \left\{ -\log\det X + \operatorname{tr}(SX) + \lambda \|X\|_{1} \right\}$$
over non-negative definite matrices $X$, where $\lambda > 0$ is the regularization parameter.…”
Section: Simulation
confidence: 99%
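A hedged numerical reading of the objective above: the function below evaluates $-\log\det X + \operatorname{tr}(SX) + \lambda\|X\|_{1}$ (with the elementwise $\ell_1$ norm) in numpy. The covariance $S$ and trial point $X$ are illustrative placeholders, not data from either paper.

```python
# Evaluate the gLasso objective -log det X + tr(S X) + lam * ||X||_1
# (elementwise l1 norm); S and X below are illustrative placeholders.
import numpy as np

def glasso_objective(X, S, lam):
    sign, logdet = np.linalg.slogdet(X)       # stable log-determinant
    if sign <= 0:
        raise ValueError("X must be positive definite")
    return -logdet + np.trace(S @ X) + lam * np.abs(X).sum()

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
S = np.cov(A, rowvar=False)                   # sample covariance (5 x 5)
X = np.linalg.inv(S + 0.1 * np.eye(5))        # a positive definite trial point
print(glasso_objective(X, S, lam=0.1))
```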
“…In the discrete setting, also other forms of regularization have been adopted, e.g. sparsity on the inverse covariance matrix (Friedman et al 2008, Liu and Zhang 2019).…”
Section: The Case of Indirectly Observed Functions
confidence: 99%