2018
DOI: 10.1016/j.csda.2018.07.011
An efficient algorithm for sparse inverse covariance matrix estimation based on dual formulation

Cited by 16 publications (15 citation statements)
References 37 publications
“…Denote J_{cT} := (I + cT)^{-1}, i.e., the resolvent operator; then the DR splitting (DRs) [4] for solving (12) takes the form…”
Section: Preliminaries
confidence: 99%
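The DR iteration referenced in the excerpt above alternates two resolvent (proximal) evaluations and a reflection step. As a minimal sketch, assuming a toy splitting f(x) = ||x||_1 and g(x) = 0.5||x - b||^2 (these operators are illustrative choices, not the ones from the cited paper):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, i.e., the resolvent J_{cT} for T = subgradient of |.|."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def douglas_rachford(prox_f, prox_g, z0, c=1.0, iters=500):
    """Generic DR splitting: x = J_{cA}(z), y = J_{cB}(2x - z), z <- z + y - x."""
    z = z0.copy()
    for _ in range(iters):
        x = prox_f(z, c)
        y = prox_g(2.0 * x - z, c)
        z = z + y - x
    return prox_f(z, c)

# Toy problem: min_x ||x||_1 + 0.5*||x - b||^2, whose minimizer is soft_threshold(b, 1).
b = np.array([3.0, -0.5, 2.0])
prox_f = soft_threshold                        # resolvent of the l1 term
prox_g = lambda v, c: (v + c * b) / (1 + c)    # resolvent of the quadratic term
x_star = douglas_rachford(prox_f, prox_g, np.zeros(3))
# x_star is soft_threshold(b, 1) = [2, 0, 1]
```

Here each `prox` call plays the role of the resolvent J_{cT} = (I + cT)^{-1} from the quote, with T the subdifferential of the corresponding term.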
“…In this section, we construct a series of numerical experiments on simulated convex composite optimization problems and sparse inverse covariance matrix estimation problems to demonstrate the feasibility and efficiency of the algorithm MGADMM. We also test against the state-of-the-art algorithms sPADMM of Li et al. [11] and sGS-ADMM of Li & Xiao [12] to evaluate the algorithms' performance. All experiments are performed under Microsoft Windows 10 with MATLAB R2018a, on a desktop PC with an Intel Core i7-8565 CPU at 1.80 GHz and 8 GB of memory.…”
Section: Numerical Experiments
confidence: 99%
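The solvers compared above (MGADMM, sPADMM, sGS-ADMM) are ADMM variants for the sparse inverse covariance estimation problem min_X -logdet(X) + ⟨S, X⟩ + ρ||X||_1. A plain two-block ADMM sketch of that underlying problem follows; it is not any of the cited algorithms, and penalizing only off-diagonal entries is a modeling assumption made here:

```python
import numpy as np

def sparse_inv_cov_admm(S, rho=0.1, t=1.0, iters=200):
    """Two-block ADMM for min_X -logdet(X) + <S, X> + rho*||X_offdiag||_1.

    The X-update has a closed form via eigendecomposition of t*(Z - U) - S;
    the Z-update is entrywise soft-thresholding (off-diagonals only, by assumption).
    """
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(iters):
        # X-update: solve t*X - X^{-1} = t*(Z - U) - S spectrally.
        d, Q = np.linalg.eigh(t * (Z - U) - S)
        x = (d + np.sqrt(d**2 + 4.0 * t)) / (2.0 * t)   # positive root, so X > 0
        X = (Q * x) @ Q.T
        # Z-update: soft-threshold off-diagonals of X + U at rho/t.
        V = X + U
        Z = np.sign(V) * np.maximum(np.abs(V) - rho / t, 0.0)
        np.fill_diagonal(Z, np.diag(V))
        # Dual ascent step.
        U = U + X - Z
    return X, Z

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
S = np.cov(A, rowvar=False)     # sample covariance matrix
X, Z = sparse_inv_cov_admm(S)   # X is symmetric positive definite
```

The eigendecomposition step is what makes the X-update cheap: the optimality condition t*X - X^{-1} = t*(Z - U) - S is diagonalized by the eigenvectors of the right-hand side, leaving a scalar quadratic per eigenvalue.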
“…Wang [17] first generated an initial point using the proximal augmented Lagrangian method, then applied the Newton-CG augmented Lagrangian method to problems with an additional convex quadratic term in (P). Li and Xiao [13] employed the symmetric Gauss-Seidel-type ADMM within the same framework as [18]. A more recent work by Zhang et al. [21] shows that (P) with simple constraints such as X_{ij} = 0 for (i, j) ∈ Ω can be converted into a more computationally tractable problem for large values of ρ.…”
Section: Introduction
confidence: 99%
“…As shown in [2], model (1.3) has the potential to discover the underlying distribution's structure when noise is present in the sample covariance matrix S. In many applications, block-wise sparsity in the inverse covariance matrix is highly desirable. For this reason, many authors [10,14,17,19] considered the following general estimation model based on the adaptive group lasso…”
Section: Introduction
confidence: 99%
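The block-wise sparsity in the adaptive group lasso model mentioned above is typically enforced through the proximal operator of a weighted l2 norm applied per block, which shrinks each block toward zero and zeroes it entirely when its norm falls below the weight. A minimal sketch, where the block partition and the adaptive weights are illustrative assumptions:

```python
import numpy as np

def group_soft_threshold(v, lam):
    """Prox of lam*||.||_2 on one block: zero the block if ||v||_2 <= lam,
    otherwise scale it toward zero by (1 - lam/||v||_2)."""
    nrm = np.linalg.norm(v)
    if nrm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / nrm) * v

def prox_group_lasso(x, groups, weights):
    """Apply the group-wise prox block by block (one adaptive weight per group)."""
    out = x.copy()
    for g, w in zip(groups, weights):
        out[g] = group_soft_threshold(x[g], w)
    return out

x = np.array([3.0, 4.0, 0.3, -0.4])
groups = [slice(0, 2), slice(2, 4)]   # illustrative block partition
weights = [1.0, 1.0]                  # adaptive weights (assumed equal here)
y = prox_group_lasso(x, groups, weights)
# First block (norm 5) is scaled by 0.8; second block (norm 0.5 <= 1) is zeroed.
```

Unlike the entrywise l1 prox, the block either survives as a whole (rescaled) or vanishes as a whole, which is exactly the group-level sparsity pattern the estimation model targets.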