2013
DOI: 10.1137/120864192
A Proximal Point Algorithm for Log-Determinant Optimization with Group Lasso Regularization

Abstract: We consider the covariance selection problem where variables are clustered into groups and the inverse covariance matrix is expected to have a blockwise sparse structure. This problem is realized via penalizing the maximum likelihood estimation of the inverse covariance matrix by group Lasso regularization. We propose to solve the resulting log-determinant optimization problem by the classical proximal point algorithm (PPA). At each iteration, as it is difficult to update the primal variables directl…
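A central building block of this kind of method is the proximal operator of the group Lasso penalty, which shrinks each off-diagonal block of the matrix jointly (block soft-thresholding). The sketch below is only illustrative and not the authors' implementation: the function name `prox_group_lasso`, the group encoding as index arrays, and the choice to leave diagonal blocks unpenalized are all assumptions for the example.

```python
import numpy as np

def prox_group_lasso(X, groups, tau):
    """Blockwise soft-thresholding: the proximal map of
    tau * sum over off-diagonal blocks of the Frobenius norm ||X_gh||_F.

    X      : square matrix partitioned by the variable clusters
    groups : list of index arrays, one per cluster
    tau    : proximal step size times the regularization weight
    """
    Y = X.copy()
    for i, gi in enumerate(groups):
        for j, gj in enumerate(groups):
            if i == j:
                continue  # assumption: diagonal blocks are not penalized
            B = X[np.ix_(gi, gj)]
            nrm = np.linalg.norm(B)  # Frobenius norm of the block
            # Shrink the whole block toward zero; kill it if its norm <= tau.
            scale = max(0.0, 1.0 - tau / nrm) if nrm > 0 else 0.0
            Y[np.ix_(gi, gj)] = scale * B
    return Y
```

For example, with two singleton groups and `tau = 1.0`, an off-diagonal entry of magnitude 0.5 is set exactly to zero, which is how the blockwise sparsity pattern emerges.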

Cited by 37 publications (25 citation statements) · References 55 publications
“…Recently, the SSN method has received a significant amount of attention due to its success in solving structured convex problems to high accuracy. In particular, it has been successfully applied to solving SDP [90,82], LASSO [54], nearest correlation matrix estimation [64], clustering [76], sparse inverse covariance selection [80], and composite convex minimization [79].…”
Section: The ManPG Algorithm
confidence: 99%
“…It is precisely this reason that inspired us to apply the AL method to solve the QP-Logdet problem. Great success in applying the AL method to large-scale semidefinite programming problems can also be seen in [46,40,41] and the references therein.…”
Section: Introduction
confidence: 92%
“…It can be regarded as an extension of the quadratic semidefinite programming problem (QSDP) and the log-determinant (Logdet) problem, so it shares the structures of both problems, and it goes without saying that the QP-Logdet problem is of considerable interest. The QSDP is a core problem in nonlinear semidefinite programming, which has been considered by Toh [35], Toh, Tütüncü and Todd [36,37], Zhao [45], Jiang, Sun and Toh [14], etc. The Logdet problem has a very important application in covariance selection [5] and has been intensively studied over the past several years, including the work of Dahl, Vandenberghe and Roychowdhury [4], d'Aspremont, Banerjee and El Ghaoui [6], Li and Toh [15], Lu [16,17], Lu and Zhang [18], Olsen, Oztoprak, Nocedal and Rennie [24], Scheinberg, Ma and Goldfarb [30], Scheinberg and Rish [31], Toh [34], Wang, Sun and Toh [40], Yang, Sun and Toh [41], Yang, Shen, Wonka, Lu and Ye [43], Yuan [44], etc. As far as the QP-Logdet problem is concerned, it also arises in many practical applications such as robust simulation of global warming policies [13], speech recognition [39], and so on. Thus the algorithms developed to solve this kind of problem can potentially find wide applications.…”
Section: Introduction
confidence: 99%
“…Yuan [20] also proposed an improved Alternating Direction Method (ADM) to solve the sparse covariance problem by introducing an ADM-oriented reformulation. For more general structured models and problems, Yang et al. [19] enhanced the method in [18] to handle block-structured sparsity, employing an inexact generalized Newton method to solve the dual semismooth subproblem. They demonstrated that regularization using the ‖·‖₂ or ‖·‖∞ norms instead of ‖·‖₁ in (P) is more suitable for such structured models.…”
Section: Introduction
confidence: 99%