2013
DOI: 10.1109/tasl.2013.2263142

Second Order Methods for Optimizing Convex Matrix Functions and Sparse Covariance Clustering

Abstract: A variety of first-order methods have recently been proposed for solving matrix optimization problems arising in machine learning. The premise for utilizing such algorithms is that second order information is too expensive to employ, and so simple first-order iterations are likely to be optimal. In this paper, we argue that second-order information is in fact efficiently accessible in many matrix optimization problems, and can be effectively incorporated into optimization algorithms. We begin by reviewing how …

Cited by 1 publication (1 citation statement) · References 17 publications
“…As discussed in [7], the set of possible direct products that may be formed by any two matrices can be fully specified via the outer, Kronecker, and recently introduced box product. In this paper, for simplicity and concreteness, we restrict our attention to a slightly simpler form of W :…”
Section: Model
confidence: 99%
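For concreteness, two of the direct products mentioned in the citation statement above — the outer and Kronecker products — can be computed directly in NumPy; the example matrices below are illustrative only, and the box product has no standard NumPy routine:

```python
import numpy as np

# Small example matrices (hypothetical, for illustration only).
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Kronecker product: each entry A[i, j] scales a full copy of B,
# producing a (4 x 4) block matrix from two (2 x 2) factors.
K = np.kron(A, B)

# Outer product of the vectorized matrices: vec(A) vec(B)^T,
# also (4 x 4) here.
O = np.outer(A.ravel(), B.ravel())

print(K.shape)  # (4, 4)
print(O.shape)  # (4, 4)
```

Both products assemble a larger matrix from two smaller factors, but arrange the entries differently, which is why a family of such direct products can parameterize structured forms of a matrix W.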