2020
DOI: 10.48550/arxiv.2012.10469
Preprint
On the Efficient Implementation of the Matrix Exponentiated Gradient Algorithm for Low-Rank Matrix Optimization

Abstract: Convex optimization over the spectrahedron, i.e., the set of all real n × n positive semidefinite matrices with unit trace, has important applications in machine learning, signal processing and statistics, mainly as a convex relaxation for optimization with low-rank matrices. It is also one of the most prominent examples in the theory of first-order methods for convex optimization in which non-Euclidean methods can be significantly preferable to their Euclidean counterparts, and in particular the Matrix Expone…

Cited by 3 publications (9 citation statements)
References 16 publications (36 reference statements)
“…Since, as in the works [17,18] mentioned before, which deal with smooth objectives, strict complementarity plays a key role in our analysis, we refer the interested reader to the recent works [16,39,13,20], which also exploit this property for efficient smooth and convex optimization over the spectrahedron. Strict complementarity has also played an instrumental role in two recent and very influential works which used it to prove linear convergence rates for proximal gradient methods [42,14].…”
Section: Additional Related Work
confidence: 99%
“…In our recent work [21], similar results were obtained for smooth objective functions when using non-Euclidean, von Neumann entropy-based gradient methods, a.k.a. matrix exponentiated gradient (MEG) methods [39]. The importance of these methods lies in the fact that they allow the Lipschitz parameters (either of the function or of its gradients) to be measured with respect to the matrix spectral norm, which can lead to significantly better convergence rates than when measuring them with respect to the Euclidean norm.…”
Section: Introduction
confidence: 96%
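
To make the update concrete, below is a minimal sketch of one classical MEG step over the spectrahedron, i.e., the update X_{t+1} = exp(log X_t − η∇f(X_t)) / Tr(exp(log X_t − η∇f(X_t))) of [39], implemented in Python via eigendecompositions. The function name meg_step, the eigenvalue floor, and the stability shift are illustrative assumptions, not code from the cited works.

import numpy as np

def meg_step(X, grad, eta):
    """One full-rank MEG step on the spectrahedron {X : X PSD, Tr(X) = 1}."""
    # Eigendecompose the symmetric iterate so log/exp act on eigenvalues only.
    w, V = np.linalg.eigh(X)
    w = np.clip(w, 1e-12, None)        # floor tiny/negative eigenvalues before the log
    Y = (V * np.log(w)) @ V.T - eta * grad
    Y = 0.5 * (Y + Y.T)                # resymmetrize against floating-point round-off
    s, U = np.linalg.eigh(Y)
    e = np.exp(s - s.max())            # the shift cancels in the ratio; avoids overflow
    return (U * e) @ U.T / e.sum()     # positive semidefinite with unit trace

For example, with f(X) = <C, X> for a symmetric matrix C, the gradient is simply C, and repeated calls X = meg_step(X, C, eta) starting from X = np.eye(n) / n remain on the spectrahedron by construction.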
“…Computing the matrix logarithm and matrix exponential for the MEG update requires computing a full-rank SVD and returns a full-rank matrix, unlike the truncation of the lower eigenvalues in Euclidean projections. Nevertheless, as shown in [21], under a strict complementarity assumption, and when close to an optimal rank-r solution, it is possible to approximate these steps with updates that require only rank-r SVDs, while sufficiently controlling the errors introduced by the approximations. These approximated steps can be written as…”
Section: Introduction
confidence: 99%
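
The exact form of the approximated steps is truncated in the excerpt above, so the sketch below is only an illustrative stand-in: it approximates the normalized matrix exponential exp(Y)/Tr(exp(Y)) by exponentiating just the top-r eigenpairs, computed with a Lanczos-type solver so that only rank-r work is performed. This captures the flavor of the rank-r SVD-based updates described in the quote, but it is not claimed to reproduce the precise approximated steps of [21]; the function name approx_meg_exp is hypothetical.

import numpy as np
from scipy.sparse.linalg import eigsh

def approx_meg_exp(Y, r):
    """Rank-r surrogate for exp(Y)/Tr(exp(Y)), Y symmetric, r < Y.shape[0].

    Reasonable when the trailing eigenvalues of Y sit far below the top r,
    so their exponentials contribute negligibly to the trace -- the regime
    near an optimal rank-r solution under strict complementarity.
    """
    s, U = eigsh(Y, k=r, which='LA')   # top-r eigenpairs (largest algebraic)
    e = np.exp(s - s.max())            # stabilized exponentials of the kept eigenvalues
    return (U * e) @ U.T / e.sum()     # rank-r, PSD, unit trace by construction

Dropping the trailing eigenvalues slightly overweights the kept directions; the quote indicates that [21] bounds exactly this kind of approximation error.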