2013
DOI: 10.1137/11085801x
A Randomized Mirror-Prox Method for Solving Structured Large-Scale Matrix Saddle-Point Problems

Abstract: In this paper, we derive a randomized version of the Mirror-Prox method for solving some structured matrix saddle-point problems, such as the maximal eigenvalue minimization problem. Deterministic first-order schemes, such as Nesterov's Smoothing Techniques or standard Mirror-Prox methods, require the exact computation of a matrix exponential at every iteration, limiting the size of the problems they can solve. Our method allows us to use stochastic approximations of matrix exponentials. We prove that our rand…
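The abstract names the idea of replacing exact matrix exponentials with stochastic approximations but does not spell out the estimator. As a hypothetical illustration of the general idea (not the authors' scheme), note that for a symmetric matrix A and a standard Gaussian vector g, the rank-one sample v vᵀ with v = exp(A/2) g satisfies E[v vᵀ] = exp(A), so averaging a few such samples approximates exp(A) using only matrix-vector products:

```python
import numpy as np
from scipy.sparse.linalg import expm_multiply

def stochastic_expm(A, n_samples=20, rng=None):
    """Stochastic rank-one approximation of expm(A) for symmetric A.

    With g ~ N(0, I) and v = expm(A/2) @ g,
        E[v v^T] = expm(A/2) E[g g^T] expm(A/2) = expm(A),
    so averaging rank-one samples gives an unbiased estimate that needs
    only the *action* of the exponential on vectors, never the full
    matrix exponential. Illustrative sketch only; not the estimator
    analyzed in the paper.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    est = np.zeros((n, n))
    for _ in range(n_samples):
        g = rng.standard_normal(n)
        v = expm_multiply(0.5 * A, g)  # action of exp(A/2) on g
        est += np.outer(v, v)
    return est / n_samples
```

Averaging more samples reduces the variance at the cost of more matrix-vector products, which is exactly the accuracy/cost tradeoff such randomized schemes exploit.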

Cited by 22 publications (27 citation statements)
References 12 publications
“…The resulting MMW algorithm can be interpreted as performing gradient descent in a dual space and using the matrix exponential map to transfer information back to the primal space. To scale this approach to larger problems, researchers have proposed linearization, random projection, sparsification techniques, and stochastic Lanczos quadrature to approximate the matrix exponential [11,7,80,40,41,8,13,61,29]. Even so, the reduction to a sequence of feasibility problems makes this technique impractical for general SDPs.…”
Section: Datasets and Evaluation
Mentioning confidence: 99%
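For context, a single textbook matrix multiplicative weights (MMW) step has exactly the dual-gradient/exponential-map structure this quote describes. The sketch below is generic MMW over the spectrahedron {X ⪰ 0, Tr X = 1}; the expm call is the step that the cited works replace with cheaper randomized approximations:

```python
import numpy as np
from scipy.linalg import expm

def mmw_step(Y, G, eta):
    """One textbook matrix multiplicative weights (MMW) update.

    Y is the running dual iterate, G the current (sub)gradient, and eta
    the step size. The update is a gradient step in the dual space
    followed by the matrix exponential map back onto the primal
    spectrahedron {X : X >= 0, Tr X = 1}. Generic illustration; the
    cited works differ in how this expm is approximated at scale.
    """
    Y = Y - eta * G            # gradient descent in the dual space
    W = expm(Y)                # exponential map back to the primal space
    X = W / np.trace(W)        # normalize onto the spectrahedron
    return Y, X
```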
“…Even so, the reduction to a sequence of feasibility problems makes this technique impractical for general SDPs. We are aware of only one computational evaluation of the MMW idea [13].…”
Section: Datasets and Evaluation
Mentioning confidence: 99%
“…Finally, we remark that the noise of our random estimates of matrix-vector products can be reduced by taking the average of several realizations of the estimate. For more details on this subject, see Juditsky et al. (2013a) and Baes et al. (2013). … Due to w_t = tw, we have Ψ(0) ≥ Ψ(w_t) − qt, and by convexity of ω(·) we have…”
Section: Randomization
Mentioning confidence: 99%
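As a hypothetical concrete instance of the averaging device mentioned in this quote (not the estimator of the cited papers), one can estimate a matrix-vector product A x by importance-sampling columns of A; each draw is unbiased, and averaging m independent draws divides the variance by m:

```python
import numpy as np

def sampled_matvec(A, x, n_samples=50, rng=None):
    """Randomized estimate of A @ x, averaged over several realizations.

    Writes A @ x = sum_j x_j * A[:, j], samples a column index J with
    probability p_j proportional to |x_j| * ||A[:, j]||, and returns the
    importance-weighted column x_J * A[:, J] / p_J. Each draw is
    unbiased, and averaging n_samples independent draws divides the
    variance by n_samples. Hypothetical sampler for illustration only.
    """
    rng = np.random.default_rng(rng)
    col_norms = np.linalg.norm(A, axis=0)
    weights = np.abs(x) * col_norms
    p = weights / weights.sum()
    est = np.zeros(A.shape[0])
    for _ in range(n_samples):
        j = rng.choice(A.shape[1], p=p)
        est += x[j] * A[:, j] / p[j]
    return est / n_samples
```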
“…Subsampling techniques were also used in [d'Aspremont, 2011] to reduce the cost per iteration of stochastic averaging algorithms. Finally, in similar results, Baes et al. [2011] use stochastic approximations of the matrix exponential to reduce the cost per iteration of smooth first-order methods. The complexity tradeoff and algorithms in this last result are different from ours (roughly speaking, a 1/ε term is substituted for the √n term in our bound), but both methods seek to reduce the cost of smooth first-order algorithms for semidefinite programming by using stochastic gradient oracles instead of deterministic ones.…”
Section: Introduction
Mentioning confidence: 99%