2014
DOI: 10.1080/10556788.2013.858155
The spectral bundle method with second-order information

Abstract: The spectral bundle (SB) method was introduced by Helmberg and Rendl [A spectral bundle method for semidefinite programming. SIAM J. Optim. 10 (2000), pp. 673-696] to solve a class of eigenvalue optimization problems that is equivalent to the class of semidefinite programs with the constant trace property. We investigate the feasibility and effectiveness of including full or partial second-order information in the SB method, building on work of Overton [On minimizing the maximum eigenvalue of a symmetric matrix…
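One common way to see the equivalence the abstract refers to: for an SDP whose feasible matrices satisfy tr(X) = a, the dual objective reduces to an eigenvalue function of the form a·λ_max(C − A*(y)) + b⊤y. The sketch below (Python; the helper names, the encoding of the operator A as a list of matrices, and the random test instance are illustrative, not from the paper) shows the basic first-order oracle such eigenvalue-optimization methods are built on: evaluate λ_max and obtain a subgradient from a leading eigenvector.

```python
import numpy as np

def lambda_max_oracle(M):
    """Largest eigenvalue of symmetric M and a subgradient of
    lambda_max at M (outer product of a leading eigenvector)."""
    w, V = np.linalg.eigh(M)        # eigenvalues in ascending order
    v = V[:, -1]                    # eigenvector for the largest eigenvalue
    return w[-1], np.outer(v, v)

def dual_oracle(y, C, A_ops, b, a=1.0):
    """f(y) = a*lambda_max(C - A*(y)) + b^T y and one subgradient.

    A_ops is a list of symmetric matrices A_i (a hypothetical encoding),
    so A*(y) = sum_i y_i * A_i and A(X)_i = <A_i, X>."""
    M = C - sum(yi * Ai for yi, Ai in zip(y, A_ops))
    lam, W = lambda_max_oracle(M)   # W = v v^T is a subgradient of lambda_max
    g = b - a * np.array([np.trace(Ai @ W) for Ai in A_ops])
    return a * lam + b @ y, g

# Tiny random instance, driven by plain subgradient descent.
rng = np.random.default_rng(0)
n, m = 8, 3
C = rng.standard_normal((n, n)); C = (C + C.T) / 2
A_ops = [np.diag(rng.standard_normal(n)) for _ in range(m)]
b = rng.standard_normal(m)
y = np.zeros(m)
for k in range(200):
    f_val, g = dual_oracle(y, C, A_ops, b)
    y -= 0.5 / (k + 1) * g          # diminishing step size
print("final dual value:", dual_oracle(y, C, A_ops, b)[0])
```

A bundle method keeps several such eigenvectors and minimizes a cutting-plane model over their span rather than taking a single subgradient step; the paper studied here augments that model with full or partial second-order information.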

Cited by 17 publications (13 citation statements). References 30 publications.
“…Large scale SDPs pointed research towards first-order approaches, which are more computationally appealing. For linear f , we note among others the work of [76], a provably convergent alternating direction augmented Lagrangian algorithm, and that of Helmberg and Rendl [36], where they develop an efficient first-order spectral bundle method for SDPs with the constant trace property; see also [35] for extensions on this line of work. In both cases, no convergence rate guarantees are provided; see also [57].…”
Section: Related Work
confidence: 99%
“…In the above inequalities (35), we used the fact that the symmetric version of A := ∇f(X)Δ(U + U_r R_U) = ∇f(X)•(X − X_r) is a PSD matrix, i.e., given a vector y, y⊤(∇f(X)•(X − X_r))y ≥ 0. To show this, let g(t) = f(X + t yy⊤) be a function from ℝ → ℝ. Hence, ∇g(t) = ⟨∇f(X + t yy⊤), yy⊤⟩.…”
Section: Main Lemmas for the Smooth Case
confidence: 99%
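The directional-derivative step in this excerpt is the crux; here is a hedged LaTeX reconstruction, restoring the transposes and inner products the extraction dropped. The final inequality is my reading of how the excerpt uses convexity, not a quote from the citing paper.

```latex
% Restrict f to the line t -> X + t*y*y^T and differentiate:
\[
  g(t) = f(X + t\,yy^\top), \qquad
  g'(t) = \big\langle \nabla f(X + t\,yy^\top),\; yy^\top \big\rangle
        = y^\top \nabla f(X + t\,yy^\top)\, y .
\]
% If f is convex, g is convex, so g' is nondecreasing in t; comparing
% the derivative at two points along the line yields quadratic-form
% inequalities of the kind y^T ( ... ) y >= 0 invoked in (35).
```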
“…It repeatedly projects A(ω) to small subspaces, and minimizes the largest eigenvalue of the resulting projected matrix-valued function. Such subspace ideas have also been explored in special contexts such as convex semidefinite programs (Helmberg & Rendl (2000); Helmberg et al. (2014)), and the computation of the pseudospectral abscissa (Kressner & Vandereycken (2014); Meerbergen et al. (2017)).…”
Section: Introduction
confidence: 99%
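A minimal sketch of the subspace idea this excerpt describes, under simplifying assumptions (a scalar parameter ω, an affine test family A(ω) = A0 + ω·A1, and a generic 1-D solver; the function and variable names are illustrative, not from any of the cited papers): project the matrix-valued function onto the span of eigenvectors collected so far, minimize λ_max of the small projected function, then expand the subspace at the new point.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lam_max(M):
    return np.linalg.eigvalsh(M)[-1]

def leading_vec(M):
    w, V = np.linalg.eigh(M)
    return V[:, -1]

def subspace_eig_min(A, omega0, n, iters=10):
    """Minimize lambda_max(A(omega)) over a scalar omega by repeatedly
    minimizing the largest eigenvalue of the projected function
    V^T A(omega) V, then enlarging the subspace V (a sketch only)."""
    omega = omega0
    V = leading_vec(A(omega)).reshape(n, 1)     # initial 1-D subspace
    for _ in range(iters):
        # Small projected eigenvalue problem in place of the full one.
        res = minimize_scalar(lambda w: lam_max(V.T @ A(w) @ V))
        omega = res.x
        # Expand the subspace with the leading eigenvector of the full
        # matrix at the new point, then re-orthonormalize.
        V = np.hstack([V, leading_vec(A(omega)).reshape(n, 1)])
        V, _ = np.linalg.qr(V)
    return omega, lam_max(A(omega))

# Illustrative affine family A(w) = A0 + w*A1.
rng = np.random.default_rng(1)
n = 20
A0 = rng.standard_normal((n, n)); A0 = (A0 + A0.T) / 2
A1 = rng.standard_normal((n, n)); A1 = (A1 + A1.T) / 2
omega, val = subspace_eig_min(lambda w: A0 + w * A1, 0.0, n)
print(f"omega = {omega:.4f}, lambda_max = {val:.4f}")
```

The payoff is that each inner minimization works with a k×k matrix (k = number of accumulated eigenvectors) instead of the full n×n one, which is the same economy the spectral bundle method exploits.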
“…section 4.1) is only one possible algorithm among many. Other more specialized methods, such as the spectral bundle method of Helmberg and Rendl (2000), its second-order variant (Helmberg, Overton, and Rendl, 2014), or the stochastic-gradient method of d'Aspremont and El Karoui (2014), may prove effective alternatives.…”
Section: Blind Deconvolution
confidence: 99%