2013
DOI: 10.1137/120888867

A Nonmonotone Proximal Bundle Method with (Potentially) Continuous Step Decisions

Abstract: We present a convex nondifferentiable minimization algorithm of proximal bundle type that does not rely on measuring descent of the objective function to declare the so-called serious steps; rather, a merit function is defined which is decreased at each iteration, leading to a (potentially) continuous choice of the stepsize between zero (the null step) and one (the serious step). By avoiding the discrete choice, the convergence analysis is simplified, and we can more easily obtain efficiency estimates…
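The following is a minimal, hypothetical sketch of the idea described in the abstract: a proximal bundle iteration in which the stability center is moved by a continuously chosen fraction alpha in [0, 1] towards the candidate point (alpha = 0 resembling a null step, alpha = 1 a serious step). The names (oracle, solve_master) and the simple rule used to pick alpha are illustrative assumptions, not the paper's merit-function-based rule; the master problem is solved with SciPy's SLSQP purely for convenience.

# Minimal sketch of a proximal bundle iteration with a continuous step
# decision alpha in [0, 1]; illustrative only, NOT the paper's algorithm.
import numpy as np
from scipy.optimize import minimize

def oracle(x):
    """Toy nonsmooth convex objective f(x) = max(|x1|, |x2|) and one subgradient."""
    pieces = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
    vals = pieces @ x
    i = int(np.argmax(vals))
    return float(vals[i]), pieces[i]

def solve_master(bundle, center, t):
    """Proximal cutting-plane master problem:
       min_y  max_i [ f_i + g_i.(y - x_i) ] + ||y - center||^2 / (2 t)."""
    n = len(center)
    def obj(z):                      # z = (y, r); minimize r plus the proximal term
        y, r = z[:n], z[n]
        return r + np.dot(y - center, y - center) / (2.0 * t)
    cons = [{'type': 'ineq',
             'fun': lambda z, xi=xi, fi=fi, gi=gi: z[n] - (fi + gi @ (z[:n] - xi))}
            for (xi, fi, gi) in bundle]
    z0 = np.concatenate([center, [max(fi for _, fi, _ in bundle)]])
    res = minimize(obj, z0, method='SLSQP', constraints=cons)
    return res.x[:n], res.x[n]       # candidate point and model value there

center = np.array([2.0, -1.5])       # initial stability center
f_c, g_c = oracle(center)
bundle = [(center.copy(), f_c, g_c)]
t = 1.0                              # proximal parameter, kept fixed in this sketch

for it in range(20):
    y, model_val = solve_master(bundle, center, t)
    f_y, g_y = oracle(y)
    predicted = f_c - model_val      # model decrease, always >= 0
    if predicted <= 1e-8:
        break
    achieved = f_c - f_y
    # Hypothetical continuous step rule: move the center a fraction alpha of the
    # way towards y (alpha = 0 ~ null step, alpha = 1 ~ serious step).
    alpha = float(np.clip(achieved / predicted, 0.0, 1.0))
    center = center + alpha * (y - center)
    f_c, g_c = oracle(center)
    bundle.append((y, f_y, g_y))     # always enrich the model with the new cut
    print(f"it={it:2d}  alpha={alpha:.2f}  f(center)={f_c:.4f}")

In the actual method, alpha would be chosen so as to decrease a dedicated merit function rather than by the simple achieved/predicted ratio used here.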

Cited by 21 publications (24 citation statements); references 23 publications (49 reference statements).
“…The proposed formulation is likely to be able to scale to much larger dimensions when using column-generation techniques, although efficiently generating the columns with negative reduced costs would then be nontrivial and would require specific algorithmic developments (Astorino et al 2013; Frangioni et al 2014).…”
Section: Results (mentioning, confidence: 99%)
“…Yet, as soon as a SS is performed, each B_k can be entirely reset: hence, Assumption 3(i) is weaker than Assumption 2(i). Assumption 3(ii) allows on-line tuning of t, which is well-known to be crucial in practice: it is not necessarily true that “the best” t after a NS is smaller than the current one (e.g., [1]). Yet, the combined effect of (i) and (ii) is that, during a sequence of consecutive NS, at length the values ν in (2.14) are nonincreasing.…”
Section: Assumption 1 (mentioning, confidence: 99%)
“…Having upper estimates available also allows one to complement the usual lower models (of the individual components f_k) of f, which traditionally drive the optimization process, with an upper model that provides upper estimates of f(x) even if no oracle has ever been called at x. This has already been done in [1], but only on a small subset of the search space: exploiting (1.2) we extend the upper model to all of X. This is the fundamental technical idea that allows us to prove convergence without necessarily requiring that all components have been evaluated at SS.…”
Section: Introduction (mentioning, confidence: 99%)
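For context on the lower/upper-model terminology in the excerpt above, the following is a minimal LaTeX sketch of the standard constructs for a convex component f_k with already-evaluated points x_i and subgradients g_i^k; it is a generic illustration by convexity, not the specific definitions used in [1] or in the citing paper:

% Lower (cutting-plane) model of a component f_k:
\check f_k(x) \;=\; \max_{i \in \mathcal{B}_k} \big\{\, f_k(x_i) + \langle g_i^k,\, x - x_i \rangle \,\big\} \;\le\; f_k(x)
% Upper estimate of f at convex combinations of evaluated points:
\bar f(x) \;=\; \sum_i \theta_i\, f(x_i) \;\ge\; f(x)
\quad \text{whenever } x = \sum_i \theta_i x_i,\ \ \theta_i \ge 0,\ \ \sum_i \theta_i = 1 .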
“…Adopting the above bilevel CV scheme, if we used a grid search, we would need to solve, for each block and for each value of C in the grid, T^2 problems of type (1) [or, equivalently, QP(X)] written in correspondence with T^2 different training sets…”
Section: Model Selection Algorithm (mentioning, confidence: 99%)