2005
DOI: 10.1137/s1052623403426532

On the Convergence of Successive Linear-Quadratic Programming Algorithms

Abstract: The global convergence properties of a class of penalty methods for nonlinear programming are analyzed. These methods include successive linear programming approaches, and more specifically, the successive linear-quadratic programming approach presented by Byrd, Gould, Nocedal, and Waltz (Math. Programming 100(1), 2004). Every iteration requires the solution of two trust-region subproblems involving piecewise linear and quadratic models, respectively. It is shown that, for a fixed penalty parameter, the seque…
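For orientation, here is a hedged sketch of the two per-iteration subproblems the abstract refers to; the notation (penalty function, constraint index sets, trust-region radii) is assumed for illustration and is not quoted from the paper. The method works with the l1 exact penalty function

\[
  \phi_\nu(x) \;=\; f(x) \;+\; \nu \sum_{i \in \mathcal{E}} |c_i(x)|
                    \;+\; \nu \sum_{i \in \mathcal{I}} \max\{0,\, -c_i(x)\}.
\]

The LP phase minimizes the piecewise-linear model of \(\phi_\nu\) in an infinity-norm trust region (a linear program after the standard reformulation):

\[
  \min_{d}\ \nabla f(x_k)^{T} d
    \;+\; \nu \sum_{i \in \mathcal{E}} \bigl| c_i(x_k) + \nabla c_i(x_k)^{T} d \bigr|
    \;+\; \nu \sum_{i \in \mathcal{I}} \max\bigl\{0,\, -c_i(x_k) - \nabla c_i(x_k)^{T} d \bigr\}
  \quad \text{s.t.}\ \ \|d\|_\infty \le \Delta_k^{\mathrm{LP}}.
\]

The EQP phase then minimizes a quadratic model subject to the linearizations of the constraints in the working set identified by the LP solution, inside a second trust region, and the full step is assembled from these two pieces.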

Cited by 49 publications (60 citation statements). References 14 publications.

“…The penalty update algorithm above guarantees that ν is chosen large enough to ensure convergence to a stationary point [4]. Although the procedure does require the solution of some additional linear programs, our experience is that it results in an overall savings in iterations (and total LP solves) by achieving a better penalty parameter value more quickly, compared with rules which update the penalty parameter based on monitoring progress in feasibility.…”
Section: Penalty Parameter Update Strategy (mentioning)
confidence: 99%
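The quotation above describes a steering-type rule: the penalty parameter ν is raised until the LP step under the current ν recovers a sufficient fraction of the best possible reduction in linearized infeasibility. A minimal sketch of that logic follows; the helper callables solve_lp and lin_infeas and the constants eps1, tau, max_tries are illustrative assumptions, not the authors' code.

```python
import numpy as np

def steer_penalty(x, nu, solve_lp, lin_infeas, delta,
                  eps1=0.1, tau=10.0, max_tries=20):
    """Steering-style penalty update (illustrative sketch only).

    solve_lp(x, nu, delta) -> step d minimizing the piecewise-linear penalty
                              model in an infinity-norm trust region of radius
                              delta; nu = np.inf means "feasibility only".
    lin_infeas(x, d)       -> linearized constraint violation at x + d.
    """
    zero_step = np.zeros_like(x)
    current = lin_infeas(x, zero_step)

    # Best achievable reduction: solve the LP that ignores the objective.
    d_feas = solve_lp(x, np.inf, delta)
    best_gain = current - lin_infeas(x, d_feas)

    d = solve_lp(x, nu, delta)
    for _ in range(max_tries):
        # Accept nu once the penalty step achieves a fixed fraction (eps1)
        # of the best achievable reduction in linearized infeasibility.
        if current - lin_infeas(x, d) >= eps1 * best_gain:
            break
        nu *= tau                       # otherwise raise nu and re-solve the LP
        d = solve_lp(x, nu, delta)
    return nu, d
```

The extra LP solves in this loop are the "additional linear programs" the quotation mentions; the reported payoff is that a workable ν is found earlier, reducing total iterations and LP solves.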
“…[7]. This property is the most appealing feature of exact penalty methods because one choice of ν may be adequate for the entire minimization procedure.…”
Section: Classical Penalty Framework (mentioning)
confidence: 99%
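For context on the exactness property just quoted, a standard result (stated here as background, not taken from [7]): if \(x^*\) is a local solution of the nonlinear program satisfying a suitable constraint qualification, with Lagrange multipliers \(y^*\), then \(x^*\) is a local minimizer of the l1 penalty function for every

\[
  \nu > \|y^*\|_\infty ,
\]

so once the penalty parameter exceeds the size of the optimal multipliers it never needs to be increased again.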
“…A global convergence analysis of a penalty SLQP method is given in [7]. In that study, condition (5.14) is replaced by the condition…”
Section: Application To a Sequential (mentioning)
confidence: 99%
“…To the best of our knowledge, the results presented here are the first worst-case global evaluation bounds for constrained optimization when both the objective and the constraints are allowed to be nonconvex. For approximate optimality for problem (1.3), we are content with getting sufficiently close to a KKT point of our problem (1.3), namely, to any $x^*$ satisfying $g(x^*) + J(x^*)^T y^* = 0$ and $c(x^*) = 0$, (1.4) for some Lagrange multiplier $y^* \in \mathbb{R}^m$, where $g$ denotes the gradient of $f$, and $J$ the Jacobian of the constraints $c$. Recall that the KKT points (1.4) of (1.3) correspond to critical points of (1.2) for sufficiently large ρ provided usual constraint qualifications hold [1,6,15]. The exact penalty algorithm for solving (1.3) proceeds by sequentially minimizing the penalty function (1.2) using the trust-region or quadratic-regularization approach, and then adaptively increasing the penalty parameter ρ through a steering procedure [1].…”
Section: Introduction (mentioning)
confidence: 99%
“…For approximate optimality for problem (1.3), we are content with getting sufficiently close to a KKT point of our problem (1.3), namely, to any $x^*$ satisfying $g(x^*) + J(x^*)^T y^* = 0$ and $c(x^*) = 0$, (1.4) for some Lagrange multiplier $y^* \in \mathbb{R}^m$, where $g$ denotes the gradient of $f$, and $J$ the Jacobian of the constraints $c$. Recall that the KKT points (1.4) of (1.3) correspond to critical points of (1.2) for sufficiently large ρ provided usual constraint qualifications hold [1,6,15]. The exact penalty algorithm for solving (1.3) proceeds by sequentially minimizing the penalty function (1.2) using the trust-region or quadratic-regularization approach, and then adaptively increasing the penalty parameter ρ through a steering procedure [1]. We obtain that when the penalty parameter is bounded (which is a reasonable assumption since the penalty is exact) the exact penalty algorithm takes at most $O(\epsilon^{-2})$ total problem-evaluations to satisfy the KKT conditions (1.4) within ǫ or reach within ǫ of an infeasible (first-order) critical point of the feasibility measure $\|c(x)\|$.…”
Section: Introduction (mentioning)
confidence: 99%
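A compact restatement of the approximate-optimality target in the quotation above (measuring "within ε" by residual norms is an assumption made here for concreteness): the algorithm stops at a point \(x\) with multiplier estimate \(y\) once

\[
  \|g(x) + J(x)^{T} y\| \le \epsilon
  \quad\text{and}\quad
  \|c(x)\| \le \epsilon ,
\]

or once \(x\) is within ε of a first-order critical point of the infeasibility measure \(\|c(x)\|\); the quoted complexity bound states that at most $O(\epsilon^{-2})$ problem evaluations are needed to reach one of these two outcomes when the penalty parameter stays bounded.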