2019
DOI: 10.48550/arxiv.1906.11357
Preprint
An Inexact Augmented Lagrangian Framework for Nonconvex Optimization with Nonlinear Constraints

Cited by 8 publications (19 citation statements)
References 0 publications
“…Namely, we need to show that ALM converges to an AFAC point in polynomial time. A step in this direction was recently given by Sahin et al. [31]. They proved that ALM computes a point satisfying the 2nd-order condition for the augmented Lagrangian function.…”
Section: Discussion
Mentioning, confidence: 99%
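
For context, a minimal sketch of the augmented Lagrangian this statement refers to, assuming the standard form for an equality-constrained problem min f(x) subject to g(x) = 0, with multiplier λ and penalty parameter β > 0 (the cited works may scale or parameterize the terms differently):

\mathcal{L}_{\beta}(x, \lambda) = f(x) + \langle \lambda, g(x) \rangle + \frac{\beta}{2} \, \| g(x) \|^{2}

The quoted 2nd-order condition then asks, roughly, that the computed point be approximately stationary for \mathcal{L}_{\beta} and that \nabla^{2}_{xx} \mathcal{L}_{\beta}(x, \lambda) admit no strongly negative curvature directions.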
“…As for approximately 2-critical points, we are only aware of [17,31]. But both papers use a different 2nd-order condition, which is not easy to translate into our setting.…”
Section: Critical Points in Nonlinear Programming
Mentioning, confidence: 99%
“…Paper [16] extends this previous work by presenting a hybrid penalty/AL-based method that also obtains the same aforementioned complexity. Finally, papers [26] and [15] respectively present O(ε^{-3} log ε^{-1}) and O(ε^{-5/2} log ε^{-1}) iteration complexities of some AL-based methods that perform under-relaxed Lagrange multiplier updates only when the penalty parameter is updated. It is worth noting that when these methods initialize the penalty parameter on the order of O(1), they perform an O(log ε^{-1}) number of multiplier updates of the form (4) with θ = 0, χ = χ_k, and χ_k approaching 0.…”
Section: Introduction
Mentioning, confidence: 99%
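
To make the mechanism in this statement concrete, here is a hypothetical Python sketch of an augmented-Lagrangian loop in which the Lagrange multiplier is updated only when the penalty parameter is increased. The function names, step-size rule, and the classical first-order update lam + beta * g(x) are illustrative assumptions; the specific update rule "(4)" with θ = 0 and χ = χ_k is not reproduced from the cited papers.

import numpy as np

def solve_subproblem(f_grad, g, g_jac, x, lam, beta, tol, max_iter=500):
    # Approximately minimize the augmented Lagrangian in x by gradient
    # descent; a conservative step size is tied to the penalty parameter
    # (assumes modest Lipschitz constants for this toy setting).
    step = 1.0 / (10.0 + 10.0 * beta)
    for _ in range(max_iter):
        grad = f_grad(x) + g_jac(x).T @ (lam + beta * g(x))
        if np.linalg.norm(grad) <= tol:
            break
        x = x - step * grad
    return x

def inexact_alm(f_grad, g, g_jac, x0, beta0=10.0, growth=2.0, eps=1e-6, outer=50):
    x = x0.astype(float).copy()
    lam = np.zeros_like(g(x))
    beta = beta0
    for k in range(outer):
        # Inexact subproblem solve: the tolerance tightens as k grows.
        x = solve_subproblem(f_grad, g, g_jac, x, lam, beta, tol=1.0 / (k + 1))
        if np.linalg.norm(g(x)) <= eps:
            break
        # The penalty parameter is increased, and ONLY then is the
        # multiplier updated (classical stand-in for the rule "(4)").
        beta *= growth
        lam = lam + beta * g(x)
    return x, lam

# Toy usage: minimize ||x||^2 subject to x_0 + x_1 - 1 = 0.
f_grad = lambda x: 2.0 * x
g = lambda x: np.array([x[0] + x[1] - 1.0])
g_jac = lambda x: np.array([[1.0, 1.0]])
x_star, lam_star = inexact_alm(f_grad, g, g_jac, np.zeros(2))

As the quote notes, if beta0 is chosen large enough that the feasibility test passes before the penalty parameter is ever increased, the multiplier is never updated and the loop degenerates into a pure penalty method.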
“…For the case where each component of g is convex and K = {0} × ℜ^k_{++}, i.e., the constraint is of the form g(x) = 0 and/or g(x) ≤ 0, papers [16,29] present PAL methods that perform Lagrange multiplier updates only when the penalty parameter is updated. Hence, if the penalty parameter is never updated (which usually happens when the initial penalty parameter is chosen to be sufficiently large), then these methods never perform Lagrange multiplier updates, and thus they behave more like penalty methods.…”
Section: Introduction
Mentioning, confidence: 99%
“…It is now worth discussing how NL-IAPIAL compares with the works [21,9,29,16,7,31]. First, it extends the IAPIAL method of [21], which requires K = {0} and g affine, to the case where g is K-convex.…”
Section: Introduction
Mentioning, confidence: 99%