Abstract. Many algorithms that ensure second-order necessary optimality conditions have been developed in the literature. To the best of our knowledge, none of them guarantees the Strong Second-Order Necessary Condition (SSONC). Gould and Toint [5] showed that we cannot expect SSONC in the barrier method. In this paper, we argue by an example that the same is true for the second-order augmented Lagrangian method introduced in [1]. This reinforces the Weak Second-Order Necessary Condition as the appropriate condition for…
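For context, the two conditions contrasted above can be stated in a standard NLP form (a textbook formulation, not quoted from the paper): for $\min f(x)$ subject to $h(x)=0$, $g(x)\le 0$, at a KKT point $(x^*,\lambda^*,\mu^*)$ with active set $A(x^*)=\{i : g_i(x^*)=0\}$,

```latex
% Strong second-order necessary condition (SSONC):
% the Lagrangian Hessian is positive semidefinite on the critical cone
d^\top \nabla^2_{xx} L(x^*,\lambda^*,\mu^*)\, d \;\ge\; 0
\quad \text{for all } d \text{ such that }
\nabla h_j(x^*)^\top d = 0 \ \forall j, \quad
\nabla g_i(x^*)^\top d \le 0 \ (i \in A(x^*)), \quad
\nabla g_i(x^*)^\top d = 0 \ (i \in A(x^*),\ \mu_i^* > 0).

% Weak second-order necessary condition (WSONC):
% the same inequality, required only on the smaller subspace
\{\, d \;:\; \nabla h_j(x^*)^\top d = 0 \ \forall j, \quad
\nabla g_i(x^*)^\top d = 0 \ \forall i \in A(x^*) \,\}.
```

Since the weak subspace is contained in the critical cone, SSONC implies WSONC; the counterexamples cited here show that barrier and augmented Lagrangian iterates can converge to points satisfying only the weak version.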
“…Indeed, [53] gives an explicit example illustrating that accumulation points of trajectories generated by barrier algorithms will converge to stationary points satisfying the weak second-order necessary condition, but not the strong version. Later, Andreani and Secchin [10] made a small modification in Gould and Toint's counterexample to come to the same conclusion for augmented Lagrangian-type algorithms. Hence, the most we can expect from our method is that it generates points that approximately satisfy the weak second-order necessary optimality conditions.…”
“…Therefore, conditions (10) and (11) both hold. We now check for the complementarity condition (14).…”
Section: Per-iteration Analysis and A Bound For The Number Of Iterations
A key problem in mathematical imaging, signal processing, and computational statistics is the minimization of non-convex objective functions that may be non-differentiable at the relative boundary of the feasible set. This paper proposes a new family of first- and second-order interior-point methods for non-convex optimization problems with linear and conic constraints, combining logarithmically homogeneous barriers with quadratic and cubic regularization, respectively. Our approach is based on a potential-reduction mechanism and, under Lipschitz continuity of the corresponding derivative with respect to the local barrier-induced norm, attains a suitably defined class of approximate first- or second-order KKT points with worst-case iteration complexity $O(\varepsilon^{-2})$ (first-order) and $O(\varepsilon^{-3/2})$ (second-order), respectively. Based on these findings, we develop new path-following schemes attaining the same complexity, modulo adjusting constants. These complexity bounds are known to be optimal in the unconstrained case, and our work shows that they are upper bounds in the case with complicated constraints as well. To the best of our knowledge, this work is the first to achieve these worst-case complexity bounds under such weak conditions for general conic-constrained non-convex optimization problems.
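The cubic-regularization mechanism behind the $O(\varepsilon^{-3/2})$ second-order rate can be illustrated in the unconstrained setting. This is not the paper's actual method (which handles conic constraints through barriers and barrier-induced norms); it is a minimal sketch of a Nesterov–Polyak-style cubic-regularized Newton step, with hypothetical helper names, assuming a positive semidefinite Hessian along the iterates so the secular equation has a unique root.

```python
import numpy as np

def cubic_newton_step(g, H, M):
    """One cubic-regularized Newton step: minimize the local model
       m(s) = g^T s + (1/2) s^T H s + (M/6) ||s||^3.
    The minimizer satisfies (H + (M/2) r I) s = -g with r = ||s||,
    so we solve the secular equation ||s(r)|| = r by bisection.
    Assumes H + (M/2) r I stays positive definite (e.g. H PSD)."""
    def s_of(r):
        return np.linalg.solve(H + 0.5 * M * r * np.eye(len(g)), -g)
    hi = 1.0
    while np.linalg.norm(s_of(hi)) > hi:   # grow until ||s(hi)|| <= hi
        hi *= 2.0
    lo = 0.0
    for _ in range(60):                    # ||s(r)|| - r is decreasing in r
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(s_of(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return s_of(hi)

def minimize(f_grad, f_hess, x0, M=10.0, tol=1e-8, iters=100):
    """Drive the step to an approximate first-order point (||grad|| < tol)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = f_grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x + cubic_newton_step(g, f_hess(x), M)
    return x
```

For example, on $f(x)=x_1^4+x_2^2$ starting from $(1,1)$ with $M=30$ (an upper bound on the Hessian's Lipschitz constant near the start), the iterates contract toward the minimizer at the origin.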
“…As in NLP, we have that $T(x) \not\supset T_{\mathrm{lin}}^{BS}(x)$ in general. For example, consider the constraints $-x_1^3$…”
Section: An Adequate Linearization of the Feasible Set
“…It is well known in the field of NLP that the strong second-order necessary condition cannot be expected at the limit points of practical algorithms [3]. Instead, the appropriate concept in this context is the weak second-order necessary condition, which consists of relaxing the requirements on ∇g in (5).…”
Section: Second-Order Optimality Conditions: Strong and Weak
Abstract. In this work we proposed two new second-order necessary conditions for mathematical programming problems with cardinality constraints (MPCaC), referred to as MPCaC-SSONC and MPCaC-WSONC, which are based on the first-order concept of M-stationarity. We also discussed constraint qualifications (CQs) required for second-order M-stationarity to hold at minimizers. In addition, we proposed a specialized relaxed constant-rank CQ (MPCaC-RCRCQ) for second-order optimality in MPCaC. Finally, we compared these results with previous works that used different linearizations of the feasible set and different stationarity concepts.