Abstract: In the present paper, we prove that the augmented Lagrangian method converges to KKT points under the quasinormality constraint qualification, which is associated with the external penalty theory. An interesting consequence is that the Lagrange multipliers estimates computed by the method remain bounded in the presence of the quasinormality condition. In order to establish a more general convergence result, a new sequential optimality condition for smooth constrained optimization, called PAKKT, is defined. The…
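For context, the classical augmented Lagrangian (multiplier) iteration that this line of work builds on can be sketched as follows. This is a minimal single-equality-constraint illustration under assumed simplifications (a crude gradient-descent inner solver with numerical gradients); it is not the PAKKT-based method of the paper, and all function names are ours.

```python
def num_grad(F, x, eps=1e-6):
    # central-difference numerical gradient of F at x
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((F(xp) - F(xm)) / (2 * eps))
    return g

def inner_minimize(F, x, iters=500):
    # gradient descent with Armijo backtracking on the subproblem
    for _ in range(iters):
        g = num_grad(F, x)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 < 1e-16:
            break
        t, fx = 1.0, F(x)
        xn = [xi - t * gi for xi, gi in zip(x, g)]
        while F(xn) > fx - 0.5 * t * gnorm2 and t > 1e-12:
            t *= 0.5
            xn = [xi - t * gi for xi, gi in zip(x, g)]
        x = xn
    return x

def augmented_lagrangian(f, h, x0, mu0=0.0, rho0=10.0, tol=1e-6, max_outer=50):
    # Solve min f(x) s.t. h(x) = 0 by the classical multiplier method:
    # minimize L(x) = f(x) + mu*h(x) + (rho/2)*h(x)^2, then update mu.
    x, mu, rho = list(x0), mu0, rho0
    for _ in range(max_outer):
        L = lambda x, mu=mu, rho=rho: f(x) + mu * h(x) + 0.5 * rho * h(x) ** 2
        x = inner_minimize(L, x)
        mu = mu + rho * h(x)  # first-order multiplier update
        if abs(h(x)) < tol:
            break
        rho *= 2.0  # tighten the penalty while still infeasible
    return x, mu

# Example: min x1^2 + x2^2  s.t.  x1 + x2 = 1; solution (0.5, 0.5), multiplier -1
x, mu = augmented_lagrangian(
    lambda x: x[0] ** 2 + x[1] ** 2,
    lambda x: x[0] + x[1] - 1.0,
    [0.0, 0.0],
)
```

Note how the multiplier estimates mu stay bounded here; the quoted results establish conditions (quasinormality) under which this boundedness holds in general.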
“…The recent paper [2] shows global convergence under QN. Since QN is independent of CCP, the results from [6] and from [2] give different global convergence results, while [2] and [6] generalize the original global convergence proof under CPLD in [1]. In a similar fashion, we will prove global convergence of an augmented Lagrangian method for GNEPs under CCP-GNEP (in Corollary 6.1) or QN-GNEP (in Theorem 6.3).…”
Section: Definition 4.3 (QN-GNEP) (mentioning)
confidence: 99%
“…In the augmented Lagrangian literature of optimization, global convergence has been proved under CPLD in [1], with improvements in [4,5], and more recently, under CCP in [6]. The recent paper [2] shows global convergence under QN. Since QN is independent of CCP, the results from [6] and from [2] give different global convergence results, while [2] and [6] generalize the original global convergence proof under CPLD in [1].…”
Section: Definition 4.3 (QN-GNEP) (mentioning)
confidence: 99%
“…The main differences between our approach and [28] are that we focus on optimality conditions and CQs that are associated with the proposed method; in particular, our global convergence proof is based on the CCP constraint qualification, while the one in [28] is based on the (stronger) CPLD. We also present a convergence result based on the QN constraint qualification, which extends [2] from optimization to GNEPs, proving in addition that the dual sequence is bounded. A main contribution of this paper is showing that AKKT is not an optimality condition for a general GNEP.…”
Generalized Nash Equilibrium Problems (GNEPs) generalize the classic Nash Equilibrium Problems (NEPs) in that each player's strategy set depends on the choices of the other players. In this work we study constraint qualifications and optimality conditions tailored to GNEPs and discuss their relations and their implications for the global convergence of algorithms. Surprisingly, unlike in nonlinear programming, we show that in general the KKT residual cannot be made arbitrarily small near a solution of a GNEP, and we discuss some important practical consequences of this fact. We also prove that this phenomenon is absent in an important class of GNEPs, including NEPs. Finally, under a weak constraint qualification that we introduce, we prove global convergence of an Augmented Lagrangian algorithm for GNEPs to a KKT point, and under the quasinormality constraint qualification for GNEPs we prove boundedness of the dual sequence.
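To fix notation (ours, following standard GNEP conventions rather than this abstract): in a GNEP with players $\nu = 1, \dots, N$, player $\nu$ solves, with the rivals' block $x^{-\nu}$ held fixed,

```latex
\min_{x^\nu} \; f_\nu(x^\nu, x^{-\nu})
\quad \text{s.t.} \quad
g^\nu(x^\nu, x^{-\nu}) \le 0,
```

and $\bar{x}$ is a KKT point of the GNEP if for each player $\nu$ there is a multiplier vector $\lambda^\nu \ge 0$ such that

```latex
\nabla_{x^\nu} f_\nu(\bar{x})
  + \sum_{i} \lambda^\nu_i \, \nabla_{x^\nu} g^\nu_i(\bar{x}) = 0,
\qquad
\lambda^\nu_i \, g^\nu_i(\bar{x}) = 0 \quad \text{for all } i.
```

The KKT residual discussed above measures how far a point is from satisfying this system, aggregated over all players; the surprising result is that this residual need not vanish along points approaching a solution.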
“…This property makes sequential optimality conditions useful tools for naturally providing a perturbed optimality condition, which is suitable for the definition of stopping criteria and for complexity analysis of several algorithms. Moreover, a careful study of the relation between sequential optimality conditions and classical stationarity measures under a constraint qualification yields global convergence results under weak assumptions [AFSS19,AMRS16,AMRS18].…”
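A perturbed optimality condition of this kind translates directly into a stopping test: terminate when the stationarity and complementarity residuals fall below a tolerance. A minimal sketch for inequality-constrained problems (all names are ours, not from the quoted work):

```python
def kkt_residual(grad_f, grad_g, g_vals, lam):
    """AKKT-style residual for min f(x) s.t. g(x) <= 0 at the current point.

    grad_f: gradient of the objective (list of floats)
    grad_g: list of constraint gradients (one list per constraint)
    g_vals: constraint values g_i(x)
    lam:    multiplier estimates, lam_i >= 0
    """
    n = len(grad_f)
    # stationarity: || grad f + sum_i lam_i * grad g_i ||_inf
    stat = [grad_f[j] + sum(l * gg[j] for l, gg in zip(lam, grad_g))
            for j in range(n)]
    r_stat = max(abs(v) for v in stat)
    # complementarity: max_i | min(lam_i, -g_i(x)) |
    r_comp = max(abs(min(l, -gv)) for l, gv in zip(lam, g_vals))
    return max(r_stat, r_comp)

def converged(grad_f, grad_g, g_vals, lam, tol=1e-6):
    # stopping criterion: both residuals below tolerance
    return kkt_residual(grad_f, grad_g, g_vals, lam) <= tol
```

For example, for min -x s.t. x - 1 <= 0, the point x = 1 with multiplier 1 gives a zero residual, while any infeasible or non-stationary pair gives a positive one; driving this residual to zero along iterates is exactly the sequential optimality condition the quote describes.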
In this work, we extend the so-called Approximate Karush-Kuhn-Tucker (AKKT) condition, initially introduced in nonlinear programming [AHM11], to nonlinear symmetric cone programming. We also present a new condition for the nonlinear semidefinite programming problem, which we call Trace AKKT (TAKKT) and which proves more practical than AKKT in that setting. We prove that both AKKT and TAKKT are optimality conditions, and we obtain global convergence results for the augmented Lagrangian method. We introduce strict qualification conditions to measure the strength of the global convergence results presented, and through them we verify that our results improve on those known in the literature. We also present a proof for a particular case of the conjecture made in [AMS07].
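For reference, in nonlinear programming the AKKT condition of [AHM11] can be stated as follows (our notation; the symmetric cone version studied in this work generalizes it). A feasible point $\bar{x}$ of $\min f(x)$ s.t. $g(x) \le 0$, $h(x) = 0$ satisfies AKKT if there are sequences $x^k \to \bar{x}$, $\lambda^k \ge 0$, $\mu^k$ with

```latex
\nabla f(x^k)
  + \sum_i \lambda^k_i \, \nabla g_i(x^k)
  + \sum_j \mu^k_j \, \nabla h_j(x^k) \to 0,
\qquad
\min\{\lambda^k_i,\, -g_i(x^k)\} \to 0 \quad \text{for all } i.
```

Every local minimizer satisfies AKKT without any constraint qualification, which is what makes it a genuine sequential optimality condition rather than a stationarity concept that can fail at degenerate minimizers.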