Abstract: Mathematical Programs with Complementarity Constraints (MPCCs) are difficult optimization problems that do not satisfy the majority of the usual constraint qualifications (CQs) for standard nonlinear optimization. Despite this fact, classical methods behave well when applied to MPCCs. Recently, Izmailov, Solodov and Uskov proved that first-order augmented Lagrangian methods, under a natural adaptation of the Linear Independence Constraint Qualification to the MPCC setting (MPCC-LICQ), converge to strongly stationary points…
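For reference, a minimal sketch of the standard MPCC formulation discussed in this abstract (the notation is assumed here, not taken from the paper):

```latex
\min_{x}\; f(x) \quad \text{s.t.} \quad h(x) = 0,\quad g(x) \le 0,\quad
0 \le G(x) \;\perp\; H(x) \ge 0,
```

where 0 ≤ G(x) ⊥ H(x) ≥ 0 abbreviates G(x) ≥ 0, H(x) ≥ 0 and G(x)ᵀH(x) = 0. The complementarity constraint is what causes standard CQs such as LICQ and Mangasarian-Fromovitz to fail at every feasible point, which is why MPCC-tailored conditions such as MPCC-LICQ are used instead.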
“…Problem (3) belongs to the class of Mathematical Programs with Complementarity Constraints (MPCC). We make further comments about these problems in section 4.1, revisiting and extending previously known results about the convergence of augmented Lagrangian methods [10,23]. Furthermore, we prove that the sequence of Lagrange multipliers estimates generated by the method applied to the general problem (P) is bounded whenever the quasinormality condition holds at the accumulation point.…”
Section: Introduction (supporting)
confidence: 54%
“…This is a direct consequence of Lemma 4.1, and Theorems 2.2 and 3.3. Theorem 4.2 extends Theorem 3.2 of [23] (see also [10]) in the case where the lower level strict complementarity holds. This previous result deals exclusively with augmented Lagrangian methods and was obtained assuming MPCC-LICQ, a much more stringent condition than MPCC-quasinormality.…”
Section: Strength of the PAKKT Condition: AKKT vs PAKKT Methods (mentioning)
confidence: 54%
“…It is worth noticing that, although we make some progress, we do not intend to extend existing convergence results for MPCC. In fact, it is known that this type of result was obtained for specific methods, in particular Algorithm 1 [10,23]. On the other hand, our approach focuses on sequential optimality conditions, which do not depend on the method considered.…”
Section: Strength of the PAKKT Condition: AKKT vs PAKKT Methods (mentioning)
In the present paper, we prove that the augmented Lagrangian method converges to KKT points under the quasinormality constraint qualification, which is associated with the external penalty theory. An interesting consequence is that the Lagrange multiplier estimates computed by the method remain bounded whenever the quasinormality condition holds. In order to establish a more general convergence result, we define a new sequential optimality condition for smooth constrained optimization, called PAKKT. The new condition takes into account the sign of the dual sequence, constituting an adequate sequential counterpart to the (enhanced) Fritz John necessary optimality conditions proposed by Hestenes and later extensively treated by Bertsekas. PAKKT points are substantially better than points obtained by the classical Approximate KKT (AKKT) condition, which has been used to establish theoretical convergence results for several methods. In particular, we present a simple problem with complementarity constraints for which all feasible points are AKKT, while only the solutions and one pathological point are PAKKT. This shows the efficiency, on such problems, of methods that reach PAKKT points, particularly the augmented Lagrangian algorithm. We also provide the appropriate strict constraint qualification associated with the PAKKT sequential optimality condition, called PAKKT-regular, and we prove that it is strictly weaker than both quasinormality and the cone continuity property. PAKKT-regular connects these two independent branches of constraint qualifications, generalizing all previous theoretical convergence results for the augmented Lagrangian method in the literature. The problem under consideration is to minimize f(x) over x in X, where f : R^n -> R and X is the feasible set composed of equality and inequality constraints of the form X = {x | h(x) = 0, g(x) <= 0}.
* This work has been partially supported by CEPID-CeMEAI (FAPESP 2013/07375-0), FAPESP (Grant 2013/05475-7) and CNPq (Grant 303013/2013-3).
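As a rough sketch, following standard definitions in the sequential optimality literature (the precise PAKKT definition is in the paper itself), the AKKT condition holds at a feasible point x* when there exist sequences x^k → x* and multipliers (λ^k, μ^k) with μ^k ≥ 0 such that:

```latex
\nabla f(x^k) + \nabla h(x^k)\lambda^k + \nabla g(x^k)\mu^k \to 0,
\qquad \min\{-g(x^k),\, \mu^k\} \to 0.
```

PAKKT additionally restricts the signs of the products λ_i^k h_i(x^k) and μ_j^k g_j(x^k) for the multipliers that grow as fast as the norm of the dual sequence; this is the sense in which the condition "takes into account the sign of the dual sequence."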
“…Optimization problems with equality constraints can be solved using Lagrange multipliers, and problems with inequality constraints can be solved using Lagrange multipliers together with the Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient when the model is convex and which determine whether the solution obtained by the Lagrange multiplier method is optimal [27]. The general form of a constrained optimization model is represented by Eq. 8; the objective function and the constraint function in Eq. 8 are differentiable [28].…”
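To make the quoted remark concrete, here is a minimal illustrative example (a toy problem chosen for this note, not one from the cited works): for an equality-constrained convex quadratic, the Lagrange stationarity and feasibility conditions form a linear KKT system that can be solved directly.

```python
import numpy as np

# Minimize f(x, y) = x^2 + y^2 subject to x + y = 1.
# Lagrangian: L(x, y, lam) = x^2 + y^2 + lam * (x + y - 1).
# Stationarity: 2x + lam = 0 and 2y + lam = 0; feasibility: x + y = 1.
# These three linear equations form the KKT system K z = rhs, z = (x, y, lam).
K = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
rhs = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(K, rhs)
print(x, y, lam)  # -> 0.5 0.5 -1.0
```

Because the problem is convex, the KKT conditions are both necessary and sufficient here, so (0.5, 0.5) is the global minimizer.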
The application of reinforcement learning in industrial fields makes the safety of the agent a research hotspot. Traditional methods mainly alter the objective function and the exploration process of the agent to address the safety problem. These methods, however, can hardly prevent the agent from falling into dangerous states, because most of them ignore the damage caused by unsafe states; as a result, most solutions are unsatisfactory. To solve this problem, we propose a safe Q-learning method based on constrained Markov decision processes that adds safety constraints as prerequisites to the model, improving the standard Q-learning algorithm so that it seeks the optimal solution while ensuring that the safety premise is satisfied. When computing the optimal state-action value, the feasible space of the agent is limited to the safe space, which guarantees safety by filtering the action space through the added constraints. Because traditional solution methods tend to reach only local optima and are therefore not applicable to the safe Q-learning model, we use the Lagrange multiplier method, with linearized constraint functions, to solve for the optimal action in the current state; this not only improves the efficiency and accuracy of the algorithm but also guarantees a globally optimal solution. Experiments verify the effectiveness of the algorithm.
INDEX TERMS: Constrained Markov decision processes, safe reinforcement learning, Q-learning, constraint, Lagrange multiplier.
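The core idea of restricting the agent's feasible space by filtering the action space can be sketched in a few lines. This is a deliberately simplified toy (tabular Q-learning on a 3x3 grid with one unsafe cell, names and reward values invented for illustration), not the paper's algorithm, which additionally uses Lagrange multipliers with linearized constraints.

```python
import random

random.seed(0)
N = 3
START, GOAL, UNSAFE = (0, 0), (2, 2), (1, 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(s, a):
    # Move within grid bounds; reward +10 at the goal, -1 per step otherwise.
    nxt = (min(max(s[0] + a[0], 0), N - 1), min(max(s[1] + a[1], 0), N - 1))
    return nxt, (10.0 if nxt == GOAL else -1.0)

def safe_actions(s):
    # Constraint filter: keep only actions that do not enter the unsafe cell.
    return [a for a in ACTIONS if step(s, a)[0] != UNSAFE]

Q = {(i, j): {a: 0.0 for a in ACTIONS} for i in range(N) for j in range(N)}
alpha, gamma, eps = 0.5, 0.95, 0.2

for _ in range(2000):
    s = START
    for _ in range(50):
        acts = safe_actions(s)
        # Epsilon-greedy exploration restricted to the safe action set.
        a = random.choice(acts) if random.random() < eps else max(acts, key=lambda b: Q[s][b])
        nxt, r = step(s, a)
        target = r + gamma * max(Q[nxt][b] for b in safe_actions(nxt))
        Q[s][a] += alpha * (target - Q[s][a])
        s = nxt
        if s == GOAL:
            break

# Greedy rollout under the learned policy: reaches the goal, never enters UNSAFE.
s, path = START, [START]
while s != GOAL and len(path) < 10:
    a = max(safe_actions(s), key=lambda b: Q[s][b])
    s, _ = step(s, a)
    path.append(s)
print(path)
```

Because unsafe transitions are removed before action selection, the safety constraint holds during training as well as at deployment, which is the property the abstract emphasizes.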
“…The proposed model adopts diffusion terms as the regularization term and can therefore obtain higher-quality segmentation results. Instead of solving high-order nonlinear PDEs, the alternating direction method of multipliers (ADMM) [28,16,27] (i.e., the augmented Lagrangian method (ALM) [2,6,35]) is applied to transform the energy minimization problem of the proposed model into three subproblems, which are then solved by the fast Fourier transform (FFT) [14], a projection formula [27], an analytical soft-thresholding equation [26,35] and a threshold method [25,35]. Moreover, we propose a new fast algorithm (NVPM) based on normal vector projection and an alternating optimization method to solve our model.…”
In this paper, a new variational model is proposed for image segmentation based on active contours, nonlinear diffusion and level sets. It includes a Chan-Vese model-based data fitting term and a regularization term that uses the potential functions (PF) of nonlinear diffusion. The former term can segment the image by region partition instead of having to rely on edge information. The latter term automatically preserves image edges while smoothing noisy regions. To improve computational efficiency, the implementation of the proposed model does not directly solve the high-order nonlinear partial differential equations and instead exploits the efficient alternating direction method of multipliers (ADMM), which allows the use of the fast Fourier transform (FFT), an analytical generalized soft-thresholding equation, and a projection formula. In particular, we propose a new fast algorithm, the normal vector projection method (NVPM), based on an alternating optimization method and normal vector projection. Its stability matches that of ADMM, and it converges faster. Extensive numerical experiments on grey and colour images validate the effectiveness of the proposed model and the efficiency of the algorithms.
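The ADMM splitting pattern mentioned in these snippets (quadratic subproblem plus analytical soft thresholding plus dual update) can be illustrated on a much simpler problem than image segmentation. The sketch below applies ADMM to a toy lasso problem; the data and parameters are invented for illustration and have nothing to do with the segmentation model itself.

```python
import numpy as np

# ADMM for: minimize 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x - z = 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[1, 4]] = [2.0, -3.0]
b = A @ x_true                      # noiseless observations of a sparse signal
lam, rho = 0.1, 1.0

x = z = u = np.zeros(10)            # u is the scaled dual variable
AtA, Atb = A.T @ A, A.T @ b
for _ in range(200):
    # x-update: smooth quadratic subproblem, solved via a linear system.
    x = np.linalg.solve(AtA + rho * np.eye(10), Atb + rho * (z - u))
    # z-update: analytical soft thresholding (proximal operator of the l1 norm).
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    # Dual ascent step on the consensus constraint x = z.
    u = u + x - z

print(np.round(z, 2))
```

Each subproblem has a cheap closed-form or linear-algebraic solution, which is exactly why ADMM-style splittings are attractive for the high-order variational models described above.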