We propose two numerical algorithms in the fully nonconvex setting for the minimization of the sum of a smooth function and the composition of a nonsmooth function with a linear operator. The iterative schemes are formulated in the spirit of the proximal alternating direction method of multipliers (ADMM) and its linearized variant, respectively. The proximal terms are introduced via variable metrics, which allows us to derive new proximal splitting algorithms for nonconvex structured optimization problems as particular instances of the general schemes. Under mild conditions on the sequence of variable metrics, and assuming that a regularization of the associated augmented Lagrangian has the Kurdyka–Łojasiewicz property, we prove that the iterates converge to a Karush–Kuhn–Tucker point of the minimization problem. Assuming that the augmented Lagrangian has the Łojasiewicz property, we also derive convergence rates for both the augmented Lagrangian and the iterates.
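For illustration, the following is a minimal NumPy sketch of a linearized proximal ADMM iteration for a problem of the form $\min_x f(x) + g(Ax)$. The function name, the signature assumed for `prox_g`, and the scalar steps `tau` and `beta`, which stand in here for the paper's variable metrics, are assumptions of this sketch rather than the authors' exact scheme.

```python
import numpy as np

def linearized_proximal_admm(grad_f, prox_g, A, x0, beta=1.0, tau=None,
                             n_iter=500):
    """Sketch of a linearized proximal ADMM for min_x f(x) + g(Ax).

    grad_f : callable, gradient of the smooth term f
    prox_g : callable, prox_g(v, gamma) = argmin_z g(z) + ||z - v||^2 / (2*gamma)
    A      : linear operator, given as a NumPy array
    The scalar steps tau and beta are simple stand-ins for the paper's
    variable metrics, not the exact admissible choices.
    """
    m, n = A.shape
    x = x0.copy()
    z = A @ x
    y = np.zeros(m)                      # dual multiplier
    if tau is None:
        # crude surrogate for a valid proximal metric (Lipschitz-type bound)
        tau = beta * np.linalg.norm(A, 2) ** 2 + 1.0
    for _ in range(n_iter):
        # z-update: proximal step on g
        z = prox_g(A @ x + y / beta, 1.0 / beta)
        # x-update: gradient step on the linearized augmented Lagrangian
        x = x - (grad_f(x) + A.T @ (y + beta * (A @ x - z))) / tau
        # dual ascent step on the multiplier
        y = y + beta * (A @ x - z)
    return x, z, y
```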
We aim to factorize a completely positive matrix by using an optimization approach which consists in minimizing a nonconvex smooth function over a convex and compact set. To solve this problem, we propose a projected gradient algorithm with parameters that take into account the effects of relaxation and inertia. Both the projection and the gradient steps are simple in the sense that they have explicit formulas and do not require inner loops; furthermore, no expensive procedure for finding an appropriate starting point is needed. The convergence analysis, which relies on the Łojasiewicz inequality, shows that the whole sequence of generated iterates converges to a critical point of the objective function. A convergence rate, expressed in terms of the Łojasiewicz exponent of a regularization of the objective function, is also provided. Numerical experiments demonstrate the efficiency of the proposed method, in particular in comparison to other factorization algorithms, and emphasize the roles of the relaxation and inertial parameters.
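A minimal sketch of a projected gradient scheme with inertial and relaxation parameters, for problems of the form $\min_{x \in C} f(x)$ with $C$ convex and compact, might look as follows; the constants `alpha` and `rho` and the fixed step size are illustrative choices, not the paper's admissible parameter ranges.

```python
import numpy as np

def inertial_projected_gradient(grad_f, proj_C, x0, step, alpha=0.3, rho=1.0,
                                n_iter=1000):
    """Sketch of a projected gradient method with inertia and relaxation
    for min f(x) over a convex compact set C.

    grad_f : gradient of the smooth (possibly nonconvex) objective
    proj_C : Euclidean projection onto C (assumed to have a closed form)
    alpha  : inertial parameter; rho : relaxation parameter -- both are
             illustrative constants, not the paper's exact conditions.
    """
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n_iter):
        y = x + alpha * (x - x_prev)                # inertial extrapolation
        p = proj_C(y - step * grad_f(y))            # explicit projected gradient step
        x_prev, x = x, (1.0 - rho) * y + rho * p    # relaxation step
    return x
```

Purely as an illustration of this abstract scheme, for the completely positive factorization problem one could take, say, $f(X)=\|A-XX^{\top}\|_F^2$ and let `proj_C` clip the entries of $X$ to a box contained in the nonnegative orthant, so that both steps indeed admit explicit formulas.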
We propose a proximal algorithm for minimizing objective functions consisting of three summands: the composition of a nonsmooth function with a linear operator and a second nonsmooth function, each depending on its own block variable, together with a smooth function that couples the two block variables. The algorithm is a full splitting method, which means that the nonsmooth functions are processed via their proximal operators, the smooth function via gradient steps, and the linear operator via matrix-vector multiplications. We provide sufficient conditions for the boundedness of the generated sequence and prove that any cluster point of the latter is a KKT point of the minimization problem. Under the Kurdyka–Łojasiewicz property we show global convergence and derive convergence rates for the iterates in terms of the Łojasiewicz exponent.
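In the spirit of the scheme just described, one full-splitting iteration for a model problem $\min_{x,y}\, h(Ax) + g(y) + \varphi(x,y)$ could be sketched as below, with the composition $h \circ A$ handled through the Moreau decomposition so that only the prox of $h$ and matrix-vector products with $A$ are needed; the update order, the step sizes, and all names are assumptions of this sketch, not the paper's exact method.

```python
import numpy as np

def full_splitting_step(x, y, z, A, prox_h, prox_g, grad_phi_x, grad_phi_y,
                        tau=0.1, sigma=0.1, mu=0.1):
    """One sketch iteration of a full splitting scheme for
    min_{x,y} h(Ax) + g(y) + phi(x, y).

    h is accessed only through prox_h (applied to its Fenchel conjugate via
    the Moreau decomposition), g through prox_g, phi through its partial
    gradients, and A through matrix-vector products -- no inner loops.
    prox_h(v, gamma) = argmin_z h(z) + ||z - v||^2 / (2*gamma), likewise prox_g.
    The step sizes tau, sigma, mu are illustrative placeholders.
    """
    # dual update for the composition h(Ax), via Moreau's identity:
    # prox_{sigma * h^*}(v) = v - sigma * prox_{h / sigma}(v / sigma)
    v = z + sigma * (A @ x)
    z_new = v - sigma * prox_h(v / sigma, 1.0 / sigma)
    # gradient step on the first primal block
    x_new = x - tau * (A.T @ z_new + grad_phi_x(x, y))
    # proximal-gradient step on the second primal block
    y_new = prox_g(y - mu * grad_phi_y(x_new, y), mu)
    return x_new, y_new, z_new
```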
This work aims to minimize a continuously differentiable convex function with Lipschitz continuous gradient under linear equality constraints. The proposed inertial algorithm results from the discretization of the second-order primal-dual dynamical system with asymptotically vanishing damping term studied by Boţ and Nguyen (J. Differential Equations 303:369–406, 2021), and it is formulated in terms of the augmented Lagrangian associated with the minimization problem. The general setting we consider for the inertial parameters covers the three classical rules of Nesterov, Chambolle–Dossal and Attouch–Cabot used in the literature to formulate fast gradient methods. For these rules, we obtain in the convex regime convergence rates of order $\mathcal{O}(1/k^{2})$ for the primal-dual gap, the feasibility measure, and the objective function value. In addition, we prove that the generated sequence of primal-dual iterates converges to a primal-dual solution in a general setting that covers the latter two rules. This is the first result which provides the convergence of the sequence of iterates generated by a fast algorithm for linearly constrained convex optimization problems without additional assumptions such as strong convexity. We also emphasize that all convergence results of this paper are consistent with those obtained by Boţ and Nguyen (J. Differential Equations 303:369–406, 2021) in the continuous setting.
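As a hedged illustration of such a scheme, the following NumPy sketch performs inertial primal-dual steps on the augmented Lagrangian $\mathcal{L}_\beta(x,\lambda) = f(x) + \langle \lambda, Ax-b\rangle + \tfrac{\beta}{2}\|Ax-b\|^2$ of $\min f(x)$ subject to $Ax=b$; the extrapolation rule $\gamma_k = k/(k+\alpha)$ and the fixed step sizes are illustrative stand-ins for the Nesterov, Chambolle–Dossal and Attouch–Cabot parameter regimes analysed in the paper, not the exact method.

```python
import numpy as np

def inertial_primal_dual(grad_f, A, b, x0, beta=1.0, step=0.1, alpha=3.0,
                         n_iter=1000):
    """Sketch of an inertial primal-dual augmented Lagrangian scheme for
    min f(x) s.t. Ax = b, with a Nesterov-type inertial sequence.

    The rule gamma_k = k / (k + alpha) and the constant steps are
    illustrative placeholders for the paper's inertial parameter setting.
    """
    x_prev = x = x0.copy()
    lam_prev = lam = np.zeros(A.shape[0])
    for k in range(1, n_iter + 1):
        gamma = k / (k + alpha)                  # vanishing-damping inertia
        xe = x + gamma * (x - x_prev)            # primal extrapolation
        le = lam + gamma * (lam - lam_prev)      # dual extrapolation
        x_prev, lam_prev = x, lam
        # gradient step on the augmented Lagrangian in x
        x = xe - step * (grad_f(xe) + A.T @ (le + beta * (A @ xe - b)))
        # ascent step on the multiplier
        lam = le + step * beta * (A @ x - b)
    return x, lam
```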