The forward-backward splitting method (FBS) for minimizing a nonsmooth composite function can be interpreted as a (variable-metric) gradient method applied to a continuously differentiable function which we call the forward-backward envelope (FBE). This makes it possible to extend algorithms for smooth unconstrained optimization to nonsmooth (possibly constrained) problems. Since the FBE and its gradient can be computed simply by evaluating forward-backward steps, the resulting methods rely on the very same black-box oracle as FBS. We propose an algorithmic scheme that enjoys the same global convergence properties as FBS when the problem is convex, or when the objective function possesses the Kurdyka-Łojasiewicz property at its critical points. Moreover, when quasi-Newton directions are used, the proposed method achieves superlinear convergence provided that the usual second-order sufficiency conditions on the FBE hold at the limit point of the generated sequence. Such conditions translate into milder requirements on the original function involving generalized second-order differentiability. We show that BFGS fits our framework and that its limited-memory variant L-BFGS is well suited to large-scale problems, greatly outperforming FBS and its accelerated version in practice. The analysis of superlinear convergence is based on an extension of the Dennis-Moré theorem to the proposed algorithmic scheme.
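To make the objects in the abstract concrete, here is a minimal sketch of the forward-backward step and the FBE evaluation on a toy lasso problem, min 0.5*||Ax - b||^2 + lam*||x||_1. The data, step size, and all function names are illustrative choices, not part of the paper; the key point is that the FBE is computable from the same gradient and proximal evaluations as one FBS step.

```python
import numpy as np

# Hypothetical toy instance: f(x) = 0.5*||Ax - b||^2 (smooth), g(x) = lam*||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
b = rng.standard_normal(40)
lam = 0.1
gamma = 0.9 / np.linalg.norm(A, 2) ** 2   # gamma < 1/L, L = Lipschitz const of grad f

def grad_f(x):                 # gradient of the smooth term
    return A.T @ (A @ x - b)

def prox_g(x, t):              # proximity operator of g (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

def fb_step(x):                # one forward-backward step T_gamma(x)
    return prox_g(x - gamma * grad_f(x), gamma)

def fbe(x):
    # Forward-backward envelope, computable from a single FB step:
    # FBE(x) = f(x) + g(xbar) + <grad f(x), xbar - x> + ||xbar - x||^2 / (2*gamma)
    xbar = fb_step(x)
    f_val = 0.5 * np.linalg.norm(A @ x - b) ** 2
    g_val = lam * np.linalg.norm(xbar, 1)
    return f_val + g_val + grad_f(x) @ (xbar - x) + np.linalg.norm(xbar - x) ** 2 / (2 * gamma)

x = np.zeros(10)
for _ in range(500):           # plain FBS; the FBE decreases monotonically along it
    x = fb_step(x)
```

Note that `fbe` costs one extra gradient and one proximal evaluation, i.e. exactly the FBS oracle, which is what allows linesearch methods over the FBE to reuse the FBS black box.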
We propose ZeroFPR, a nonmonotone linesearch algorithm for minimizing the sum of two nonconvex functions, one of which is smooth and the other possibly nonsmooth. ZeroFPR is the first algorithm that, despite being suited to fully nonconvex problems and requiring only the black-box oracle of forward-backward splitting (FBS), namely evaluations of the gradient of the smooth term and of the proximity operator of the nonsmooth one, achieves superlinear convergence rates under mild assumptions at the limit point when the linesearch directions satisfy a Dennis-Moré condition; we show that this is the case for quasi-Newton directions. Our approach is based on the forward-backward envelope (FBE), an exact and strictly continuous penalty function for the original cost. Extending previous results, we show that, despite being nonsmooth for fully nonconvex problems, the FBE still enjoys favorable first- and second-order properties that are key to the convergence results for ZeroFPR. Our theoretical results are backed up by promising numerical simulations: on large-scale problems, by computing linesearch directions with limited-memory quasi-Newton updates, our algorithm greatly outperforms FBS and its accelerated variant (AFBS).
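The nonmonotone acceptance rule mentioned above can be illustrated in isolation. The sketch below shows a Grippo-style nonmonotone backtracking linesearch, where a trial step is accepted when the merit value drops below the maximum over the last few iterates rather than below the latest one; the Rosenbrock test function, the plain gradient direction, and all constants are illustrative stand-ins, not the ZeroFPR merit function or directions.

```python
import numpy as np

def rosen(x):
    # Rosenbrock function, a standard nonconvex test problem (hypothetical example).
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosen_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def nonmonotone_gd(x, iters=2000, memory=5, c=1e-4):
    hist = [rosen(x)]                  # recent merit values
    for _ in range(iters):
        g = rosen_grad(x)
        d = -g                         # descent direction (plain gradient here)
        ref = max(hist[-memory:])      # nonmonotone reference: max of last `memory` values
        t = 1e-2
        # Armijo-type test against the nonmonotone reference value.
        while rosen(x + t * d) > ref + c * t * (g @ d):
            t *= 0.5                   # backtrack
        x = x + t * d
        hist.append(rosen(x))
    return x

x = nonmonotone_gd(np.array([-1.2, 1.0]))
```

Allowing temporary increases relative to the most recent iterate is what lets full quasi-Newton steps be accepted more often, which in turn is needed for the fast local rates.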
Although originally designed and analyzed for convex problems, the alternating direction method of multipliers (ADMM) and its close relatives, Douglas-Rachford splitting (DRS) and Peaceman-Rachford splitting (PRS), have been observed to perform remarkably well when applied to certain classes of structured nonconvex optimization problems. However, partial global convergence results in the nonconvex setting have only recently emerged. In this paper we show how the Douglas-Rachford envelope (DRE), introduced in 2014, can be employed to unify and considerably simplify the theory for devising global convergence guarantees for ADMM, DRS and PRS applied to nonconvex problems, under less restrictive conditions, larger prox-stepsizes and over-relaxation parameters than previously known. In fact, our bounds are tight whenever the over-relaxation parameter ranges in (0, 2]. The analysis of ADMM uses a universal primal equivalence with DRS that generalizes the known duality of the algorithms.

The case λ = 1 corresponds to classical DRS, whereas for λ = 2 the scheme is also known as Peaceman-Rachford splitting (PRS). If s is a fixed point of the DR iteration, that is, such that s⁺ = s, then it is easily seen that u satisfies the first-order necessary condition for optimality in problem (1.1).
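The relaxed Douglas-Rachford iteration referenced above, with λ = 1 giving classical DRS and λ = 2 giving PRS, can be sketched on a toy convex splitting; the problem data and step size below are illustrative, not from the paper.

```python
import numpy as np

# Hedged sketch of relaxed DRS for  min phi1(x) + phi2(x)  with the toy choice
# phi1(x) = 0.5*||x - a||^2 and phi2(x) = mu*||x||_1 (hypothetical instance).
a = np.array([3.0, -0.2, 1.5, 0.05])
mu, gamma, lam = 1.0, 1.0, 1.0        # lam = 1: DRS; lam = 2 would be PRS

def prox_phi1(s, t):                  # prox of phi1: closed form for the quadratic
    return (s + t * a) / (1.0 + t)

def prox_phi2(s, t):                  # prox of phi2: soft-thresholding
    return np.sign(s) * np.maximum(np.abs(s) - t * mu, 0.0)

s = np.zeros_like(a)
for _ in range(200):
    u = prox_phi1(s, gamma)           # first prox step
    v = prox_phi2(2 * u - s, gamma)   # prox at the reflected point
    s = s + lam * (v - u)             # relaxed update; s+ = s at a fixed point

x_star = prox_phi1(s, gamma)          # recover the candidate solution u from s
```

At a fixed point s⁺ = s we have v = u, and u then satisfies the first-order optimality condition of the sum, matching the remark in the text.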
We present PANOC, a new algorithm for solving optimal control problems arising in nonlinear model predictive control (NMPC). A common approach to problems of this type is sequential quadratic programming (SQP), which requires the solution of a quadratic program at every iteration and, consequently, inner iterative procedures. As a result, when the problem is ill-conditioned or the prediction horizon is large, each outer iteration becomes computationally very expensive. We propose a line-search algorithm that combines forward-backward (FB) iterations and Newton-type steps over the recently introduced forward-backward envelope (FBE), a continuous, real-valued, exact merit function for the original problem. The curvature information of Newton-type methods enables asymptotic superlinear rates under mild assumptions at the limit point, and the proposed algorithm relies only on very simple operations: access to first-order information of the cost and dynamics, and low-cost direct linear algebra. No inner iterative procedure or Hessian evaluation is required, making our approach computationally simpler than SQP methods. The low-memory requirements and simple implementation make our method particularly suited to embedded NMPC applications.
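A PANOC-style iteration can be sketched as follows: each step blends the plain FB step with a quasi-Newton direction on the fixed-point residual, backtracking over the blending parameter until the FBE decreases sufficiently. The box-constrained quadratic, the memory-1 quasi-Newton update, and the specific decrease constant below are illustrative simplifications, not the exact linesearch rule of the paper.

```python
import numpy as np

# Hypothetical toy problem:  min 0.5*x'Qx - q'x  subject to  0 <= x <= 1.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
Q = M @ M.T + 0.5 * np.eye(8)              # positive definite Hessian of f
qvec = rng.standard_normal(8)
gamma = 0.9 / np.linalg.norm(Q, 2)         # gamma < 1/L

f = lambda x: 0.5 * x @ Q @ x - qvec @ x
grad_f = lambda x: Q @ x - qvec
prox_g = lambda x: np.clip(x, 0.0, 1.0)    # projection onto the box (g = indicator)
T = lambda x: prox_g(x - gamma * grad_f(x))  # forward-backward step

def fbe(x):
    # FBE for indicator g: g(T(x)) = 0 since T(x) is feasible.
    xbar = T(x)
    return f(x) + grad_f(x) @ (xbar - x) + np.linalg.norm(xbar - x) ** 2 / (2 * gamma)

x = np.full(8, 0.5)
x_prev = r_prev = None
for _ in range(100):
    r = x - T(x)                           # fixed-point residual (scaled by gamma)
    if np.linalg.norm(r) < 1e-10:
        break
    d = -r                                 # fallback: plain FB direction
    if r_prev is not None:                 # memory-1 L-BFGS two-loop on the residual
        s, y = x - x_prev, r - r_prev
        sy = s @ y
        if sy > 1e-12:
            rho = 1.0 / sy
            alpha = rho * (s @ r)
            qv = r - alpha * y
            z = (sy / (y @ y)) * qv
            z = z + s * (alpha - rho * (y @ z))
            d = -z
    x_prev, r_prev = x, r
    tau, phi0 = 1.0, fbe(x)
    while True:                            # backtrack from quasi-Newton toward FB step
        x_new = x + (1 - tau) * (-r) + tau * d
        if fbe(x_new) <= phi0 - 0.01 * np.linalg.norm(r) ** 2 / gamma or tau < 1e-8:
            break
        tau *= 0.5
    x = x_new
```

The point of the construction is that tau -> 0 recovers the pure FB step, which always satisfies the decrease test, so global convergence is inherited from FBS while full quasi-Newton steps (tau = 1) are taken whenever they pass the test.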