Abstract: Splitting schemes are a class of powerful algorithms that solve complicated monotone inclusions and convex optimization problems built from many simpler pieces. They give rise to algorithms in which the simple pieces of the decomposition are processed individually, leading to easily implementable and highly parallelizable methods that often achieve nearly state-of-the-art performance. In the first part of this paper, we analyze the convergence rates of several general splitting algorithms and pr…
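To make the splitting idea in this abstract concrete, here is a minimal forward-backward (proximal gradient) sketch for minimizing f(x) + g(x), where the smooth piece f is handled by a gradient step and the simple piece g by its proximal map. The quadratic f, the l1 choice of g, the step size, and all problem data are illustrative assumptions, not the specific schemes analyzed in the paper.

```python
import numpy as np

# Forward-backward splitting sketch for  minimize f(x) + g(x):
# each simple piece of the decomposition is processed individually,
# f via a gradient ("forward") step and g via a proximal ("backward") step.

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
mu = 0.1                                   # l1 weight (assumed)

def grad_f(x):                             # f(x) = 0.5*||Ax - b||^2
    return A.T @ (A @ x - b)

def prox_g(x, t):                          # g(x) = mu*||x||_1  ->  soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - t * mu, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, with L the Lipschitz constant of grad f
x = np.zeros(20)
for k in range(500):
    x = prox_g(x - step * grad_f(x), step)

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + mu * np.abs(x).sum())
```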
“…In this case, ADMM is known to converge under very mild conditions; see [7] and [8]. Under the same conditions, several recent works [20], [21], [22] have shown that the ADMM converges with the sublinear rate of O( …”
Abstract. The alternating direction method of multipliers (ADMM) is widely used to solve large-scale linearly constrained optimization problems, convex or nonconvex, in many engineering fields. However, there is a general lack of theoretical understanding of the algorithm when the objective function is nonconvex. In this paper we analyze the convergence of the ADMM for solving certain nonconvex consensus and sharing problems. We show that the classical ADMM converges to the set of stationary solutions, provided that the penalty parameter in the augmented Lagrangian is chosen to be sufficiently large. For the sharing problems, we show that the ADMM is convergent regardless of the number of variable blocks. Our analysis does not impose any assumptions on the iterates generated by the algorithm, and is broadly applicable to many ADMM variants involving proximal update rules and various flexible block selection rules.
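A minimal sketch of the consensus ADMM iteration this abstract refers to, for minimize sum_i f_i(x_i) subject to x_i = z. The quadratic local terms (so every update is closed form), the penalty value, and the problem size are assumptions for illustration; for nonconvex f_i the abstract's requirement that the penalty be sufficiently large would apply.

```python
import numpy as np

# Consensus ADMM sketch: minimize sum_i f_i(x_i)  s.t.  x_i = z,
# with assumed quadratics f_i(x) = 0.5*a_i*(x - b_i)^2.

rng = np.random.default_rng(1)
m = 5                                   # number of blocks (assumed)
a = rng.uniform(0.5, 2.0, m)            # local curvatures
b = rng.standard_normal(m)              # local targets
rho = 1.0                               # penalty parameter (assumed)

x = np.zeros(m)                         # local variables x_i
z = 0.0                                 # consensus variable
u = np.zeros(m)                         # scaled dual variables

for k in range(200):
    # x_i-updates: argmin f_i(x_i) + (rho/2)(x_i - z + u_i)^2, in parallel
    x = (a * b + rho * (z - u)) / (a + rho)
    # z-update: average of x_i + u_i
    z = np.mean(x + u)
    # dual updates
    u = u + (x - z)

print("consensus value:", z, "max residual:", np.max(np.abs(x - z)))
```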
“…Moreover, it was shown in [26] that the sequence ‖x^t − x^*‖ is nonincreasing and that ‖Sx^t − x^t‖² = o(1/t), assuming only that the sequence τ_t = ρ_t(1 − ρ_t) is bounded away from 0. Conveniently, compositions of averaged operators are easily seen to be averaged.…”
Section: Forward and Backward Steps
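The rate quoted above concerns the Krasnosel'skii-Mann iteration x^{t+1} = (1 − ρ_t) x^t + ρ_t S x^t for a nonexpansive operator S. A minimal sketch follows; the choice of S as a box projection composed with an averaged gradient step (a composition of averaged operators, as the snippet notes) and all data are assumptions for illustration.

```python
import numpy as np

# Krasnosel'skii-Mann iteration  x_{t+1} = (1 - rho) x_t + rho * S(x_t)
# for a nonexpansive (averaged) operator S.

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
alpha = 1.0 / L                         # averaged gradient step

def S(x):
    y = x - alpha * (A.T @ (A @ x - b)) # gradient step (averaged operator)
    return np.clip(y, -1.0, 1.0)        # projection onto the box [-1, 1]^n

x = np.zeros(10)
rho = 0.5                               # keeps tau_t = rho_t*(1 - rho_t) away from 0
for t in range(2000):
    x = (1.0 - rho) * x + rho * S(x)

# the quote: the fixed-point residual ||S x^t - x^t||^2 decays as o(1/t)
print("fixed-point residual:", np.linalg.norm(S(x) - x))
```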
“…On the other hand, most users of (DRA) have fixed γ = 1 to focus on the estimation of the scaling parameter λ as seen below. More insight on relaxed versions of (FB), (DRA) and (PRA) and their theoretical rates of convergence may be found in the recent study by Davis and Yin and companion papers [26].…”
Section: Algorithmic Enhancements
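The snippet above refers to the scaling parameter λ inside the proximal maps of (DRA) and the relaxation parameter γ (with γ = 1 the unrelaxed case most users fix). Here is a minimal sketch of relaxed Douglas-Rachford splitting for minimize f(x) + g(x); the quadratic f, the l1 term g, and the values of λ and γ are illustrative assumptions.

```python
import numpy as np

# Relaxed Douglas-Rachford splitting:
#     x  = prox_{lam*f}(z)
#     y  = prox_{lam*g}(2x - z)
#     z += gamma * (y - x)
# lam scales the proximal maps; gamma in (0, 2) is the relaxation (gamma = 1 unrelaxed).

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
mu = 0.1
lam, gamma = 1.0, 1.5

# prox of f(x) = 0.5*||Ax - b||^2 : solve (I + lam*A^T A) x = v + lam*A^T b
M = np.eye(20) + lam * (A.T @ A)
Atb = A.T @ b

def prox_f(v):
    return np.linalg.solve(M, v + lam * Atb)

def prox_g(v):                           # g(x) = mu*||x||_1
    return np.sign(v) * np.maximum(np.abs(v) - lam * mu, 0.0)

z = np.zeros(20)
for k in range(300):
    x = prox_f(z)
    y = prox_g(2 * x - z)
    z = z + gamma * (y - x)

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + mu * np.abs(x).sum())
```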
“…Some results may look rather frustrating, as commented in a recent study [26]: the global convergence of the Douglas-Rachford splitting scheme can be as fast as the proximal iteration in the ergodic sense and as slow as a subgradient method in the nonergodic sense.…”
To cite this version: Philippe Mahey, Arnaud Lenoir. A survey on operator splitting and decomposition of convex programs. RAIRO - Operations Research, EDP Sciences, 2017, 51 (1).
Abstract: Many structured convex minimization problems can be modeled as the search for a zero of the sum of two monotone operators. Operator splitting methods have been designed to simultaneously decompose and regularize these kinds of models. We review these models and the classical splitting methods, and we focus on the numerical sensitivity of these algorithms with respect to the scaling parameters that drive the regularizing terms, in order to accelerate convergence rates for different classes of models.
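For reference, the model class and the resolvent that the splitting methods in this survey iterate can be written out as follows; the notation (A, B for the operators, λ for the scaling parameter, J for the resolvent) is the standard one and is assumed here rather than taken from the survey itself.

```latex
% Monotone inclusion model: find x such that 0 lies in A(x) + B(x),
% with A, B maximal monotone (e.g. A = \partial f, B = \partial g for convex f, g).
% The scaling parameter \lambda enters through the resolvent, which reduces
% to the proximal map when the operator is a subdifferential.
\[
  0 \in A(x) + B(x), \qquad
  J_{\lambda A} := (I + \lambda A)^{-1}, \qquad
  J_{\lambda \partial f} = \operatorname{prox}_{\lambda f}.
\]
```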
“…However, it was shown in [6] that the scheme (1.6) is not necessarily convergent. The convergence rates of ADMM and its extensions are analysed in [9,28,30,33,32].…”
Abstract. We consider a separable convex minimization model whose variables are coupled by linear constraints and subject to positive orthant constraints, and whose objective function is the sum of m functions without coupled variables. It is well recognized that when the augmented Lagrangian method (ALM) is applied to some concrete applications, the resulting subproblem at each iteration should be decomposed to generate solvable subproblems. When the Gauss-Seidel decomposition is implemented, this idea has inspired the alternating direction method of multipliers (for m = 2) and its variants (for m ≥ 3). When the Jacobian decomposition is considered, it has been shown that the ALM with Jacobian decomposition in its subproblem is not necessarily convergent even when m = 2, and it was suggested to regularize the decomposed subproblems with quadratic proximal terms to ensure convergence. In this paper, we focus on the multiple-block case with m ≥ 3. We consider implementing the full Jacobian decomposition in the ALM's subproblems and using logarithmic-quadratic proximal (LQP) terms to regularize the decomposed subproblems. The resulting subproblems are all unconstrained minimization problems because the positive orthant constraints are all inactive, and they are fully eligible for parallel computation. Accordingly, the ALM with full Jacobian decomposition and LQP regularization is proposed. We also consider its inexact version, which allows the subproblems to be solved inexactly. For both the exact and inexact versions, we comprehensively discuss their convergence, including global convergence, worst-case convergence rates measured by the iteration complexity in both the ergodic and nonergodic senses, and linear convergence rates under additional assumptions. Some preliminary numerical results are reported to demonstrate the efficiency of the ALM with full Jacobian decomposition and LQP regularization.
2010 Mathematics Subject Classification: 90C25, 90C33, 65K05.
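A rough sketch of the fully parallel (Jacobian) decomposition of the ALM subproblem described in this abstract. To keep the example short, several assumptions are made: the block objectives are simple quadratics, a plain quadratic proximal term is substituted for the paper's LQP regularization, the positive orthant constraints are dropped, and the penalty and proximal coefficients are illustrative (the paper gives precise conditions).

```python
import numpy as np

# ALM with full Jacobian decomposition (sketch):
#     minimize sum_i theta_i(x_i)   s.t.  sum_i A_i x_i = c,
# all x_i-subproblems solved in parallel against the previous iterate,
# each regularized by a quadratic proximal term (substituted for LQP here).

rng = np.random.default_rng(4)
m, n, p = 3, 4, 5                          # blocks, block size, constraint rows (assumed)
A = [rng.standard_normal((p, n)) for _ in range(m)]
d = [rng.standard_normal(n) for _ in range(m)]   # theta_i(x) = 0.5*||x - d_i||^2
c = rng.standard_normal(p)
beta = 1.0                                 # penalty parameter (assumed)
# generous proximal coefficient; the paper states the exact requirement
tau = beta * m * max(np.linalg.norm(Ai, 2) ** 2 for Ai in A)

x = [np.zeros(n) for _ in range(m)]
lam = np.zeros(p)

for k in range(300):
    Ax = sum(A[i] @ x[i] for i in range(m))
    x_new = []
    for i in range(m):                     # fully parallel (Jacobian) updates
        r = Ax - A[i] @ x[i] - c           # contribution of the other blocks
        # argmin 0.5||x_i - d_i||^2 - lam^T A_i x_i
        #        + (beta/2)||A_i x_i + r||^2 + (tau/2)||x_i - x[i]||^2
        H = (1 + tau) * np.eye(n) + beta * A[i].T @ A[i]
        g = d[i] + A[i].T @ lam - beta * A[i].T @ r + tau * x[i]
        x_new.append(np.linalg.solve(H, g))
    x = x_new
    lam = lam - beta * (sum(A[i] @ x[i] for i in range(m)) - c)

print("constraint violation:",
      np.linalg.norm(sum(A[i] @ x[i] for i in range(m)) - c))
```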