2020 · DOI: 10.1137/18m1163993

Douglas–Rachford Splitting and ADMM for Nonconvex Optimization: Tight Convergence Results

Abstract: Although originally designed and analyzed for convex problems, the alternating direction method of multipliers (ADMM) and its close relatives, Douglas–Rachford splitting (DRS) and Peaceman–Rachford splitting (PRS), have been observed to perform remarkably well when applied to certain classes of structured nonconvex optimization problems. However, partial global convergence results in the nonconvex setting have only recently emerged. In this paper we show how the Douglas–Rachford envelope (DRE), introduced in 2…
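
For context, DRS applied to a structured problem minimize φ(x) = f(x) + g(x) alternates two proximal steps with an averaging update. A standard statement of the iteration (paraphrased here, not quoted from the paper, whose notation may differ; step size γ > 0 and relaxation λ ∈ (0, 2], with λ = 2 recovering PRS) is:

\begin{aligned}
u^k &= \operatorname{prox}_{\gamma f}(s^k), \\
v^k &= \operatorname{prox}_{\gamma g}(2u^k - s^k), \\
s^{k+1} &= s^k + \lambda\,(v^k - u^k).
\end{aligned}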

Cited by 83 publications (115 citation statements) · References 34 publications

“…We solve the proximal upper-bound problem through use of a distributed optimization control algorithm for 4C (Algorithm 3) and CVXPY [55] (a Python-embedded modeling language for solving convex optimization problems). Furthermore, we compare the solution of our distributed optimization control algorithm with the solution computed via Douglas–Rachford splitting [56] without applying a rounding technique. Thus, our formulated problem in (37) is decomposable.…”
Section: Cache Hit Ratio and Bandwidth-Saving (mentioning)
confidence: 99%
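
As a side note on the tooling named in this snippet, here is a minimal CVXPY usage sketch. The problem, data, and dimensions are hypothetical stand-ins (the citing paper's problem (37) is not reproduced here); only the CVXPY calls themselves are real API.

import cvxpy as cp
import numpy as np

# Hypothetical toy data; stands in for the citing paper's actual model.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)

x = cp.Variable(10)
objective = cp.Minimize(cp.sum_squares(A @ x - b))
constraints = [x >= 0, x <= 1]  # box relaxation of a 0/1 decision vector
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.status, prob.value)
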
“…Li and Pong also provided detailed results on the convergence rates. Andreas Themelis and Panos Patrinos have since published a follow-up article [137] in which they relax some of the restrictions on the step size η and discuss the connections with ADMM.…”
Section: Nonconvex Minimization (mentioning)
confidence: 99%
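
For reference, the ADMM connection alluded to here is the classical one: ADMM for minimize f(x) + g(z) subject to Ax + Bz = b can be viewed as DRS applied to a dual (or suitably reformulated) problem. The standard scaled-form ADMM updates, paraphrased rather than quoted from [137], read:

\begin{aligned}
x^{k+1} &\in \operatorname*{arg\,min}_x\ f(x) + \tfrac{\beta}{2}\bigl\|Ax + Bz^k - b + w^k\bigr\|^2, \\
z^{k+1} &\in \operatorname*{arg\,min}_z\ g(z) + \tfrac{\beta}{2}\bigl\|Ax^{k+1} + Bz - b + w^k\bigr\|^2, \\
w^{k+1} &= w^k + Ax^{k+1} + Bz^{k+1} - b,
\end{aligned}

with penalty β > 0 and scaled multiplier w^k = λ^k/β; arg min is written with ∈ since in the nonconvex setting the minimizers need not be unique.
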
“…Recently, FBS and DRS have been found to converge numerically for certain nonconvex problems, for example, FBS for image restoration [22], dictionary learning, and matrix decomposition [25], and DRS for nonconvex feasibility problems [14], matrix completion [1], and phase retrieval [7]. Theoretically, their iterates have been shown to converge to stationary points in some nonconvex settings [2,14,26,11]. In particular, any bounded sequence produced by FBS converges to a stationary point when the objective satisfies the KL property [2]; by using the Douglas–Rachford envelope (DRE), DRS iterates are shown to converge to a stationary point when one of the two functions is Lipschitz differentiable, both of them are semi-algebraic and bounded below, and one of them is coercive [14,26]; in [11], when one function is strongly convex and the other is weakly convex, and their sum is strongly convex, DRS iterates are shown to be Fejér monotone with respect to the set of fixed points of the DRS operator, and thus convergent.…”
Section: Introduction (mentioning)
confidence: 99%
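
To make the DRS iteration in this snippet concrete, here is a minimal numpy sketch (not taken from any of the cited papers) on a toy nonconvex problem, minimize (1/2)‖Ax − b‖² + μ‖x‖₀. The smooth term's prox is a linear solve and the ℓ0 term's prox is hard thresholding; the data, step size γ, and relaxation λ are illustrative assumptions only, and no convergence guarantee from the works above is claimed for these particular settings.

import numpy as np

# Toy nonconvex problem: minimize 0.5*||Ax - b||^2 + mu*||x||_0 via DRS.
rng = np.random.default_rng(0)
m, n = 30, 60
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
mu, gamma, lam = 0.5, 0.1, 1.0  # regularization, step size, relaxation (assumed)

# prox of f(x) = 0.5*||Ax - b||^2 solves (I + gamma*A^T A) u = s + gamma*A^T b.
M = np.eye(n) + gamma * (A.T @ A)
Atb = A.T @ b

def prox_f(s):
    return np.linalg.solve(M, s + gamma * Atb)

def prox_g(w):
    # prox of gamma*mu*||.||_0 is hard thresholding at sqrt(2*gamma*mu).
    out = w.copy()
    out[np.abs(w) <= np.sqrt(2.0 * gamma * mu)] = 0.0
    return out

s = np.zeros(n)
for _ in range(500):
    u = prox_f(s)              # first prox step
    v = prox_g(2.0 * u - s)    # second prox step on the reflected point
    s = s + lam * (v - u)      # (relaxed) averaging update

x = prox_g(2.0 * prox_f(s) - s)  # sparse primal candidate
print("nonzeros:", np.count_nonzero(x),
      "residual:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
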
“…Though unlikely, it is still possible that the limit of a convergent sequence is a saddle point instead of a local minimum (except when all stationary points are local minima, which is the case studied in [11]).…”
Section: Introduction (mentioning)
confidence: 99%