“…We then recover the scaled version of PIM proposed by Spingarn in [13]. It is mentioned in [6] that the performance of PIM is very sensitive to variations of the scaling factor; we give an explanation of this fact, allowing its adjustment to an optimal value in the strongly monotone case.…”
Section: Then (x_{k+1}, y_{k+1}) = ((x'_k)_A, (y'_k)_B)
We present an algorithm to solve: Find (x, y) ∈ A × A⊥ such that y ∈ Tx, where A is a subspace and T is a maximal monotone operator. The algorithm is based on the proximal decomposition on the graph of a monotone operator, and we show how to recover Spingarn's decomposition method. We give a proof of convergence that does not use the concept of partial inverse and show how to choose a scaling factor to accelerate the convergence in the strongly monotone case. Numerical results performed on quadratic problems confirm the robust behaviour of the algorithm.
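The two-step structure described above — a proximal step on the graph of T followed by a projection onto A × A⊥ — can be sketched in a few lines. The instance below is our own illustration, not the authors' implementation: we take an affine strongly monotone T(x) = Mx + q on R² with A = span{e1}, and the matrix M, vector q, and scaling factor lam are all assumptions chosen for concreteness.

```python
# A sketch of proximal decomposition on the graph of a monotone operator,
# for the affine strongly monotone case T(x) = M x + q with M symmetric
# positive definite, and A = span{e1} in R^2 (so A-perp = span{e2}).
# M, q, lam, and the starting point are illustrative choices, not the paper's.

def solve2x2(a, b, c, d, r1, r2):
    """Solve [[a, b], [c, d]] z = (r1, r2) by Cramer's rule."""
    det = a * d - b * c
    return ((d * r1 - b * r2) / det, (a * r2 - c * r1) / det)

def proximal_decomposition(M, q, lam=1.0, iters=200):
    x = [0.0, 0.0]  # iterate kept in A
    y = [0.0, 0.0]  # iterate kept in A-perp
    for _ in range(iters):
        # Proximal step on the graph of T: find (x', y') with
        # y' = M x' + q and x' + lam*y' = x + lam*y,
        # i.e. (I + lam*M) x' = x + lam*(y - q).
        r1 = x[0] + lam * (y[0] - q[0])
        r2 = x[1] + lam * (y[1] - q[1])
        xp = solve2x2(1 + lam * M[0][0], lam * M[0][1],
                      lam * M[1][0], 1 + lam * M[1][1], r1, r2)
        yp = (M[0][0] * xp[0] + M[0][1] * xp[1] + q[0],
              M[1][0] * xp[0] + M[1][1] * xp[1] + q[1])
        # Projection step: x_{k+1} = (x')_A, y_{k+1} = (y')_{A-perp}.
        x = [xp[0], 0.0]
        y = [0.0, yp[1]]
    return x, y

M = [[2.0, 1.0], [1.0, 2.0]]   # SPD, so T is strongly monotone
q = [3.0, 1.0]
x, y = proximal_decomposition(M, q)
# The iterates should approach the pair x* = (-1.5, 0), y* = (0, -0.5),
# which satisfies x* in A, y* in A-perp, and y* = M x* + q.
```

On this small instance the iteration with lam = 1 contracts geometrically toward the unique solution pair, consistent with the linear rate claimed in the strongly monotone case.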
“…Besides these applications, the report of Bensoussan et al. [6] and, more recently, the textbook by Bertsekas and Tsitsiklis [8] have largely contributed to disseminating these techniques to the areas of Mathematical Programming and Operations Research, where decomposition techniques have been very popular since the sixties. Among many different areas of application, we can cite Multicommodity Flow problems with convex arc costs ([61], [37]), Stochastic Programming (adapting (DRA) to two-stage stochastic optimization with recourse leads to the Progressive Hedging method of Rockafellar and Wets [74]), and Fermat-Weber problems (the Partial Inverse of Spingarn was applied to a polyhedral operator splitting model in [49]). More recently, new models have received a lot of interest in the areas of Image Reconstruction and Signal Processing ([14, 24]), with similar models in Classification problems [43, 10].…”
Section: Convergence Results and Complexity Issues
To cite this version: Philippe Mahey, Arnaud Lenoir. A survey on operator splitting and decomposition of convex programs. RAIRO - Operations Research, EDP Sciences, 2017, 51 (1). Abstract: Many structured convex minimization problems can be modeled as the search for a zero of the sum of two monotone operators. Operator splitting methods have been designed to decompose and regularize these kinds of models at the same time. We review these models and the classical splitting methods. We focus on the numerical sensitivity of these algorithms with respect to the scaling parameters that drive the regularizing terms, in order to accelerate convergence rates for different classes of models.
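As a toy illustration of the scaling sensitivity this abstract emphasizes (the example is our own, not taken from the survey), consider a Spingarn-type proximal decomposition applied to an affine strongly monotone operator T(x) = Mx + q over the subspace A = span{e1} of R². Counting iterations until a fixed residual tolerance is met shows that both a very small and a very large scaling factor slow the method down, with a well-chosen intermediate value far faster.

```python
# Toy demonstration (our own 2-D instance, not from the survey) of how the
# scaling factor lam drives the speed of a splitting method.  We run a
# Spingarn-type proximal decomposition for T(x) = M x + q, A = span{e1},
# and count iterations until the residual of the inclusion y = T(x) is small.

def iters_to_converge(lam, M, q, tol=1e-6, max_iters=20000):
    x = [0.0, 0.0]  # iterate kept in A
    y = [0.0, 0.0]  # iterate kept in A-perp
    for k in range(1, max_iters + 1):
        # Proximal step on the graph of T: (I + lam*M) x' = x + lam*(y - q).
        a = 1 + lam * M[0][0]; b = lam * M[0][1]
        c = lam * M[1][0];     d = 1 + lam * M[1][1]
        r1 = x[0] + lam * (y[0] - q[0])
        r2 = x[1] + lam * (y[1] - q[1])
        det = a * d - b * c
        xp = ((d * r1 - b * r2) / det, (a * r2 - c * r1) / det)
        yp = (M[0][0] * xp[0] + M[0][1] * xp[1] + q[0],
              M[1][0] * xp[0] + M[1][1] * xp[1] + q[1])
        # Projection step onto A x A-perp.
        x = [xp[0], 0.0]
        y = [0.0, yp[1]]
        # Residual of the target inclusion y = T(x).
        t = (M[0][0] * x[0] + M[0][1] * x[1] + q[0],
             M[1][0] * x[0] + M[1][1] * x[1] + q[1])
        if max(abs(y[0] - t[0]), abs(y[1] - t[1])) < tol:
            return k
    return max_iters

M = [[2.0, 1.0], [1.0, 2.0]]
q = [3.0, 1.0]
n_small = iters_to_converge(0.01, M, q)
n_mid = iters_to_converge(1.0, M, q)
n_large = iters_to_converge(100.0, M, q)
# n_mid is far smaller than both n_small and n_large: the iteration count
# degrades at both extremes of the scaling factor.
```

This is exactly the kind of behaviour that motivates the tuning rules discussed in the survey: the scaling parameter trades off the conditioning of the proximal step against the progress made per iteration.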
“…28.2]. This algorithm has many applications in convex optimization, e.g., [68,75,76,111,112,113]. It also constitutes the basic building block of the progressive hedging algorithm in stochastic programming [108].…”
Several aspects of the interplay between monotone operator theory and convex optimization are presented. The crucial role played by monotone operators in the analysis and the numerical solution of convex minimization problems is emphasized. We review the properties of subdifferentials as maximally monotone operators and, in tandem, investigate those of proximity operators as resolvents. In particular, we study new transformations which map proximity operators to proximity operators, and establish connections with self-dual classes of firmly nonexpansive operators. In addition, new insights and developments are proposed on the algorithmic front.
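The resolvent view of proximity operators described in this abstract can be checked concretely on a one-dimensional example. The sketch below (our own illustration; the function names are not from the paper) takes f(x) = |x|, whose proximity operator prox_{λf} = (I + λ∂f)^{-1} is the soft-thresholding map, verifies the resolvent relation against the subdifferential of |·|, and checks firm nonexpansiveness, the property tying proximity operators to the self-dual classes mentioned above.

```python
# Resolvent view of proximity operators (illustrative sketch):
# for convex f, prox_{lam*f}(v) = (I + lam*df)^{-1}(v), i.e. the unique p
# with (v - p)/lam in the subdifferential df(p).  For f(x) = |x| this
# resolvent is the soft-thresholding map.

def prox_abs(v, lam):
    """Proximity operator of lam*|.|: soft thresholding."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

def in_subdiff_abs(p, g, tol=1e-12):
    """Is g in the subdifferential of |.| at p?  It is {sign(p)} for
    p != 0 and the whole interval [-1, 1] at p = 0."""
    if p > 0:
        return abs(g - 1.0) < tol
    if p < 0:
        return abs(g + 1.0) < tol
    return -1.0 - tol <= g <= 1.0 + tol

lam = 0.5
for v in (-2.0, -0.3, 0.0, 0.2, 1.7):
    p = prox_abs(v, lam)
    # Resolvent relation: (v - p)/lam must be a subgradient of |.| at p.
    assert in_subdiff_abs(p, (v - p) / lam)

# Firm nonexpansiveness of the proximity operator:
# (prox(u) - prox(v))^2 <= (prox(u) - prox(v)) * (u - v).
u, v = 1.7, -0.3
diff = prox_abs(u, lam) - prox_abs(v, lam)
assert diff * diff <= diff * (u - v) + 1e-12
```

The same two checks (resolvent relation, firm nonexpansiveness) carry over verbatim to proximity operators of any proper lower semicontinuous convex function; only the formula for the prox changes.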