2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
DOI: 10.1109/camsap45676.2019.9022451
A Unified Contraction Analysis of a Class of Distributed Algorithms for Composite Optimization

Abstract: We study distributed composite optimization over networks: agents minimize the sum of a smooth (strongly) convex function (the agents' sum-utility) plus a nonsmooth (extended-valued) convex one. We propose a general algorithmic framework for such a class of problems and provide a unified convergence analysis leveraging the theory of operator splitting. Our results unify several approaches proposed in the literature of distributed optimization for special instances of our formulation. Distinguishing features of ou…
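For concreteness, the problem class described in the abstract can be written in the generic composite form below; the forward-backward (proximal-gradient) step that follows is a standard operator-splitting template shown only as an illustration, with an assumed step size \gamma, and is not necessarily the exact update analyzed in the paper.

\min_{x \in \mathbb{R}^d} \; F(x) + G(x), \qquad F(x) := \sum_{i=1}^{N} f_i(x),

where each f_i is smooth and (strongly) convex and is held by agent i, while G is a nonsmooth, extended-valued convex function. A prototypical forward-backward step reads

x^{k+1} = \operatorname{prox}_{\gamma G}\big(x^{k} - \gamma \nabla F(x^{k})\big), \qquad \operatorname{prox}_{\gamma G}(y) := \arg\min_{x} \; G(x) + \tfrac{1}{2\gamma}\,\|x - y\|^{2}.

Distributed variants replace the centralized gradient \nabla F with local gradients \nabla f_i combined through network mixing, which is the setting the citing works below refer to.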

Cited by 5 publications (4 citation statements)
References 17 publications
“…Note that to be used, A must be returned as nonsingular. More details of Chebyshev acceleration applied to the ABC-Algorithm along with some numerical results can be found in [1], [2].…”
Section: Discussion
confidence: 99%
“…The results of this work have been partially presented in [1]. While preparing the final version of this manuscript, we noticed the arxiv submission [26], which is an independent and parallel work (cf.…”
Section: Introduction
confidence: 98%
“…The aforementioned algorithms were originally made available with single-node implementations, which may be suboptimal or even unsuitable when dealing with massive datasets. Therefore, various asynchronous or distributed extensions have been proposed [16,27,18,28,29], where each term is handled independently by a processing unit and the convergence toward an aggregate solution to the optimization problem is ensured via a suitable communication strategy between those processing units. However, the convergence analysis of primal-dual distributed algorithms is usually based on fixed-point theory, which requires specific probabilistic assumptions on the block update rule.…”
Section: Introduction
confidence: 99%
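The excerpt above describes distributed extensions in which each local term is handled by a processing unit and agreement is reached through communication. Below is a minimal sketch of that idea, assuming a synchronous setting, a doubly stochastic mixing matrix W, and an l1 nonsmooth term; all names and choices here are illustrative and do not reproduce the schemes of the cited works or of the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1, standing in for the generic nonsmooth term.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_prox_grad_round(X, W, grads, step, lam):
    """One synchronous round of a distributed proximal-gradient method.
    X     : (n_agents, d) array, row i is agent i's current iterate.
    W     : (n_agents, n_agents) doubly stochastic mixing matrix of the graph.
    grads : list of callables, grads[i](x) returns the gradient of f_i at x.
    """
    X_mix = W @ X                                        # consensus (mixing) step
    G = np.stack([g(x) for g, x in zip(grads, X_mix)])   # local gradient evaluations
    return soft_threshold(X_mix - step * G, step * lam)  # local proximal step

# Toy usage: 3 agents with quadratic local losses f_i(x) = 0.5 * ||x - b_i||^2.
rng = np.random.default_rng(0)
B = rng.normal(size=(3, 5))
grads = [lambda x, b=b: x - b for b in B]
W = np.full((3, 3), 1.0 / 3.0)   # complete graph, uniform averaging weights
X = np.zeros((3, 5))
for _ in range(50):
    X = distributed_prox_grad_round(X, W, grads, step=0.5, lam=0.1)
```

In this toy run all rows converge toward a common soft-thresholded average of the b_i, which illustrates the consensus-plus-prox mechanism without claiming the rates or guarantees established in the paper.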
“…This raises challenging questions, in terms of convergence analysis, as the communication delays may introduce instabilities. A plethora of recent works have focused on proposing distributed optimization algorithms with assessed convergence, based on stochastic proximal primal [13,14] or primal-dual [15,16,17,18,19,20] techniques. However, as they rely on the formulation of dual instances of a stochastic coordinate descent strategy, those algorithms are limited to convex (sometimes even strongly convex) optimization and often require specific probabilistic assumptions on the block update rule that are difficult to meet in practice.…”
Section: Introduction
confidence: 99%