2017
DOI: 10.1007/s10915-017-0628-z

On Unbounded Delays in Asynchronous Parallel Fixed-Point Algorithms

Abstract: The need for scalable solvers for massive optimization problems has motivated the development of asynchronous-parallel algorithms, in which a set of nodes runs in parallel with little or no synchronization, thus computing with delayed information. This paper develops powerful Lyapunov-function techniques and uses them to study the convergence of the asynchronous-parallel algorithm ARock under potentially unbounded delays. ARock is a very general asynchronous algorithm that takes many popular algorithms as speci…
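The delayed-information setting described in the abstract can be illustrated with a toy, serial simulation of a block fixed-point update that reads stale iterates. This is only an illustrative sketch, not the ARock algorithm of the paper itself (which runs the updates truly in parallel); the operator `T`, the step size, and all names below are assumptions for the example.

```python
import numpy as np

def arock_style_iteration(T, x0, n_iters=2000, step=0.5, max_delay=5, seed=0):
    """Toy serial simulation of a delayed block fixed-point update.

    Each step updates one random coordinate using a stale snapshot of x
    drawn from up to `max_delay` iterations in the past, mimicking the
    delayed reads of an asynchronous-parallel run. T should have a fixed
    point and be contractive for this sketch to converge.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    history = [x.copy()]                       # recent iterates for stale reads
    n = x.size
    for _ in range(n_iters):
        delay = rng.integers(0, min(max_delay, len(history)))
        x_stale = history[-1 - delay]          # delayed snapshot of x
        i = rng.integers(n)                    # random coordinate/block
        x[i] -= step * (x_stale[i] - T(x_stale)[i])  # residual S = I - T, on stale x
        history.append(x.copy())
        if len(history) > max_delay + 1:
            history.pop(0)
    return x

# Example: T(x) = 0.5*x + b is a contraction with fixed point 2*b.
b = np.array([1.0, -2.0, 3.0])
x_star = arock_style_iteration(lambda v: 0.5 * v + b, np.zeros(3))
```

Despite every update being computed from a stale snapshot, the iterates still converge to the fixed point for a contractive operator and a small enough step, which is the regime the paper's Lyapunov analysis generalizes to unbounded delays.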

Cited by 43 publications (61 citation statements)
References 22 publications
“…Therefore Equations (11) and (12) can be derived from (26) and (27) using the particular structure of the problem, proving Proposition 1.…”
Section: Appendix I, Proof of Proposition
confidence: 64%
“…Notice now that the trajectory k → x(k) generated by (26) is equivalent to that generated by (19) if the initial condition for x is the same and if z(0) = w(0) + ρy(0), since Equation (21) has to hold at time k = 0. Therefore Proposition 1 is proved if we can show that (26) and (27) can be rewritten as (11) and (12).…”
Section: Appendix I, Proof of Proposition
confidence: 96%
“…The previous proposition naturally suggests an alternative distributed implementation of the R-ADMM Algorithm 1, in which each node i stores in its local memory the variables x_i and z_ij, j ∈ N_i. At each iteration of the algorithm, each node i first collects the variables z_ji, j ∈ N_i; second, it updates x_i and z_ij according to (25) and the first equation of (26), respectively; finally, it sends z_ij to each j ∈ N_i. Unlike the natural implementation just described, we present a slightly different implementation, building on the observation that each node i, in order to update x_i as in (25), requires the variables z_ji rather than z_ij for j ∈ N_i.…”
Section: Remark
confidence: 99%
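The store/gather/update/send pattern described in this excerpt can be sketched with a standard edge-based consensus ADMM for distributed averaging. Since the quoted updates (25) and (26) are not reproduced on this page, the closed-form updates below are a textbook substitute, not the R-ADMM of the cited work; the problem (least-squares averaging), function names, and parameters are all assumptions for the example.

```python
import numpy as np

def consensus_admm(a, edges, rho=1.0, n_iters=300):
    """Edge-based consensus ADMM for min sum_i 0.5*(x_i - a_i)^2
    subject to x_i = x_j on each edge (solution: all x_i equal mean(a)).

    Each node i stores its estimate x_i and one dual lam[i][j] per
    neighbor; in each round it gathers the neighbors' current estimates,
    updates x_i in closed form, then updates its local duals.
    """
    n = len(a)
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    x = np.zeros(n)
    lam = {i: {j: 0.0 for j in nbrs[i]} for i in range(n)}
    for _ in range(n_iters):
        x_old = x.copy()
        for i in range(n):
            # gather: the edge average plays the role of the auxiliary
            # edge variable w_ij = (x_i + x_j) / 2 from the last round
            w = [(x_old[i] + x_old[j]) / 2.0 for j in nbrs[i]]
            s_lam = sum(lam[i].values())
            d = len(nbrs[i])
            # local x-update (closed form for the quadratic cost)
            x[i] = (a[i] - s_lam + rho * sum(w)) / (1.0 + rho * d)
        for i in range(n):
            for j in nbrs[i]:
                # dual update using the freshly computed estimates
                lam[i][j] += rho * (x[i] - (x[i] + x[j]) / 2.0)
    return x

# Example: 3-node path graph; all nodes converge to mean(a) = 3.
x = consensus_admm(np.array([0.0, 3.0, 6.0]), [(0, 1), (1, 2)])
```

The per-node state (one estimate plus one dual per neighbor) and the gather/update/send round structure mirror the implementation pattern the excerpt describes, even though the concrete update equations differ from the quoted (25) and (26).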
“…Indeed, the classical formulation of the ADMM naturally arises as an application of the DRS to the Lagrange dual problem of the original optimization problem [23]. For further details on a variety of splitting operators and their application in asynchronous setups we refer to [24] and [25], respectively. In this paper we present and analyze different formulations of the ADMM algorithm.…”
Section: Introduction
confidence: 99%