Distributed Online Convex Optimization on Time-Varying Directed Graphs
2017 · DOI: 10.1109/tcns.2015.2505149

Cited by 143 publications (120 citation statements) · References 21 publications
“…We consider a network of agents without a central coordination unit that is tasked with solving a global optimization problem in which the objective function is the sum of the agents' local costs, that is, F(x) = Σᵢ fᵢ(x). To solve this problem, the agents fuse and aggregate local information, a process which can be represented through a weight matrix that is in accordance with the network structure. In particular, a common assumption in this line of research is the availability of a doubly stochastic weight matrix (i.e., each row and each column sums to 1) or a column stochastic one (i.e., each column sums to 1); see, e.g., [9,17,19,23,24,31,32,38,41] for the former case and [1,29,41,42,46,48] for the latter. The latter assumption is clearly weaker and, more importantly, allows one to employ the Push-Sum protocol [18] or the technique in [27] to asymptotically "balance the graph," thereby achieving the same exact convergence that a doubly stochastic matrix provides in many distributed algorithms.…”
Section: Introduction (mentioning)
confidence: 99%
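The Push-Sum idea in this excerpt can be made concrete. Below is a minimal NumPy sketch of Push-Sum average consensus in the spirit of [18], assuming a fixed, strongly connected directed graph with a column stochastic weight matrix A; the 3-node matrix and the initial values are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def push_sum_average(A, x0, num_iters=200):
    """Push-Sum average consensus with a column stochastic A.

    Each column of A sums to 1, so agent j only needs to split its
    outgoing mass among its out-neighbors -- no doubly stochastic
    matrix is required.
    """
    x = np.array(x0, dtype=float)  # value sums
    w = np.ones(len(x0))           # correction weights
    for _ in range(num_iters):
        x = A @ x                  # push weighted shares of the values
        w = A @ w                  # push weighted shares of the weights
    # the ratio cancels the imbalance left by a merely column
    # stochastic A, so every agent recovers the network average
    return x / w

# 3-agent directed cycle with self-loops: columns sum to 1 but rows
# do not, so A is column stochastic without being doubly stochastic
A = np.array([[0.5, 0.0, 0.3],
              [0.5, 0.4, 0.0],
              [0.0, 0.6, 0.7]])
print(push_sum_average(A, [1.0, 5.0, 9.0]))  # ~[5.0, 5.0, 5.0]
```

Without the w-correction, iterating x = A @ x alone would converge to a weighted (Perron-vector) combination of the initial values rather than their average; dividing by w is what "balances the graph" as the excerpt describes.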
“…proper convex extended real-valued function. Let y⋆ be the minimizer of (5). Suppose that in Algorithm 2 each local step-size α_i is chosen such that 0 < α_i ≤ 1/L_i, with…”
Section: Asynchronous Distributed Dual Proximal Gradient (mentioning)
confidence: 99%
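The condition 0 < α_i ≤ 1/L_i is the standard smoothness bound for (proximal) gradient steps. The following is a minimal sketch of a generic proximal gradient iteration under that bound, not the cited Algorithm 2; the quadratic smooth term, the ℓ1 choice of the nonsmooth term, and all names are illustrative assumptions.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding); a common
    choice for the nonsmooth term g."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# illustrative smooth term: f(y) = 0.5*||Q y - b||^2, whose gradient
# Q^T (Q y - b) is Lipschitz with constant L = lambda_max(Q^T Q)
Q = np.array([[2.0, 0.0],
              [0.0, 1.0]])
b = np.array([1.0, -2.0])
L = np.linalg.eigvalsh(Q.T @ Q).max()
alpha = 1.0 / L                    # satisfies 0 < alpha <= 1/L

y = np.zeros(2)
for _ in range(200):
    grad = Q.T @ (Q @ y - b)               # gradient of the smooth part
    y = prox_l1(y - alpha * grad, alpha)   # y+ = prox_{alpha*g}(y - alpha*grad)
print(y)  # approaches the minimizer y* of f + g
```

Choosing α within (0, 1/L] guarantees that each step does not increase the objective, which is the role the bound plays in convergence proofs of this kind.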
“…2) Unlike most consensus-based approaches in [17], [19]-[23], this algorithm does not require the weighting matrix to be doubly stochastic, which makes it applicable to arbitrary directed graphs, since finding a doubly stochastic weighting matrix for a directed graph is not a trivial task [34], [35]. In addition, this algorithm admits any positive, non-increasing step-size, in contrast to methods that require diminishing, square-summable step-sizes [17], [18]. This wider range of admissible step-sizes implies a wider range of stability.…”
Section: Introduction (mentioning)
confidence: 99%
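To show how a column stochastic matrix and a non-increasing step-size combine in distributed optimization (rather than pure averaging), here is a sketch of the classic subgradient-push iteration of Nedić and Olshevsky. This is not the cited algorithm itself; the quadratic local costs, the weight matrix A, and the 1/√(k+1) schedule are illustrative assumptions.

```python
import numpy as np

def subgradient_push(A, c, num_iters=1000):
    """Subgradient-push sketch for minimizing sum_i 0.5*(x - c[i])^2
    over a directed graph with column stochastic weights A."""
    n = len(c)
    x = np.zeros(n)                    # push-sum value variables
    w = np.ones(n)                     # push-sum correction weights
    for k in range(num_iters):
        alpha = 1.0 / np.sqrt(k + 1)   # positive and non-increasing
        Ax, Aw = A @ x, A @ w          # mix along directed edges
        z = Ax / Aw                    # bias-corrected local estimates
        x = Ax - alpha * (z - c)       # step along local gradients at z
        w = Aw
    return x / w                       # per-agent solution estimates

# column stochastic (but not doubly stochastic) weights on a
# 3-agent directed cycle with self-loops
A = np.array([[0.5, 0.0, 0.3],
              [0.5, 0.4, 0.0],
              [0.0, 0.6, 0.7]])
c = np.array([0.0, 3.0, 6.0])
print(subgradient_push(A, c))          # each entry approaches mean(c) = 3
```

Because only the columns of A must sum to 1, each agent can run this on a directed graph knowing just its own out-degree, which is the practical advantage the excerpt highlights over doubly stochastic constructions.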