2015 54th IEEE Conference on Decision and Control (CDC)
DOI: 10.1109/cdc.2015.7402509
Augmented distributed gradient methods for multi-agent optimization under uncoordinated constant stepsizes

Abstract: We consider distributed optimization problems in which a number of agents are to seek the optimum of a global objective function through merely local information sharing. The problem arises in various application domains, such as resource allocation, sensor fusion and distributed learning. In particular, we are interested in scenarios where agents use uncoordinated (different) constant stepsizes for local optimization. According to most existing works, using this kind of stepsize rule for update, which is nece…
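Since the abstract centers on agents running gradient-style updates with uncoordinated (per-agent) constant stepsizes, here is a minimal sketch of what such an iteration can look like. It assumes a gradient-tracking form with a doubly-stochastic mixing matrix W; the function name, parameters, and exact recursion are illustrative assumptions, not necessarily the paper's stated method.

```python
import numpy as np

def aug_dgm_sketch(W, grads, x0, alphas, iters=500):
    """Illustrative gradient-tracking update with uncoordinated per-agent
    constant stepsizes (assumed form; not the paper's exact recursion).

    W      : (n, n) doubly-stochastic mixing matrix
    grads  : list of n callables, grads[i](x) -> gradient of f_i at x
    x0     : (n, d) initial iterates, one row per agent
    alphas : (n,) constant stepsizes, possibly different across agents
    """
    x = x0.copy()
    g = np.array([gi(xi) for gi, xi in zip(grads, x)])
    y = g.copy()                          # y_i tracks the network-average gradient
    for _ in range(iters):
        x = W @ (x - alphas[:, None] * y)      # each agent descends, then mixes
        g_new = np.array([gi(xi) for gi, xi in zip(grads, x)])
        y = W @ (y + g_new - g)                # gradient-tracking correction
        g = g_new
    return x
```

For example, with quadratic local objectives f_i(x) = ½‖x − b_i‖², sufficiently small but unequal stepsizes α_i still drive every row of x toward the minimizer of the sum, i.e., the average of the b_i.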

Cited by 270 publications (294 citation statements)
References 31 publications
“…Recently, gradient tracking has been proposed where the local gradient at each agent is replaced by the estimate of the global gradient [12]-[15]. Methods for directed graphs that are based on gradient tracking [14]-[21] rely on separate iterations for eigenvector estimation that may impede the convergence.…”
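For reference, the generic gradient-tracking recursion the quoted passage refers to can be written as follows; this is a common form across [12]-[15], with the weights w_{ij} and stepsize α as assumed notation rather than a quote from any one paper.

```latex
% Each agent i mixes with neighbors via weights w_{ij} and descends
% along y_i^k, which tracks the network-average gradient.
\begin{aligned}
x_i^{k+1} &= \sum_{j=1}^{n} w_{ij}\, x_j^{k} - \alpha\, y_i^{k}, \\
y_i^{k+1} &= \sum_{j=1}^{n} w_{ij}\, y_j^{k}
             + \nabla f_i\bigl(x_i^{k+1}\bigr) - \nabla f_i\bigl(x_i^{k}\bigr),
\qquad y_i^{0} = \nabla f_i\bigl(x_i^{0}\bigr).
\end{aligned}
```

If the weights are doubly-stochastic, summing the second recursion over i shows that (1/n)∑_i y_i^k = (1/n)∑_i ∇f_i(x_i^k) at every k, which is the precise sense in which each y_i "tracks" the global gradient.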
“…Distributed Optimization: The AB algorithm. When the objective functions are not available at a central location, distributed solutions are required to solve Problem P1. Most existing work [1]-[3], [11]-[14], [18]-[20] is restricted to undirected graphs, since the weights assigned to neighboring agents must be doubly-stochastic. The work on directed graphs [21], [22], [25]-[28] is largely based on push-sum consensus [29], [30] that requires eigenvector learning.…”
Section: A. Centralized Optimization: Nesterov's Methods
“…It is shown in [33] that AB converges linearly to the optimal solution for the function class F^{1,1}_{μ,L}. The AB algorithm for undirected graphs where both weights are doubly-stochastic was studied earlier in [18], [19], [26]. It is shown in [19] that the oracle complexity with doubly-stochastic weights is O(Q² log(1/ε)).…”
Section: A. Centralized Optimization: Nesterov's Methods
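For concreteness, here is a minimal sketch of the AB-style iteration described above, under the standard assumptions that A is row-stochastic and B is column-stochastic; names and defaults are illustrative, and individual papers differ in initialization and stepsize conditions.

```python
import numpy as np

def ab_sketch(A, B, grads, x0, alpha, iters=500):
    """Illustrative AB-style iteration for directed graphs.
    A : (n, n) row-stochastic weights (rows sum to 1)
    B : (n, n) column-stochastic weights (columns sum to 1)
    Sketch only; not a definitive implementation of any cited paper.
    """
    x = x0.copy()
    g = np.array([gi(xi) for gi, xi in zip(grads, x)])
    y = g.copy()
    for _ in range(iters):
        x = A @ x - alpha * y                  # row-stochastic averaging + descent
        g_new = np.array([gi(xi) for gi, xi in zip(grads, x)])
        y = B @ y + g_new - g                  # column-stochastic gradient tracking
        g = g_new
    return x
```

Because only the row sums of A and the column sums of B must equal one, agents can construct their weights from local degree information, which is why the doubly-stochastic requirement (and the eigenvector learning of push-sum) disappears on directed graphs.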
“…More recently, the idea of gradient tracking has been independently proposed by several research groups. In [18,19] the authors consider constrained nonsmooth and nonconvex problems, while in [20,21] strongly convex, unconstrained, smooth optimization problems are addressed. Works [22,23] extend the algorithms to (possibly) time-varying digraphs (still in a nonconvex setting).…”
Section: Algorithm 2: Gradient Tracking