2020
DOI: 10.1109/tac.2019.2915745
Online Distributed Optimization With Strongly Pseudoconvex-Sum Cost Functions

Abstract: This paper considers the problem of asynchronous distributed multi-agent optimization on a server-based system architecture. In this problem, each agent has a local cost, and the goal for the agents is to collectively find a minimum of their aggregate cost. A standard algorithm to solve this problem is the iterative distributed gradient descent (DGD) method, implemented collaboratively by the server and the agents. In the synchronous setting, the algorithm proceeds from one iteration to the next only after a…
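The server-based DGD loop the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: the agent costs, step size, and iteration count are placeholder assumptions.

```python
# Sketch of synchronous server-based distributed gradient descent (DGD):
# the server broadcasts the current iterate, each agent replies with the
# gradient of its local cost, and the server steps on the aggregate.
import numpy as np

def dgd_server(local_grads, x0, steps=100, lr=0.05):
    """local_grads: one callable per agent, mapping x -> grad f_i(x).
    In the synchronous setting each round waits for every agent's reply."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        # server broadcasts x; agents reply with their local gradients
        g = sum(grad(x) for grad in local_grads)
        x = x - lr * g  # gradient step on the aggregate cost
    return x

# Example: two agents with quadratic local costs f_i(x) = ||x - c_i||^2 / 2,
# whose aggregate minimizer is the mean of the c_i.
grads = [lambda x, c=np.array([1.0, 0.0]): x - c,
         lambda x, c=np.array([-1.0, 2.0]): x - c]
print(dgd_server(grads, x0=np.zeros(2)))  # -> approx. [0., 1.]
```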

Cited by 51 publications (31 citation statements)
References 68 publications
“…Additional dynamic regret bounds have also been derived for centralized OCO, e.g., Mokhtari et al. (2016); Zhang et al. (2017a); Besbes et al. (2015). In distributed implementations, several recent works have proposed methods that provide dynamic regret guarantees under various assumptions on the convexity and smoothness of the objective functions (Shahrampour and Jadbabaie (2018); Zhang et al. (2019); Dixit et al. (2019); Lu et al. (2020); Sharma et al. (2020); Eshraghi and Liang (2020); Li et al. (2021)). To the best of our knowledge, for gradient/projection-based distributed OCO, the tightest known dynamic regret bound for general convex cost functions is $O(\sqrt{T(1+P_T)})$ (Shahrampour and Jadbabaie (2018)).…”
Section: Dynamic Regret of Gradient-Based and Projection-Based OCO (mentioning)
confidence: 99%
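For reference, here is a standard way the quantities in this bound are defined; the notation ($f_t$, $x_t^{\ast}$, $\mathcal{X}$) is assumed for illustration and may differ slightly across the cited papers.

```latex
% Dynamic regret: decisions x_t are compared against the per-round minimizers
\mathrm{Reg}^{\mathrm{d}}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(x_t^{\ast}),
\qquad x_t^{\ast} \in \operatorname*{arg\,min}_{x \in \mathcal{X}} f_t(x),
% Path length: cumulative variation of the comparator sequence
P_T \;=\; \sum_{t=2}^{T} \bigl\lVert x_t^{\ast} - x_{t-1}^{\ast} \bigr\rVert .
```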
“…However, it is generally impossible to know $C_T$ beforehand in practice, so most subsequent works do not require this assumption. A distributed online gradient-tracking algorithm is proposed in (Lu et al., 2019), which has a dynamic regret bound of $O(\sqrt{1+C_T}\, T^{3/4} \sqrt{\ln T})$. The dynamic regret of distributed online proximal gradient descent with $O(\log t)$ communication steps per round is bounded by $O(\log T\,(1+C_T))$ (Dixit et al., 2019).…”
Section: Related Work (mentioning)
confidence: 99%
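As context for these bounds, below is a minimal sketch of distributed online projected gradient descent, the baseline template that gradient-tracking and proximal variants refine. The mixing matrix, constraint set, and step size are illustrative assumptions; $C_T$, like $P_T$ above, measures the cumulative variation of the per-round minimizers.

```python
# Sketch of distributed online (projected) gradient descent over a network.
# Assumptions (not from the cited papers): n agents, a doubly stochastic
# mixing matrix W, a Euclidean-ball constraint set, and per-round local
# costs revealed only after each decision is made.
import numpy as np

def project_ball(x, radius=1.0):
    """Project onto the Euclidean ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def distributed_ogd(W, grads, x0, T, step=0.1):
    """W: (n, n) doubly stochastic mixing matrix of the network.
    grads: grads[t][i](x) -> gradient of agent i's round-t cost at x.
    x0: (n, d) initial decisions, one row per agent."""
    n = x0.shape[0]
    x = x0.copy()
    for t in range(T):
        # 1) consensus step: each agent averages its neighbors' decisions
        mixed = W @ x
        # 2) local gradient step on the just-revealed cost, then project
        for i in range(n):
            g = grads[t][i](mixed[i])
            x[i] = project_ball(mixed[i] - step * g)
    return x
```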
“…Thus, to ensure that such a positive N exists, we must select α such that both (29) and (27a) are satisfied in addition to (31), which requires…”
Section: Convergence Analysis (mentioning)
confidence: 99%