2017 IEEE 56th Annual Conference on Decision and Control (CDC)
DOI: 10.1109/cdc.2017.8264076

Superlinearly convergent asynchronous distributed network Newton method

Abstract: The problem of minimizing a sum of local convex objective functions over a networked system captures many important applications and has received much attention in the distributed optimization field. Most existing work focuses on the development of fast distributed algorithms in the presence of a central clock. The only known algorithms with convergence guarantees for this problem in the asynchronous setup achieve either a sublinear rate under the totally asynchronous setting or a linear rate under the partially asynchronous…
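For context, the underlying problem class is the standard consensus optimization setup: a network of m agents cooperatively solving

    min_{x ∈ ℝⁿ}  f(x) = Σ_{i=1}^{m} f_i(x),

where each convex local objective f_i is known only to agent i and information is exchanged only between network neighbors. The notation here is generic to this literature, not quoted from the paper.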

Cited by 32 publications (40 citation statements) · References 31 publications
Citation types: 0 supporting, 40 mentioning, 0 contrasting
Years published: 2019–2023
“…Common approaches include first order methods [6]–[10], primal-dual algorithms [11], [12], gradient tracking methods [13]–[16], the alternating direction method of multipliers (ADMM) [17] and Newton methods [18], [19].…”
Section: Introduction (mentioning)
confidence: 99%
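As a concrete instance of the first order methods listed in the statement above, here is a minimal decentralized gradient descent (DGD) sketch. The quadratic local objectives, the mixing matrix W, and the stepsize alpha are illustrative assumptions, not values from any cited paper.

import numpy as np

# Decentralized gradient descent sketch: agent i holds the local objective
# f_i(x) = 0.5 * (x - b[i])**2, averages its iterate with its neighbors'
# through the doubly stochastic mixing matrix W, then takes a local
# gradient step. All numerical values are assumed for illustration.
b = np.array([1.0, 2.0, 3.0, 4.0])   # local data defining each f_i
W = np.array([[0.5, 0.5, 0.0, 0.0],  # mixing matrix for the path graph
              [0.5, 0.0, 0.5, 0.0],  # 1-2-3-4; every row and column
              [0.0, 0.5, 0.0, 0.5],  # sums to 1 (doubly stochastic)
              [0.0, 0.0, 0.5, 0.5]])
alpha = 0.1                          # constant stepsize (assumed)

x = np.zeros(4)                      # one scalar iterate per agent
for _ in range(500):
    x = W @ x - alpha * (x - b)      # consensus mixing + gradient step

print(x)  # entries cluster around mean(b) = 2.5, up to O(alpha) error

With a constant stepsize, DGD only reaches a neighborhood of the true minimizer; improving on this accuracy or rate is what the gradient tracking, ADMM, and Newton-type methods cited above aim for.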
“…Rather, we assume the agents to be active based on a time-invariant and not necessarily uniform probability distribution. We studied the setting with equal activation probabilities in our previous work [27], which is a special case of the setting in this paper.…”
Section: B. Our Contribution (mentioning)
confidence: 99%
“…In this case, there is no need to scale the agents' stepsizes with the inverse of their activation probabilities; i.e., in step 5 of Algorithm 1, the active agent uses ε instead of ε/pᵢ. This case is studied in [27], and all the results there are special cases of our convergence analysis in this paper.…”
(mentioning)
confidence: 97%
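To make the quoted stepsize scaling concrete, here is a hypothetical single-variable sketch, not the authors' Algorithm 1: each iteration activates one agent according to fixed, non-uniform probabilities pᵢ, and the active agent divides its stepsize by pᵢ so that the expected update follows the full gradient direction. All numerical values are assumptions.

import numpy as np

rng = np.random.default_rng(0)

p = np.array([0.1, 0.2, 0.3, 0.4])   # time-invariant activation probabilities
b = np.array([1.0, 2.0, 3.0, 4.0])   # local data: f_i(x) = 0.5 * (x - b[i])**2
eps = 0.05                           # base stepsize epsilon (assumed)

x = 0.0                              # simplified to a single shared variable
for _ in range(5000):
    i = rng.choice(4, p=p)           # one agent wakes up per iteration
    # Dividing by p[i] compensates for unequal activation frequencies:
    # E[(1/p[i]) * grad_i] = sum_i grad_i, the full gradient direction.
    x -= (eps / p[i]) * (x - b[i])

print(x)  # hovers near the minimizer of sum_i f_i, i.e. mean(b) = 2.5

In the equal-probability special case of [27] mentioned in the quote, pᵢ = 1/m is the same constant for every agent, so the 1/pᵢ factor can be absorbed into ε, which is why the active agent can use ε directly.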
“…While this technique cannot be used directly in distributed optimization due to the non-sparsity of the Hessian inverse, there exist ways of using second order information to approximate the Newton step in distributed settings. This has been done for consensus optimization problems reformulated via penalty-based methods [17], [24] and dual-based methods [20], as well as the more recent primal-dual methods [22], [25]. These approximate Newton methods exhibit faster convergence relative to their corresponding first order methods.…”
Section: Introduction (mentioning)
confidence: 99%
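For reference, the penalty-based reformulation behind methods such as [17], [24] (e.g., Network Newton) replaces the consensus constraint with a quadratic penalty built from a mixing matrix W. A sketch in the standard notation of that literature, not equations from the citing paper: each agent keeps a local copy yᵢ, the copies are stacked into y, and one solves

    min_y  F(y) = α Σ_{i=1}^{m} f_i(yᵢ) + (1/2) yᵀ(I − W)y,

where α > 0 trades off optimality against consensus violation. The Hessian ∇²F(y) = α ∇²f(y) + (I − W) is sparse with the pattern of the network, but its inverse is dense; splitting H = D − B, with D block diagonal and B carrying the neighbor coupling, and truncating the resulting series expansion of H⁻¹ after K terms gives an approximate Newton direction that agents can compute with K rounds of neighbor-to-neighbor communication.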
“…Theorem 1: Consider PD-QN as introduced in (23)–(24). Let β > 1 and φ > 1 be arbitrary constants, and let ζ be a positive constant.…”
(mentioning)
confidence: 99%