2015 54th IEEE Conference on Decision and Control (CDC) 2015
DOI: 10.1109/cdc.2015.7402313

Randomized dual proximal gradient for large-scale distributed optimization

Abstract: In this paper we consider distributed optimization problems in which the cost function is separable (i.e., a sum of possibly non-smooth functions all sharing a common variable) and can be split into a strongly convex term and a convex one. The second term is typically used to encode constraints or to regularize the solution. We propose an asynchronous, distributed optimization algorithm over an undirected topology, based on a proximal gradient update on the dual problem. We show that by means of a proper choic…
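The dual proximal gradient structure described in the abstract can be pictured on a toy instance. The snippet below is a minimal, hypothetical sketch, not the authors' implementation: it assumes f(x) = 0.5*||x - b||^2 as the strongly convex term and g(x) = mu*||x||_1 as the convex non-smooth one, and it models the randomized/asynchronous aspect by updating a single dual coordinate ("node") per wake-up. The step size alpha and all variable names are illustrative assumptions.

```python
# Hypothetical sketch of a randomized dual proximal gradient step (not the
# paper's algorithm verbatim). Assumed splitting of the separable cost:
#   f(x) = 0.5*||x - b||^2  (strongly convex),  g(x) = mu*||x||_1  (non-smooth).
# Dualizing min f(x) + g(z) s.t. x = z and applying a proximal gradient step
# to the multiplier lam gives: lam <- prox_{alpha*g*}(lam + alpha * x*(lam)),
# where x*(lam) = argmin_x f(x) + lam^T x and prox of alpha*g* is the
# projection onto the l_inf ball of radius mu.
import numpy as np

rng = np.random.default_rng(0)
n, mu, alpha = 5, 0.3, 1.0           # problem size, l1 weight, dual step size
b = rng.normal(size=n)

lam = np.zeros(n)                    # dual variable, one block per "node"
for _ in range(2000):
    i = rng.integers(n)              # a randomly awakened node updates its block
    x_star = b - lam                 # argmin_x f(x) + lam^T x (closed form here)
    lam[i] = np.clip(lam[i] + alpha * x_star[i], -mu, mu)

x = b - lam                                             # primal recovery
x_ref = np.sign(b) * np.maximum(np.abs(b) - mu, 0.0)    # soft-thresholding
print(np.allclose(x, x_ref, atol=1e-6))                 # True on this toy case
```

In this toy case the dual blocks decouple, so each block is exact after a single update; in the distributed setting of the paper the blocks are coupled through the network, which is where the step-size choice and the convergence analysis matter.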

Cited by 4 publications (6 citation statements)
References 22 publications
“…The investigation of asynchronous dual algorithms for MPC is gaining more attention recently. In [26], for example, an asynchronous dual algorithm is proposed. Compared to [26], SVR-AMA allows the use of a generic (i.e., not necessarily uniform) probability distribution and, consequently, more flexibility in the tuning phase of the algorithm.…”
Section: B. Related Work
confidence: 99%
“…Compared to other asynchronous dual algorithms (see, e.g., [26]), Algorithm 5 allows one to tune and adapt (online) the probability distribution Π. This is particularly useful, for example, to give priority in the update to those subproblems whose associated dual variables vary the most between two iterations of the algorithm, as shown in the following section.…”
Section: Remark
confidence: 99%
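The adaptive selection described in this citation statement can be illustrated with a short, purely hypothetical sketch (not the SVR-AMA or [26] implementation): the distribution Π is re-weighted online so that subproblems whose dual variables moved the most since their last update are drawn more often. The dual update itself is a placeholder; all names (scores, eps) are assumptions for illustration.

```python
# Illustrative sketch of adapting the selection probabilities Pi online so that
# subproblems with the largest recent dual-variable change get priority.
import numpy as np

rng = np.random.default_rng(1)
m = 4                                   # number of subproblems / dual blocks
lam = np.zeros(m)                       # dual variables (toy values)
scores = np.ones(m)                     # recent variation of each dual block
eps = 1e-3                              # keeps every probability strictly positive

for k in range(100):
    pi = (scores + eps) / (scores + eps).sum()   # current adapted distribution Pi
    i = rng.choice(m, p=pi)                      # draw the block to update
    new_lam_i = lam[i] + rng.normal(scale=0.1)   # placeholder for the dual update
    scores[i] = abs(new_lam_i - lam[i])          # larger moves -> higher priority
    lam[i] = new_lam_i
```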
“…Furthermore, for each equality constraint in Problem (13), the corresponding Lagrange multipliers have been highlighted.…”
Section: Asynchronous MPC
confidence: 99%
“…Compared to other asynchronous dual algorithms (e.g., [13]), Algorithm 3 allows one to tune and adapt (online) the probability distribution Π. This is particularly useful, for example, to give priority in the update to those subproblems whose associated dual variables vary the most between two iterations of the algorithm, as shown in the next section.…”
Section: If We Define Y
confidence: 99%