2017
DOI: 10.1137/16m1059011
A Simple Parallel Algorithm with an $O(1/t)$ Convergence Rate for General Convex Programs

Abstract. This paper considers convex programs with a general (possibly non-differentiable) convex objective function and Lipschitz continuous convex inequality constraint functions. A simple algorithm is developed and achieves an O(1/t) convergence rate. Similar to the classical dual subgradient algorithm and the ADMM algorithm, the new algorithm has a parallel implementation when the objective and constraint functions are separable. However, the new algorithm has a faster O(1/t) convergence rate compared wit…
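
The abstract names the algorithm's properties but not its updates; the citing papers quoted below describe each iteration as minimizing a proximal Lagrangian and then updating the multipliers in a novel way. The Python sketch below shows one iteration of that general shape on a toy quadratic program. The data, the proximal weight alpha, the multiplier weighting Q + g(x_t), and the max{-g, Q + g} multiplier update are illustrative assumptions, not a verbatim statement of the paper's method.

    import numpy as np

    # Toy instance (hypothetical data): minimize 0.5*||x||^2 + c^T x  s.t.  A x <= b.
    rng = np.random.default_rng(0)
    n, m = 5, 3
    A = rng.standard_normal((m, n))
    b = rng.uniform(1.0, 2.0, size=m)
    c = rng.standard_normal(n)

    alpha = 10.0        # proximal weight; the paper ties it to the constraints' Lipschitz constants
    T = 5000
    x = np.zeros(n)
    Q = np.zeros(m)     # one multiplier ("virtual queue") per inequality constraint
    x_avg = np.zeros(n)

    for t in range(T):
        w = Q + (A @ x - b)          # multiplier weights used in the primal step (assumed form)
        # Primal step: argmin_y 0.5*||y||^2 + c^T y + w^T (A y - b) + alpha*||y - x||^2.
        # For this quadratic objective and linear constraints the argmin is
        # closed-form and separates across coordinates.
        x = (2 * alpha * x - c - A.T @ w) / (1 + 2 * alpha)
        g = A @ x - b
        Q = np.maximum(-g, Q + g)    # multiplier update keeping Q + g(x) >= 0 (assumed form)
        x_avg += x / T               # O(1/t) guarantees are typically stated for averaged iterates

    print("averaged iterate:", np.round(x_avg, 3))
    print("worst constraint violation:", max(0.0, float(np.max(A @ x_avg - b))))

Because the primal argmin splits coordinate-wise whenever the objective and constraint functions are separable, each coordinate's update can run in parallel, which is the property the abstract highlights.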

Cited by 51 publications (55 citation statements). References 22 publications.

“…To achieve an ε-optimal solution, compared to our results, their iteration complexity is O(ε^{-1}) times worse for convex problems and O(ε^{-1/2}) times worse for strongly convex problems. Assuming Lipschitz continuity of ∇f_i for every i ∈ [m], [39] proposes a new primal-dual type algorithm for nonlinearly constrained convex programs. At every iteration, it minimizes a proximal Lagrangian function and updates the multiplier in a novel way.…”
Section: General Convex Problems (mentioning)
confidence: 99%
“…With a sufficiently large proximal parameter that depends on the Lipschitz constants of the f_i's, the algorithm converges at an O(1/k) ergodic rate. The follow-up paper [38] focuses on smooth constrained convex problems and proposes a linearized variant of the algorithm in [39]. Assuming compactness of the set X, it also establishes O(1/k) ergodic convergence of the linearized method.…”
Section: General Convex Problems (mentioning)
confidence: 99%
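
For intuition only: linearizing a smooth objective and smooth constraints inside the proximal step turns the inner minimization into a single projected-gradient-like update, which is the general idea behind such linearized variants. A hedged sketch; the function names (grad_f, jac_g, project) and parameter choices are hypothetical, not taken from [38]:

    import numpy as np

    def linearized_primal_step(x, grad_f, jac_g, w, alpha, project):
        """One linearized primal update: minimize the first-order model
            <grad_f(x) + jac_g(x)^T w, y - x> + alpha * ||y - x||^2   over y in X,
        whose minimizer is a projected gradient step. Sketch only; the exact
        multiplier weights w and parameters in [38]/[39] differ in detail."""
        step = grad_f(x) + jac_g(x).T @ w
        return project(x - step / (2 * alpha))

    # Example usage on X = [-1, 1]^n with hypothetical smooth problem data:
    # x_next = linearized_primal_step(x, grad_f, jac_g, w, alpha=10.0,
    #                                 project=lambda y: np.clip(y, -1.0, 1.0))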
“…The conventional primal-dual subgradient method, also known as the Arrow-Hurwicz-Uzawa subgradient method, is a low-complexity algorithm with an O(1/ε²) convergence time. Recently, a new Lagrangian dual-type algorithm with a faster O(1/ε) convergence time was proposed in Yu and Neely (2017). However, if the objective or constraint functions are not separable, each iteration of the Lagrangian dual-type method in Yu and Neely (2017) requires solving an unconstrained convex program, which can be computationally expensive.…”
(mentioning)
confidence: 99%
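
For contrast with the method quoted above: the classical primal-dual subgradient iteration costs only one projected subgradient step per iteration, with no inner subproblem, at the price of the slower O(1/ε²) convergence time. A minimal sketch, with illustrative names, signature, and step size:

    import numpy as np

    def primal_dual_subgradient(subgrad_f, g, jac_g, project, x0, gamma, T):
        """Arrow-Hurwicz-Uzawa-style iteration: a projected subgradient descent
        step on the Lagrangian in x, then a projected ascent step in the
        multipliers. A step size gamma ~ O(eps) gives the O(1/eps^2)
        convergence time mentioned above. All names here are illustrative."""
        x = x0.astype(float).copy()
        lam = np.zeros(len(g(x)))       # multipliers for the inequality constraints
        x_avg = np.zeros_like(x)
        for _ in range(T):
            x = project(x - gamma * (subgrad_f(x) + jac_g(x).T @ lam))
            lam = np.maximum(lam + gamma * g(x), 0.0)   # ascent + projection onto R_+^m
            x_avg += x / T
        return x_avg                    # guarantees hold for the averaged iterate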
“…It remains to bound the first term. Since (d₀, Π) ∈ G, by Lemma 5.2, the corresponding state-action probabilities {θ^{(k)}}_{k=1}^K of Π satisfy Σ_{k=1}^K ⟨E(g_{i,t}), θ^{(k)}⟩ ≤ C₁KΨ/T, and {θ^{(k)}}_{k=1}^K is feasible for (32)–(33). Since {θ…”
Section: A Relaxed Constraint Set (mentioning)
confidence: 99%
“…It is easy to see that if q : X → ℝ is convex, c > 0, and b ∈ ℝⁿ, then the function q(x) + (c/2)‖x − b‖₂² is c-strongly convex. Furthermore, if a function h is c-strongly convex and is minimized at a point x_min ∈ X, then (see, e.g., Corollary 1 in [33]):…”
Section: Sample-path Analysis (mentioning)
confidence: 99%
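
The inequality truncated above presumably refers to the standard quadratic-growth consequence of strong convexity. The fact itself is standard, though whether this exact form matches the numbering in [33] cannot be checked from this page; in LaTeX:

    % Quadratic growth from strong convexity (standard fact; presumably the
    % content of the truncated citation to Corollary 1 in [33]):
    \text{If } h \text{ is } c\text{-strongly convex on } \mathcal{X}
    \text{ and } x_{\min} \in \arg\min_{x \in \mathcal{X}} h(x), \text{ then}
    \quad h(y) \;\ge\; h(x_{\min}) + \frac{c}{2}\, \| y - x_{\min} \|_2^2
    \quad \text{for all } y \in \mathcal{X}.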