2015
DOI: 10.1007/s10107-015-0957-3

On the ergodic convergence rates of a first-order primal–dual algorithm

Abstract: We revisit the proofs of convergence for a first-order primal–dual algorithm for convex optimization which we studied a few years ago. In particular, we prove rates of convergence for a more general version, with simpler proofs and more complete results. The new results can deal with explicit terms and nonlinear proximity operators in spaces with quite general norms. MSC Classification: 49M29, 65K10, 65Y20, 90C25
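
For context, the algorithm in question is the first-order primal–dual iteration for saddle-point problems of the form min_x max_y ⟨Kx, y⟩ + G(x) − F*(y). The following is a minimal sketch of that iteration with running ergodic averages (for which the O(1/N) rates are stated), not the paper's own pseudocode; the proximity-operator callables and variable names are illustrative assumptions.

import numpy as np

def primal_dual(K, prox_tau_G, prox_sigma_Fstar, x0, y0,
                tau, sigma, theta=1.0, n_iters=1000):
    """Basic first-order primal-dual iteration for
    min_x max_y <Kx, y> + G(x) - F*(y).

    prox_tau_G and prox_sigma_Fstar are the proximity operators of
    tau*G and sigma*F*; convergence requires tau*sigma*||K||^2 <= 1.
    Returns the ergodic (averaged) iterates, for which O(1/N) rates
    on the primal-dual gap are proved.
    """
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    x_avg, y_avg = np.zeros_like(x), np.zeros_like(y)
    for n in range(1, n_iters + 1):
        y = prox_sigma_Fstar(y + sigma * (K @ x_bar))  # dual ascent step
        x_new = prox_tau_G(x - tau * (K.T @ y))        # primal descent step
        x_bar = x_new + theta * (x_new - x)            # extrapolation
        x = x_new
        x_avg += (x - x_avg) / n                       # running ergodic averages
        y_avg += (y - y_avg) / n
    return x_avg, y_avg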

Cited by 371 publications (479 citation statements); references 27 publications.

Citation statements:
“…Since the primal-dual algorithm with Bregman proximity functions from [9] provides us with a flexible tool, for n to 0:…”
Section: Derivative of Primal-Dual Splitting
Confidence: 99%
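
The flexibility referred to comes from replacing the squared Euclidean distance in the proximal steps by a Bregman distance. A sketch of the primal update in this generalized form, in notation of my own choosing (the paper's exact statement may differ):

\[
x^{n+1} = \arg\min_{x}\ \langle K x,\, y^{n+1}\rangle + G(x) + \frac{1}{\tau}\, D_{\phi}(x, x^{n}),
\qquad
D_{\phi}(x, \bar{x}) = \phi(x) - \phi(\bar{x}) - \langle \nabla\phi(\bar{x}),\, x - \bar{x}\rangle,
\]

so that the choice \(\phi(x) = \tfrac{1}{2}\|x\|^{2}\) recovers the usual proximity operator.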
“…The saddle-point problem (23) can be solved using the ergodic primal-dual algorithm [9], which leads to an iterative algorithm with totally differentiable iterations. The primal update in (13) is discussed in Example 6 and the dual update of (13) is essentially Example 7.…”
Section: Model
Confidence: 99%
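
Equations (23) and (13) refer to the citing paper. The generic saddle-point problem addressed by the ergodic algorithm, and the averaged iterates for which the O(1/N) gap estimate holds, can be written as (a standard formulation, not a quotation from either paper):

\[
\min_{x}\,\max_{y}\ \langle Kx,\, y\rangle + G(x) - F^{*}(y),
\qquad
X_N = \frac{1}{N}\sum_{n=1}^{N} x^{n},\quad
Y_N = \frac{1}{N}\sum_{n=1}^{N} y^{n}.
\]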
“…In addition, Moreau-Yosida regularization results in a strongly convex functional, which can be exploited for accelerating Algorithm 1 as in [4] via adaptive step length and extrapolation parameters. This leads to the following iteration.…”
Confidence: 99%
“…The appropriate choice for µ > 0 is related to the constant of strong convexity of F*, and in the convex case yields the optimal convergence rate of O(1/k²) for the functional values rather than the rate O(1/k) for the original version; see [3,4,21]. A similar acceleration is possible if G is strongly convex by swapping the roles of σ_i and τ_i in line 4; we will refer to both variants as Algorithm 2 in the following.…”
Confidence: 99%
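
The iteration referred to is presumably the accelerated variant of the primal-dual algorithm from [3,4]. Below is a sketch of its adaptive step-size and extrapolation schedule, with function and parameter names of my own choosing (mu is the strong-convexity constant mentioned above):

import math

def accelerated_schedule(tau0, sigma0, mu, n_iters):
    """Adaptive parameters for the accelerated primal-dual variant.
    Each iteration rescales tau and sigma so that tau_n * sigma_n
    stays constant while tau_n decays like O(1/n), which is what
    yields the O(1/k^2) rate on the functional values."""
    tau, sigma = tau0, sigma0
    for _ in range(n_iters):
        theta = 1.0 / math.sqrt(1.0 + 2.0 * mu * tau)  # extrapolation parameter
        yield tau, sigma, theta
        tau, sigma = theta * tau, sigma / theta        # update step sizes

In the full algorithm, theta multiplies the extrapolation step (x_bar = x_new + theta * (x_new - x)) while tau and sigma enter the proximal steps of the iteration above.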
“…Such problems arise for instance in the computation of the proximity operators of sums of simple functions, for which in some setting (as we illustrate in an experimental section) it might be beneficial to perform such a splitting which decomposes the problem into tiny parallel subproblems, rather than tackle the global problem by an accelerated descent or primal-dual algorithm such as [19,3,9,10].…”
Section: Introduction
Confidence: 99%
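
For reference, the proximity-operator computations mentioned here have the standard form (a textbook definition, not a quotation):

\[
\operatorname{prox}_{\tau f}(x) = \arg\min_{z}\ f(z) + \frac{1}{2\tau}\,\|z - x\|^{2},
\qquad f = \sum_{i=1}^{m} f_i,
\]

and the point of the splitting is that \(\operatorname{prox}_{\tau f}\) of the sum generally does not reduce to the individual \(\operatorname{prox}_{\tau f_i}\), whereas the decomposed subproblems can be solved in parallel.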