2016
DOI: 10.1007/s10851-016-0692-2
Acceleration of the PDHGM on Partially Strongly Convex Functions

Abstract: We propose several variants of the primal-dual method due to Chambolle and Pock. Without requiring full strong convexity of the objective functions, our methods are accelerated on subspaces with strong convexity. This yields mixed rates, O(1/N²) with respect to initialisation and O(1/N) with respect to the dual sequence and the residual part of the primal sequence. We demonstrate the efficacy of the proposed methods on image processing problems lacking strong convexity, such as total generalised variation …
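For context, the base method being accelerated is the Chambolle–Pock primal-dual (PDHGM) iteration for saddle-point problems min_x max_y ⟨Kx, y⟩ + G(x) − F*(y). The following is a minimal sketch on a toy least-squares instance of my own choosing (not an example from the paper): G = 0 and F(z) = (1/2)‖z − b‖², so F*(y) = (1/2)‖y‖² + ⟨b, y⟩ and prox_{σF*}(v) = (v − σb)/(1 + σ).

```python
import numpy as np

# Unaccelerated Chambolle-Pock / PDHGM iteration, toy instance:
#   min_x (1/2)||Kx - b||^2   i.e.  G = 0,  F(z) = (1/2)||z - b||^2.
rng = np.random.default_rng(0)
K = rng.standard_normal((8, 3))
b = rng.standard_normal(8)

L = np.linalg.norm(K, 2)      # operator norm ||K|| (largest singular value)
tau = sigma = 0.9 / L         # step sizes satisfying sigma * tau * L^2 < 1
theta = 1.0                   # extrapolation parameter (fixed in the base method)

x = np.zeros(3)
x_bar = x.copy()
y = np.zeros(8)
for _ in range(3000):
    # dual step: prox of sigma * F* applied to y + sigma * K x_bar
    y = (y + sigma * (K @ x_bar) - sigma * b) / (1 + sigma)
    # primal step: prox of tau * G is the identity since G = 0
    x_new = x - tau * (K.T @ y)
    # over-relaxation of the primal iterate
    x_bar = x_new + theta * (x_new - x)
    x = x_new

x_star = np.linalg.lstsq(K, b, rcond=None)[0]
print(np.linalg.norm(x - x_star))  # distance to the least-squares solution
```

The step-size condition στ‖K‖² < 1 is what the convergence theory of the base method requires; the paper's contribution concerns replacing the fixed θ and step sizes with accelerated rules on subspaces where strong convexity holds.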

Cited by 18 publications (19 citation statements). References 32 publications.
“…Several first-order optimisation methods have been developed for (P 0 ) without blockseparable structure, typically for both G and F convex and K linear. Recently also some non-convexity and non-linearity has been introduced [4,21,23,37]. In applications in image processing and data science, one of G or F is typically non-smooth.…”
“…Our present paper is based on [37] on the acceleration of the PDHGM when G is strongly convex only on a subspace: the deterministic two-block case m = 2 and n = 1 of (GF). Besides enabling (doubly-)stochastic updates and an arbitrary number of both primal and dual blocks, in the present work, we derive simplified step length rules through a more careful analysis.…”
“…The appropriate choice for µ > 0 is related to the constant of strong convexity of F*, and in the convex case yields the optimal convergence rate of O(1/k²) for the functional values rather than the rate O(1/k) for the original version; see [3,4,21]. A similar acceleration is possible if G is strongly convex by swapping the roles of σ i and τ i in line 4; we will refer to both variants as Algorithm 2 in the following.…”
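The acceleration rule this excerpt refers to (Chambolle and Pock's "Algorithm 2") updates θ and the step sizes each iteration from the strong-convexity constant γ. A hedged sketch of the G-strongly-convex variant on a toy fully quadratic instance of my own choosing (not the paper's partially-strongly-convex setting): min_x (1/2)‖x − a‖² + (1/2)‖Kx − b‖², where G(x) = (1/2)‖x − a‖² gives γ = 1 and the exact minimiser solves (I + KᵀK)x = a + Kᵀb.

```python
import numpy as np

# Accelerated Chambolle-Pock (Algorithm 2), G strongly convex with gamma = 1:
#   min_x (1/2)||x - a||^2 + (1/2)||Kx - b||^2
rng = np.random.default_rng(0)
K = rng.standard_normal((8, 3))
a = rng.standard_normal(3)
b = rng.standard_normal(8)

L = np.linalg.norm(K, 2)
tau = sigma = 0.9 / L
gamma = 1.0                   # strong-convexity constant of G

x = np.zeros(3)
x_bar = x.copy()
y = np.zeros(8)
for _ in range(2000):
    # dual step: prox of sigma * F* with F(z) = (1/2)||z - b||^2
    y = (y + sigma * (K @ x_bar) - sigma * b) / (1 + sigma)
    # primal step: prox of tau * G with G(x) = (1/2)||x - a||^2
    x_new = (x - tau * (K.T @ y) + tau * a) / (1 + tau)
    # acceleration: theta from gamma, shrink tau, grow sigma
    theta = 1.0 / np.sqrt(1.0 + 2.0 * gamma * tau)
    tau, sigma = theta * tau, sigma / theta
    x_bar = x_new + theta * (x_new - x)
    x = x_new

# exact solution of the quadratic problem for comparison
x_star = np.linalg.solve(np.eye(3) + K.T @ K, a + K.T @ b)
print(np.linalg.norm(x - x_star))
```

Note that the product στ is preserved by the update, so the condition στ‖K‖² < 1 continues to hold; shrinking τ while growing σ is what produces the O(1/k²) rate on the strongly convex variable. The F*-strongly-convex variant discussed in the excerpt is obtained by swapping the roles of σ and τ.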