2019
DOI: 10.1553/etna_vol51s15
Block-proximal methods with spatially adapted acceleration

Abstract: We study and develop (stochastic) primal-dual block-coordinate descent methods for convex problems based on the method due to Chambolle and Pock. Our methods have known convergence rates for the iterates and the ergodic gap of O(1/N^2) if each block is strongly convex, O(1/N) if no convexity is present, and more generally a mixed rate O(1/N^2) + O(1/N) for strongly convex blocks if only some blocks are strongly convex. Additional novelties of our methods include blockwise-adapted step lengths and accelerat…
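As a rough illustration of the base iteration the abstract builds on, here is a minimal, deterministic, unaccelerated sketch of Chambolle–Pock-style primal-dual proximal splitting with one primal step length per block. The block partition, the helper names (prox_g_blocks, prox_fstar), and the step-length handling are assumptions for illustration; this is not the paper's stochastic or accelerated method.

```python
import numpy as np

def block_pdps(K, prox_g_blocks, prox_fstar, x0, y0, tau, sigma, n_iter=500):
    """Sketch: primal-dual proximal splitting for
        min_x max_y <Kx, y> + G(x) - F*(y),  G(x) = sum_j G_j(x_j),
    with one primal step length tau[j] per block.

    K             -- (m, n) coupling matrix
    prox_g_blocks -- list of (slice, prox) pairs; prox(v, t) evaluates
                     prox_{t G_j} on the block v = x[slice]
    prox_fstar    -- prox(w, s) evaluates prox_{s F*}(w)
    tau           -- per-block primal step lengths tau_j
    sigma         -- dual step length
    """
    x, y = x0.astype(float), y0.astype(float)
    for _ in range(n_iter):
        x_old = x.copy()
        grad = K.T @ y
        # blockwise primal step: block j uses its own step length tau[j]
        for j, (blk, prox) in enumerate(prox_g_blocks):
            x[blk] = prox(x[blk] - tau[j] * grad[blk], tau[j])
        x_bar = 2.0 * x - x_old  # over-relaxation of the primal iterate
        y = prox_fstar(y + sigma * (K @ x_bar), sigma)
    return x, y
```

Convergence of such blockwise schemes requires a compatibility condition between the step lengths and K, roughly that the norm of diag(σ)^{1/2} K diag(τ)^{1/2} stays below 1; the paper's accelerated variants additionally update the step lengths blockwise using the blocks' strong-convexity constants.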

Cited by 11 publications (6 citation statements)
References 35 publications

“…It is related to forward-backward splitting [29] and averaged gradient descent [9,19], and therefore suffers from the same memory issues as averaged gradient descent. Moreover, Valkonen proposed a stochastic primal-dual algorithm that can exploit partial strong convexity of the saddle-point functional [47]. Randomized versions of the alternating direction method of multipliers are discussed, for instance, in [52,24].…”

“…RIPGN is a Gauss-Newton variant; it linearizes the nonlinear operator I(σ) of (6) at each iterate, finds an approximate solution to the associated proximal problem using a so-called block-adapted version of primal-dual proximal splitting (PDPS), and interpolates between this solution and the one computed at the previous iteration step. The PDPS algorithm was originally introduced by Chambolle and Pock (2011), and the block-adapted version was later introduced by Valkonen (2019). The RIPGN algorithm supports L^p, p ≥ 1, functionals for the nonlinear data term and any proper, convex, and lower semicontinuous (i.e., even indicator) regularization functionals.…”
Section: Image Reconstruction

“…We now describe the RIPGN algorithm (Jauhiainen et al., 2020) specialized to our ERT problem (6). Presented in Algorithm 2, it utilizes the block-adapted PDPS of Valkonen (2019), presented in Algorithm 1, to solve the linearized inner problems of (6),…”
Section: Appendix: Implementation Details

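A minimal sketch of the outer loop these two excerpts describe may help. The names here (solve_inner_pdps, relax, n_outer) are hypothetical stand-ins, not the actual interface of RIPGN (Jauhiainen et al., 2020):

```python
import numpy as np

def ripgn_sketch(I, jacobian, solve_inner_pdps, sigma0, relax=0.5, n_outer=20):
    """Sketch of a relaxed inexact proximal Gauss-Newton outer loop:
    linearize the nonlinear operator I(sigma) at the current iterate,
    approximately solve the resulting convex proximal subproblem with
    block-adapted PDPS, then interpolate between that solution and the
    previous iterate.  Stopping rules and step control are omitted.
    """
    sigma = np.asarray(sigma0, dtype=float).copy()
    for _ in range(n_outer):
        J = jacobian(sigma)   # linearization of I at the current iterate
        r = I(sigma)          # residual of the nonlinear data term
        # inner convex subproblem (linearized data term + regularizer),
        # solved approximately by block-adapted PDPS -- stubbed out here
        z = solve_inner_pdps(J, r, sigma)
        # relaxed update: interpolate toward the inner solution
        sigma = (1.0 - relax) * sigma + relax * z
    return sigma
```

The inner solver would be the block-adapted PDPS of Valkonen (2019); the interpolation step is what makes the scheme "relaxed".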
“…Remark 2. For bilinear K, (16) is the "diagonally preconditioned" method of [13], or an unaccelerated non-stochastic variant of the methods in [40]. For K affine in y, (16) differs from the methods in [32] by placing the over-relaxation in the dual step outside K; compare Remark 1.…”
Section: Block-adapted PDPS for K affine in y

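To make the excerpt's distinction concrete, in standard PDPS notation (step lengths τ, σ; a sketch, not necessarily the exact formulation of (16) or [32]), the dual step can apply the over-relaxation inside or outside a possibly nonlinear K:

```latex
y^{k+1} = \operatorname{prox}_{\sigma F^*}\!\bigl( y^k + \sigma\, K(2x^{k+1} - x^k) \bigr)
\quad\text{versus}\quad
y^{k+1} = \operatorname{prox}_{\sigma F^*}\!\bigl( y^k + \sigma\,[\,2K(x^{k+1}) - K(x^k)\,] \bigr).
```

For linear (bilinear saddle-point) K the two coincide; for more general K they differ, which is the point of the remark.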
“…This approach allows us in Section 4 to significantly simplify and better explain the original proofs and conditions of [12,37,17,18,32]. Without additional effort, it also allows us to present block-adapted methods like those in [41,40,32]. The three main ingredients that ensure convergence are: (i) A three-point identity, satisfied by Bregman divergences (Section 4.1), (ii) Ellipticity of the algorithm-defining Bregman divergences (Sections 4.2 and 4.3), and (iii) A non-smooth second-order growth condition (Sections 4.4 and 4.5).…”
Section: Introduction

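For reference, ingredient (i) is the standard three-point identity: with the Bregman divergence B_φ(x, y) := φ(x) − φ(y) − ⟨∇φ(y), x − y⟩ of a differentiable convex φ, direct expansion gives

```latex
\langle \nabla\varphi(z) - \nabla\varphi(y),\, x - z \rangle
  = B_\varphi(x, y) - B_\varphi(x, z) - B_\varphi(z, y).
```

For φ = ½‖·‖² this reduces to the familiar ⟨z − y, x − z⟩ = ½‖x − y‖² − ½‖x − z‖² − ½‖z − y‖².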