2020
DOI: 10.1007/s10107-020-01530-0

On the linear convergence rates of exchange and continuous methods for total variation minimization

Abstract: We analyze an exchange algorithm for the numerical solution of total-variation-regularized inverse problems over the space M(Ω) of Radon measures on a subset Ω of ℝ^d. Our main result states that, under some regularity conditions, the method eventually converges linearly. Additionally, we prove that continuously optimizing the amplitudes and positions of the target measure will succeed at a linear rate with a good initialization. Finally, we propose to combine the two approaches into an alternating method and discuss…
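The exchange step described in the abstract can be pictured on a small discretized example. The following is a minimal sketch, not the authors' implementation: it assumes a 1D domain [0, 1], a Gaussian forward operator, and hypothetical names (kernel, exchange_step, beta, grid). One pass fits amplitudes on the current support by least squares, evaluates the dual certificate on a fine grid from the residual, and inserts the grid point where the certificate peaks.

```python
import numpy as np

def kernel(x, t, sigma=0.05):
    # Gaussian point-spread function: measurement positions x, spike location(s) t.
    return np.exp(-(x - t) ** 2 / (2 * sigma ** 2))

def exchange_step(x, y, support, beta, grid):
    """One exchange / conditional-gradient pass (sketch):
    1. least-squares amplitudes on the current support,
    2. dual certificate eta = A^T(residual) / beta on a fine grid,
    3. insert the grid point where |eta| peaks."""
    if support:
        A = kernel(x[:, None], np.array(support)[None, :])      # (m, k) design matrix
        amps = np.linalg.lstsq(A, y, rcond=None)[0]
        residual = y - A @ amps
    else:
        residual = y.copy()
    eta = kernel(x[:, None], grid[None, :]).T @ residual / beta  # certificate on the grid
    t_new = grid[np.argmax(np.abs(eta))]
    return support + [t_new], np.abs(eta).max()

# Toy usage: two spikes, noiseless measurements, a few exchange passes.
x = np.linspace(0.0, 1.0, 200)
y = 1.0 * kernel(x, 0.3) - 0.7 * kernel(x, 0.62)
grid = np.linspace(0.0, 1.0, 1000)
support = []
for _ in range(5):
    support, cert_max = exchange_step(x, y, support, beta=0.1, grid=grid)
```

In this toy setup the support only grows from one pass to the next, which mirrors the nested-support condition A_k ⊂ A_{k+1} discussed in the citation statements below.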

Cited by 26 publications (31 citation statements) | References 27 publications

“…After this manuscript was finalized we were made aware of [28], where the authors prove linear convergence of a similar accelerated conditional gradient method for the particular case of H = C = ℝ and G(‖u‖_M) = β‖u‖_M. We note that [28] and the present manuscript were derived independently of each other and differ in certain important aspects. In particular, in contrast to our work, the authors require A_k ⊂ A_{k+1}, i.e.…”
Section: Existing Convergence Results For Conditional Gradient Methods (mentioning; confidence: 99%)

“…When Θ is continuous, the one-dimensional case can sometimes be dealt with by specific algorithms [13,15]. In higher dimensions, the classical algorithms are conditional gradient algorithms (also known as Frank-Wolfe) [11,25,8], moment methods [24,14,27] and adaptive sampling/exchange algorithms [30,29]. Often, these algorithms are complemented with non-convex updates on the particle positions, which considerably improves their behavior.…”
Section: Related Work (mentioning; confidence: 99%)

“…Given an initial condition that is close to the optimum and with the same structure (i.e. without over-parameterization), the local convergence for non-convex gradient descent is studied in [56,29]. Wasserstein gradient flows for optimization.…”
Section: Related Work (mentioning; confidence: 99%)

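As a companion to the statements above about non-convex updates on the particle positions, here is a hedged sketch (reusing the same toy Gaussian model and hypothetical names as the earlier snippet, not the cited papers' code) of plain gradient descent on the amplitudes and positions of a fixed number of spikes, i.e. the continuous/local phase that an alternating method would interleave with exchange steps.

```python
import numpy as np

def local_descent(x, y, positions, amps, beta, sigma=0.05, lr=1e-3, n_steps=500):
    """Gradient descent on spike positions t and amplitudes a for
    F(a, t) = 0.5 * ||Phi(t) a - y||^2 + beta * sum(|a|)   (sketch only).
    The |a| term uses the subgradient sign(a); no step-size tuning is attempted."""
    t = np.asarray(positions, dtype=float).copy()
    a = np.asarray(amps, dtype=float).copy()
    for _ in range(n_steps):
        Phi = np.exp(-(x[:, None] - t[None, :]) ** 2 / (2 * sigma ** 2))   # (m, k) design matrix
        r = Phi @ a - y                                                    # residual
        grad_a = Phi.T @ r + beta * np.sign(a)
        dPhi = Phi * (x[:, None] - t[None, :]) / sigma ** 2                # d(column j of Phi)/d t_j
        grad_t = a * (dPhi.T @ r)
        a -= lr * grad_a
        t -= lr * grad_t
    return t, a
```

In an alternating scheme one would run a few such descent steps after each exchange insertion, so that existing positions drift toward the true spikes while the exchange step keeps adding candidates where the certificate remains large.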
“…The conditional gradient method (CGM) has also proven to be applicable to address the BLasso problem [29] and further enhanced with nonconvex local optimization extra steps [33][34][35]. Interestingly, the CGM has been shown to be equivalent to the so-called exchange method in [36,37]. More recently, gradient-flow methods on spaces of measures have also been investigated to address the BLasso problem [38,39].…”
Section: Related Work and State Of The Art (mentioning; confidence: 99%)