2009
DOI: 10.1137/080727749

Nested Iterative Algorithms for Convex Constrained Image Recovery Problems

Abstract: The objective of this paper is to develop methods for solving image recovery problems subject to constraints on the solution. More precisely, we will be interested in problems which can be formulated as the minimization over a closed convex constraint set of the sum of two convex functions f and g, where f may be non-smooth and g is differentiable with a Lipschitz-continuous gradient. To reach this goal, we derive two types of algorithms that combine forward-backward and Douglas-Rachford iterations. The weak c…
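The structure described in the abstract (a smooth g with Lipschitz-continuous gradient plus a possibly non-smooth f) is exactly the setting of forward-backward splitting. A minimal sketch of that building block, assuming the common illustrative instance g(x) = ½‖Ax − b‖² and f = λ‖·‖₁ (these choices are for demonstration only, not the paper's specific recovery problem):

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximity operator of tau * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward(A, b, lam, n_iter=500):
    # Minimize f(x) + g(x) with f = lam * ||x||_1 (non-smooth, prox known)
    # and g = 0.5 * ||Ax - b||^2 (smooth, Lipschitz-continuous gradient).
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of grad g
    gamma = 1.0 / L                     # step size in (0, 2/L)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                            # forward (gradient) step
        x = soft_threshold(x - gamma * grad, gamma * lam)   # backward (prox) step
    return x
```

Each iteration alternates an explicit gradient step on g with an implicit proximal step on f; the paper's contribution is what to do when, unlike here, the backward step has no closed form.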

Cited by 70 publications (75 citation statements) · References 43 publications (87 reference statements)
“…Some authors have studied the use of nested algorithms to solve (1) for practical imaging problems [35][36][37]. This approach consists of embedding an iterative algorithm as an inner loop inside each iteration of another iterative method.…”
Section: Relationship to Existing Optimization Methods
confidence: 99%
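The nesting pattern described in this excerpt can be sketched generically: an outer projected-gradient (forward-backward) loop whose projection step is itself computed by an inner iterative routine. The constraint sets, the Dykstra inner solver, and the quadratic data term below are illustrative assumptions, not the algorithms of the cited papers:

```python
import numpy as np

def dykstra_projection(y, proj1, proj2, n_inner=50):
    # Inner loop: Dykstra's algorithm approximates the projection of y onto
    # C1 ∩ C2 using only the projections onto C1 and C2 individually.
    x, p, q = y.copy(), np.zeros_like(y), np.zeros_like(y)
    for _ in range(n_inner):
        w = proj1(x + p)
        p = x + p - w
        x = proj2(w + q)
        q = w + q - x
    return x

def nested_projected_gradient(A, b, proj1, proj2, n_outer=300):
    # Outer loop: gradient step on g(x) = 0.5 * ||Ax - b||^2, with the
    # backward (projection) step replaced by the inner iterative solver.
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        x = dykstra_projection(x - gamma * A.T @ (A @ x - b), proj1, proj2)
    return x

# Example constraint sets with closed-form individual projections:
proj_nonneg = lambda v: np.maximum(v, 0.0)            # C1: nonnegative orthant
def proj_halfspace(v):                                 # C2: sum(x) <= 1
    s = v.sum()
    return v if s <= 1.0 else v - (s - 1.0) / v.size
```

The convergence question studied in the paper is precisely what accuracy the inner loop must reach for the outer iteration to remain convergent.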
“…This implies that ST(z) = ST(Sz) = Sz, so that Sz is a fixed point of T′ and fix(T′) ≠ ∅. Conversely, by (35), z ∈ fix(T′) ⇒ PT(z) = Pz ⇒ T(z) ∈ zer(A). Altogether, with assumption (ii), the conditions are met to apply Lemma 4.1 to the iteration (34), after a change of variables as in the proof of Lemma 4.2, so that (z…”
Section: Proof of Theorem 3.2 for Algorithm 3.1
confidence: 99%
“…Seeking an estimate of α as a minimizer of F, it is possible to apply an algorithm such as the one proposed in [46], where the proximity operator of the sum of two convex functions is derived from a Douglas–Rachford algorithm. We present such an algorithm within the "ISTA" framework in Algorithm 4, but it can be embedded in FISTA instead.…”
Section: Algorithms for Social Sparsity
confidence: 99%
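The idea quoted here, evaluating the proximity operator of a sum of two convex functions with an inner Douglas–Rachford loop, can be sketched on a toy pair for which prox of the sum happens to have a closed form, so the inner loop can be checked against it. The choice f1 = λ‖·‖₁ and f2 = indicator of the nonnegative orthant is an illustrative assumption, not the setting of [46]:

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_sum_dr(y, lam, gamma=1.0, n_inner=500):
    # Compute prox_{f1+f2}(y), f1 = lam*||.||_1, f2 = indicator(x >= 0),
    # by Douglas-Rachford splitting applied to
    #   h1(x) = f1(x) + 0.5*||x - y||^2   (prox available in closed form),
    #   h2(x) = f2(x)                      (prox = projection onto orthant),
    # whose joint minimizer is exactly prox_{f1+f2}(y).
    z = y.copy()
    for _ in range(n_inner):
        # prox_{gamma*h1}(z) = soft_threshold((z + gamma*y)/(1+gamma),
        #                                     gamma*lam/(1+gamma))
        x = soft_threshold((z + gamma * y) / (1.0 + gamma),
                           gamma * lam / (1.0 + gamma))
        # reflect, apply prox_{gamma*h2} (projection), then average
        z = z + np.maximum(2.0 * x - z, 0.0) - x
    return x
```

For this pair, prox_{f1+f2}(y) = max(y − λ, 0) componentwise, which gives a direct check of the inner loop; in the citing paper's Algorithm 4 such a loop sits inside each ISTA iteration.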
“…Since p′ is a quadratic polynomial with minimum at −z/17, we have only to show that p′(−z/17) = 2(24/17)z² − 8ζ − 2x > 0. Plugging in (8) and (9) … The sequence (t_k)_{k∈ℕ} generated by Newton's algorithm is such that there exists k₀ ∈ ℕ\{0} such that t_{k₀} ≥ t⁻. Otherwise, (t_k)_{k∈ℕ} would be an increasing sequence which would necessarily converge to t⁻, and there would exist k₁ ∈ ℕ such that p is convex over [t_{k₁}, +∞[.…”
Section: Notation
confidence: 99%
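The excerpt above reasons about the sequence produced by Newton's algorithm applied to a polynomial. A minimal sketch of that iteration (the cubic used in the check below is an arbitrary test polynomial, not the p of the excerpt):

```python
import numpy as np

def newton_root(coeffs, t0, n_iter=100, tol=1e-12):
    # Newton iteration t_{k+1} = t_k - p(t_k) / p'(t_k), where p is given
    # by its coefficients in descending-degree order (np.polyval convention).
    dcoeffs = np.polyder(coeffs)
    t = t0
    for _ in range(n_iter):
        step = np.polyval(coeffs, t) / np.polyval(dcoeffs, t)
        t -= step
        if abs(step) < tol:
            break
    return t
```

Started close enough to a simple root on a side where p is monotone, the iterates form the kind of monotone convergent sequence the excerpt's argument relies on.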
“…Note that this method appears to be well-founded mainly for denoising problems. When a linear degradation operator H is present, a better approach consists of adopting a variational framework [8,13] where one minimizes a data fidelity term…”
Section: Introduction
confidence: 99%
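The truncated sentence points at a standard variational formulation. A typical instance reads as follows (this is an assumption for illustration, since the excerpt cuts off before the functional and [8,13] may use a different regularizer):

```latex
\hat{x} \;\in\; \operatorname*{argmin}_{x \in C}\;
  \underbrace{\tfrac{1}{2}\,\|Hx - y\|_2^2}_{\text{data fidelity}}
  \;+\; \lambda\,\Phi(x),
```

where y is the observed image, H the linear degradation operator, Φ a convex regularizer, and C the closed convex constraint set of the paper's title.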