2007
DOI: 10.1109/tip.2007.896622

The Equivalence of Half-Quadratic Minimization and the Gradient Linearization Iteration

Abstract: A popular way to restore images comprising edges is to minimize a cost-function combining a quadratic data-fidelity term and an edge-preserving (possibly nonconvex) regularization term. Mainly because of the latter term, the calculation of the solution is slow and cumbersome. Half-quadratic (HQ) minimization (multiplicative form) was pioneered by Geman and Reynolds (1992) in order to alleviate the computational task in the context of image reconstruction with non-convex regularization. By promoting the idea of…
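
The multiplicative HQ idea described in the abstract is easy to make concrete. Below is a minimal sketch for 1D denoising, assuming the Geman-McClure potential phi(t) = t^2/(delta^2 + t^2); the function name, the potential, and all parameter values are illustrative choices, not taken from the paper. Each iteration freezes the auxiliary weights b_i = phi'(t_i)/(2 t_i) and solves the resulting quadratic problem, which is exactly the linear system a gradient linearization step would solve.

```python
import numpy as np

def half_quadratic_denoise(y, beta=2.0, delta=1.0, iters=50):
    """Sketch of multiplicative half-quadratic minimization for
    J(x) = ||x - y||^2 + beta * sum_i phi((Dx)_i),
    with the nonconvex Geman-McClure potential phi(t) = t^2/(delta^2 + t^2).
    (Illustrative setup; the paper treats general edge-preserving phi.)"""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)   # first-difference operator, (n-1) x n
    x = y.copy()
    for _ in range(iters):
        t = D @ x
        # Multiplicative HQ weights: b(t) = phi'(t)/(2t) = delta^2/(delta^2 + t^2)^2
        b = delta**2 / (delta**2 + t**2) ** 2
        # Half-quadratic step: solve (I + beta * D^T diag(b) D) x = y
        x = np.linalg.solve(np.eye(n) + beta * D.T @ (b[:, None] * D), y)
    return x

# Example: restore a noisy step edge
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(50), np.ones(50)])
restored = half_quadratic_denoise(signal + 0.1 * rng.standard_normal(100))
```

Because each step solves a sparse, strictly positive definite linear system, the iteration is well defined even though phi itself is nonconvex.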

Cited by 112 publications (76 citation statements)
References 36 publications
“…In this section, we review a class of such functionals described in [26], [29] and defined in our framework as…”
Section: B. Nonquadratic Regularization
confidence: 99%
“…Consequently, the basic steepest-descent method applied to the given cost functional is equivalent to the corresponding gradient flow. Similarly, the iteratively reweighted least-squares (IRLS) technique [25] that is used for nonquadratic regularization is associated with linearized versions of the gradient of the original functional [21], [26]. This provides an interpretation that relates IRLS to lagged-diffusivity fixed-point iterations.…”
confidence: 99%
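
The equivalence this excerpt describes can be checked numerically. The sketch below uses the same illustrative Geman-McClure setup as above (names and parameters are ours, not from the cited works): it forms one IRLS / lagged-diffusivity step and the true gradient of the cost, showing that a fixed point of the former is a stationary point of the latter, since phi'(t) = 2 t b(t).

```python
import numpy as np

def irls_stationarity_check(x, y, D, beta=2.0, delta=1.0):
    """For J(x) = ||x - y||^2 + beta * sum_i phi((Dx)_i) with
    phi(t) = t^2/(delta^2 + t^2), one IRLS step lags the weights
    b(t) = phi'(t)/(2t) and zeroes the *linearized* gradient.
    Because phi'(t) = 2*t*b(t), x is a fixed point of the IRLS map
    exactly when the true gradient of J vanishes at x."""
    t = D @ x
    b = delta**2 / (delta**2 + t**2) ** 2
    # One lagged-diffusivity / IRLS step with frozen weights b
    x_next = np.linalg.solve(np.eye(len(x)) + beta * D.T @ (b[:, None] * D), y)
    # Exact gradient of J at x, using phi'(t) = 2*t*b(t)
    grad_J = 2 * (x - y) + beta * D.T @ (2 * t * b)
    return np.linalg.norm(x_next - x), np.linalg.norm(grad_J)
```

Both returned norms shrink together as the iteration converges, which is the fixed-point correspondence the excerpt refers to.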
“…This method has on the one hand the advantage of being very easy to implement, and on the other hand the disadvantage of being quite slow. To improve the convergence speed, quasi-Newton methods have been proposed [1,23,29,36,55,56,67]. Iterative methods have proved successful [16,18,35].…”
Section: Introduction
confidence: 99%
“…Yet, the practical efficiency of the overall algorithm hinges on the choices of the scaling matrices $H_\theta^{(k,k)}$ and $H_z^{(k+1,k)}$ and the subroutines for solving (13) and (15). In this work, our choices for $H_\theta^{(k,k)}$ and $H_z^{(k+1,k)}$ (see formulas (18) and (21) below) will be structured Hessian approximations motivated from the iteratively reweighted least-squares (IRLS) method [13,14].…”
confidence: 99%
“…In this work, our choices for $H_\theta^{(k,k)}$ and $H_z^{(k+1,k)}$ (see formulas (18) and (21) below) will be structured Hessian approximations motivated from the iteratively reweighted least-squares (IRLS) method [13,14]. The solution of the resulting subproblem can be interpreted as a regularized Newton step; see [15,16,17].…”
confidence: 99%
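
Without the cited paper's formulas (18) and (21) at hand, the idea can still be sketched generically: build a positive definite scaling matrix from the IRLS weights (dropping the indefinite phi'' term of the exact Hessian), damp it, and take a Newton-type step. Everything below, including the damping parameter mu, is a hypothetical illustration in the same setup as the earlier sketches, not the cited authors' construction.

```python
import numpy as np

def regularized_newton_step(x, y, D, beta=2.0, delta=1.0, mu=1e-3):
    """One regularized Newton step with an IRLS-motivated structured
    Hessian approximation for J(x) = ||x - y||^2 + beta * sum_i phi((Dx)_i):
    replace the exact Hessian 2I + beta * D^T diag(phi''(Dx)) D by the
    positive semidefinite surrogate H = 2I + 2*beta * D^T diag(b(Dx)) D,
    where b(t) = phi'(t)/(2t), then damp with mu*I."""
    t = D @ x
    b = delta**2 / (delta**2 + t**2) ** 2
    grad_J = 2 * (x - y) + beta * D.T @ (2 * t * b)
    H = 2 * np.eye(len(x)) + 2 * beta * D.T @ (b[:, None] * D)
    return x - np.linalg.solve(H + mu * np.eye(len(x)), grad_J)
```

The surrogate H agrees with the exact Hessian wherever phi is locally quadratic and stays positive semidefinite everywhere, which is what makes the damped Newton step well posed.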