2009
DOI: 10.1109/tip.2008.2008420
Efficient Minimization Method for a Generalized Total Variation Functional

Abstract: Replacing the ℓ2 data fidelity term of the standard Total Variation (TV) functional with an ℓ1 data fidelity term has been found to offer a number of theoretical and practical benefits. Efficient algorithms for minimizing this ℓ1-TV functional have only recently begun to be developed, the fastest of which exploit graph representations, and are restricted to the denoising problem. We describe an alternative approach that minimizes a generalized TV functional, including both ℓ2-TV and ℓ1-TV as special …
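A common form of such a generalized TV functional pairs an ℓp data fidelity term with an ℓq penalty on the gradient magnitude. The sketch below is illustrative only: the symbols A, b, λ and the discrete derivative operators D_h, D_v are assumptions, not notation taken from this abstract.

```latex
T(\mathbf{u}) \;=\; \frac{1}{p}\,\bigl\| A\mathbf{u} - \mathbf{b} \bigr\|_{p}^{p}
\;+\; \lambda\,\Bigl\| \sqrt{(D_h \mathbf{u})^{2} + (D_v \mathbf{u})^{2}} \Bigr\|_{q}^{q}
```

Under this parameterization, p = 2, q = 1 recovers the standard ℓ2-TV functional, while p = 1, q = 1 gives ℓ1-TV.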

Cited by 210 publications (141 citation statements)
References 44 publications
“…19. We utilize SPG for the optimization of energy functions since we have previously experienced excellent performance of SPG on restoration of images degraded by Gaussian noise.…”
Section: Optimization
confidence: 99%
“…To enhance the reconstruction efficiency, the graph cuts based methods were organized to conserve computation time due to the gradient-free optimization. 15,16 Although the reconstruction approaches usually work well in some specific and highly controlled situations, no efforts need to be spared in investigating more general cases. 17 In this paper, a parallel iterative shrinkage algorithm has been demonstrated for the dual-modality tomography, i.e., the hybrid optical/μCT imaging.…”
Section: DMT Not Only Obtains the Functional Changes Of
confidence: 99%
“…The penalty term is the composition of a linear operator and the ℓ1 norm. Although the ℓ1 norm stands out as the convex penalty that most effectively induces sparsity [27], non-convex penalties can lead to more accurate estimation of the underlying signal [37], [38], [40], [44], [50].…”
Section: Introduction
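The sparsity-inducing behavior of the ℓ1 penalty quoted above is tied to its proximal operator, the soft-threshold, which shrinks every coefficient toward zero and sets small ones exactly to zero. A minimal NumPy sketch (the function name and input values are illustrative, not taken from the cited papers):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1: shrink each entry toward zero by lam,
    clamping entries with magnitude <= lam to exactly zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([3.0, -0.5, 1.2, -2.0])
print(soft_threshold(x, 1.0))  # entries with |x| <= 1 become exactly zero
```

This exact zeroing is what makes ℓ1 the convex penalty of choice for sparsity; non-convex penalties reduce the shrinkage bias on large coefficients at the cost of convexity.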