2013
DOI: 10.1109/tip.2013.2237919
Hessian Schatten-Norm Regularization for Linear Inverse Problems

Abstract: We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they av…
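
To make the regularizer described in the abstract concrete, here is a minimal NumPy sketch (my own illustration, not the authors' code) that evaluates a Hessian Schatten p-norm penalty of a 2D image: it forms the 2x2 Hessian at every pixel, takes the l_p-norm of its singular values, and sums over the image. The function name, the finite-difference discretization, and the boundary handling are assumptions made for illustration only.

```python
import numpy as np

def hessian_schatten_norm(u, p=1):
    """Sum over pixels of the Schatten p-norm of the discrete 2x2 Hessian of u."""
    # Second-order derivatives via repeated central differences (illustrative only).
    ux, uy = np.gradient(u)
    uxx, uxy = np.gradient(ux)
    uyx, uyy = np.gradient(uy)
    # Per-pixel 2x2 Hessian matrices, shape (rows, cols, 2, 2).
    hess = np.stack([np.stack([uxx, uxy], axis=-1),
                     np.stack([uyx, uyy], axis=-1)], axis=-2)
    # Singular values of each 2x2 block, then their l_p norm per pixel.
    s = np.linalg.svd(hess, compute_uv=False)        # shape (rows, cols, 2)
    if np.isinf(p):
        per_pixel = s.max(axis=-1)                   # S_inf: largest singular value
    else:
        per_pixel = (s ** p).sum(axis=-1) ** (1.0 / p)
    return per_pixel.sum()

# Example: p = 1 gives the nuclear norm of the Hessian at each pixel.
img = np.random.rand(64, 64)
print(hessian_schatten_norm(img, p=1))
```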


Cited by 155 publications (193 citation statements)
References 46 publications
“…We hypothesize that this is because the TV regularization is relatively less effective on the rat brain because it is more complex (less piecewise constant) than the other datasets and because (compared to the Shepp3D dataset) it has a small amount of measurement noise. We believe these results could be improved by taking measurements with a smaller pixel size (thereby making the reconstruction more piecewise constant) or by trying another regularizer, e.g., the Hessian Schatten-norm [25], wavelets [40] …”
Section: Phantom Dataset Comparison
confidence: 99%
“…For example, total variation (TV) regularization applied in a slice-by-slice way promotes each slice to be piecewise constant, while TV applied in 3D promotes the reconstruction volume to be piecewise constant. Or, the Hessian Schatten-norm regularization [25] can be used to penalize second-order derivatives. This should enable the 3D reconstruction to achieve a more favorable trade-off between dose reduction and reconstruction quality.…”
Section: Introduction
confidence: 99%
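The distinction drawn in the quote above, between slice-by-slice 2D TV and fully 3D TV, can be made concrete with a small sketch (my own illustration, not code from the cited work). Isotropic TV with forward differences is assumed, and the function names tv_2d and tv_3d are hypothetical.

```python
import numpy as np

def tv_2d(slice_):
    # Isotropic TV of a single 2D slice (forward differences, replicated boundary).
    gx = np.diff(slice_, axis=0, append=slice_[-1:, :])
    gy = np.diff(slice_, axis=1, append=slice_[:, -1:])
    return np.sqrt(gx**2 + gy**2).sum()

def tv_3d(vol):
    # Isotropic TV of the whole volume: also penalizes variation across slices.
    gz = np.diff(vol, axis=0, append=vol[-1:, :, :])
    gx = np.diff(vol, axis=1, append=vol[:, -1:, :])
    gy = np.diff(vol, axis=2, append=vol[:, :, -1:])
    return np.sqrt(gx**2 + gy**2 + gz**2).sum()

vol = np.random.rand(8, 32, 32)
print(sum(tv_2d(s) for s in vol))   # in-plane variation only
print(tv_3d(vol))                   # in-plane plus across-slice variation
```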
“…where ‖·‖_{S_∞} is the ℓ_∞-norm of the singular values of the corresponding matrix (for more details, we refer the reader to [11]). …”
Section: Image Reconstruction
confidence: 99%
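As a quick illustration of the ‖·‖_{S_∞} norm mentioned in the quote, the following one-liner (my own sketch, assuming a small dense matrix) returns the ℓ_∞-norm of the singular values, i.e., the largest singular value:

```python
import numpy as np

def schatten_inf_norm(A):
    # S_inf norm: the largest singular value (operator 2-norm) of A.
    return np.linalg.svd(A, compute_uv=False).max()

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(schatten_inf_norm(A))            # matches np.linalg.norm(A, 2)
```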
“…For our regularization scheme λ_max(RR^T) ≤ γ, where γ = 8 for the TV regularization for two-dimensional problems, and its value is 64 for the HS regularization as computed in [11].…”
Section: Parameter Setting
confidence: 99%
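The bounds quoted above (γ = 8 for the 2D TV operator, 64 for the Hessian operator) can be checked numerically. The sketch below is my own rough verification under an assumed forward-difference discretization, not the derivation from [11]: it builds sparse first- and second-difference operators and estimates λ_max(RR^T) as the squared largest singular value of R.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 32                                              # small n x n image grid
I = sp.identity(n)
D = sp.diags([-1.0, 1.0], [0, 1], shape=(n, n))     # 1D forward-difference matrix

# First-order (TV) operator: stacked x- and y-differences.
Dx, Dy = sp.kron(I, D), sp.kron(D, I)
R_tv = sp.vstack([Dx, Dy])

# Second-order (Hessian) operator: stacked second differences.
R_hs = sp.vstack([Dx @ Dx, Dx @ Dy, Dy @ Dx, Dy @ Dy])

for name, R, gamma in [("TV", R_tv, 8), ("Hessian Schatten", R_hs, 64)]:
    sigma_max = spla.svds(R, k=1, return_singular_vectors=False)[0]
    print(f"{name}: lambda_max(R R^T) ~ {sigma_max**2:.2f}  (bound gamma = {gamma})")
```

The estimates approach the stated bounds from below as the grid size grows, which is consistent with γ being a worst-case constant.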
“…High-order variational models can thus be applied to remedy these side effects. Among these is the second-order total variation (SOTV) model [1,2,22,26,27]. Unlike other high-order variational models, such as the Gaussian curvature [28], mean curvature [23,29], and Euler's elastica [21], the SOTV is a convex high-order extension of the FOTV, which guarantees a global solution.…”
Section: Introduction
confidence: 99%
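For reference, the first-order TV and the convex second-order extension discussed in the quote are commonly written as follows; this is a standard form stated in my own notation and may differ in detail from the definitions used in the cited works.

```latex
% First-order total variation (FOTV) and a convex second-order extension (SOTV);
% \nabla^{2} u is the Hessian and \lVert\cdot\rVert_{F} its Frobenius (Schatten-2) norm.
\mathrm{FOTV}(u) = \int_{\Omega} \lVert \nabla u(x) \rVert_{2} \,\mathrm{d}x ,
\qquad
\mathrm{SOTV}(u) = \int_{\Omega} \lVert \nabla^{2} u(x) \rVert_{F} \,\mathrm{d}x .
```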