2018
DOI: 10.48550/arxiv.1809.06591
Preprint

Enhanced 3DTV Regularization and Its Applications on Hyper-spectral Image Denoising and Compressed Sensing

Jiangjun Peng,
Qi Xie,
Qian Zhao
et al.

Abstract: The 3-D total variation (3DTV) is a powerful regularization term that encodes the local-smoothness prior structure underlying a hyper-spectral image (HSI) for general HSI processing tasks. The term is calculated by assuming identical and independent sparsity structures on all bands of the gradient maps computed along the spatial and spectral modes of the HSI. This assumption, however, often deviates largely from real cases, where the gradient maps generally have different yet correlated sparsity structures across al…
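The classic 3DTV term the abstract contrasts against can be sketched in a few lines of NumPy: it sums the l1 norms of the gradient maps taken along the two spatial modes and the spectral mode. This is a minimal illustration, not the paper's E-3DTV; the per-mode weights are a hypothetical generalization.

```python
import numpy as np

def tv3d(hsi, weights=(1.0, 1.0, 1.0)):
    """Anisotropic 3-D total variation of a hyperspectral cube.

    hsi: array of shape (height, width, bands).
    weights: hypothetical per-mode weights (not from the paper).
    Classic 3DTV sums the l1 norms of the gradient maps along each
    of the three modes, i.e. it assumes an identical, independent
    sparsity structure on every band of each gradient map.
    """
    gx = np.diff(hsi, axis=0)  # vertical spatial gradient map
    gy = np.diff(hsi, axis=1)  # horizontal spatial gradient map
    gz = np.diff(hsi, axis=2)  # spectral gradient map
    return (weights[0] * np.abs(gx).sum()
            + weights[1] * np.abs(gy).sum()
            + weights[2] * np.abs(gz).sum())

# A spatially and spectrally constant cube has zero 3DTV.
flat = np.ones((8, 8, 4))
print(tv3d(flat))  # → 0.0
```

E-3DTV, as the abstract describes, instead imposes a shared, correlated sparsity structure on these gradient maps rather than penalizing each band independently.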

Cited by 2 publications (4 citation statements)
References 35 publications
“…In addition, the runtime (seconds) is considered as an index of computational complexity. Specifically, five state-of-the-art methods are employed as baselines for the performance comparison, including the deep-learning-based HySuDeep (subspace representation and deep CNN image prior) [15], the tensor-decomposition-based LRTDGS (weighted group sparsity-regularized low-rank tensor decomposition) [24], the matrix-factorization-based E-3DTV (enhanced 3-D total variation) [36], SNLRSF (subspace-based nonlocal low-rank and sparse factorization method) [33], and F-LRNMF (framelet-regularized low-rank nonnegative matrix factorization method) [34].…”
Section: Experimental Simulations and Performance Analysis
confidence: 99%
“…In addition, total variation regularization is an efficient way to exploit the spatial-spectral local smoothness of an HSI [35]. For example, Peng et al. [36] first introduced an enhanced 3-D total variation into an HSI denoising model. Although spectral and abundance signatures each have an explicit physical meaning, reshaping the hyperspectral data cube into matrix form causes a certain loss of structural information [37].…”
Section: Introduction
confidence: 99%
“…We perform two kinds of compressed measurements in the encoder stage to evaluate the efficiency of the proposed NGmeet reconstruction method. First, as in [16], [40], we assume that the original HSI is available, and the random permuted Hadamard transform operator is adopted to compress (encode) the image. Second, as introduced in [51], [57], a compressive sensor is used to obtain a coded compressed image with a pre-designed measurement operator, which is called compressive HSI imaging.…”
Section: Compressed HSI Reconstruction
confidence: 99%
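The "random permuted Hadamard transform" measurement mentioned in the statement above can be sketched roughly as follows. This is an illustrative Python sketch only, not the exact operator of [16], [40]; the function name, the row subsampling, and the permutation details are assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)

def permuted_hadamard_measure(x, m, rng=rng):
    """Compress a signal x (length n, n a power of two) to m samples.

    Sketch of a 'random permuted Hadamard'-style operator: randomly
    permute the entries of x, apply an orthonormal Hadamard transform,
    then keep a random subset of m rows. Details are illustrative,
    not taken from the cited papers.
    """
    n = x.size
    H = hadamard(n) / np.sqrt(n)                 # orthonormal Hadamard matrix
    perm = rng.permutation(n)                    # random entry permutation
    rows = rng.choice(n, size=m, replace=False)  # random row subsample
    return H[rows] @ x[perm]

x = np.arange(16, dtype=float)
y = permuted_hadamard_measure(x, m=6)
print(y.shape)  # → (6,)
```

Because the permutation and the normalized Hadamard matrix are both orthogonal, the full (unsubsampled) transform preserves the signal's energy; dropping rows is what makes the measurement compressive.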
“…where h* is the adjoint of h [40], [51]. The optimization of (19) can be efficiently solved by the preconditioned conjugate gradient method [40], [51].…”
Section: Compressed HSI Reconstruction
confidence: 99%