2020
DOI: 10.1109/tgrs.2019.2946050
Nonlocal Tensor-Ring Decomposition for Hyperspectral Image Denoising

Cited by 86 publications (40 citation statements)
References 57 publications
“…Solar radiation passes through the atmosphere to reach the earth's surface, and is reflected to the sky after interacting with the surface targets. After entering the atmosphere, it is collected by the optical system of the remote sensing imaging sensor, and then transmitted to the array of the imaging sensor to convert the light signal into an electrical signal [17]. A series of electronic processing forms a digital image, and the satellite's downlink data channel transmits the image to the ground application system.…”
Section: A Concept and Characteristics of Remote Sensing Technology
confidence: 99%
“…However, while both TRPCA and TNN can achieve appealing performance in image inpainting, they suffer from the per-iteration computational cost due to the necessity of singular value decomposition. To improve per-iteration efficiency and avoid out-of-memory errors, several low-rank tensor factorization schemes have been introduced to describe the HSI low-rank property, including Tucker [15,16], tensor train [17], and tensor ring [10,18], etc.…”
Section: Introduction
confidence: 99%
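The tensor-ring format mentioned in this excerpt represents an N-th order tensor by a cyclic chain of third-order cores, so the full tensor never needs to be stored during optimization. A minimal NumPy sketch of reconstructing a full tensor from TR cores (the function name and shapes here are illustrative, not the cited papers' implementation):

```python
import numpy as np

def tr_to_full(cores):
    """Reconstruct a full tensor from a list of tensor-ring (TR) cores.

    Each core G_k has shape (r_k, n_k, r_{k+1}), with r_N == r_0 so the
    chain closes into a ring: X[i1,...,iN] = trace(G1[:,i1,:] @ ... @ GN[:,iN,:]).
    """
    full = cores[0]  # shape (r0, n0, r1)
    for core in cores[1:]:
        # contract the trailing rank index with the next core's leading rank index
        full = np.tensordot(full, core, axes=([-1], [0]))
    # full now has shape (r0, n0, ..., n_{N-1}, r0); close the ring with a trace
    return np.trace(full, axis1=0, axis2=-1)

rng = np.random.default_rng(0)
r, dims = 3, (4, 5, 6)  # small TR rank and mode sizes, chosen for illustration
cores = [rng.standard_normal((r, n, r)) for n in dims]
X = tr_to_full(cores)
print(X.shape)  # (4, 5, 6)
```

Note the storage saving the excerpt alludes to: the cores hold r² · Σnₖ entries versus Πnₖ for the full tensor, and no singular value decomposition of the full data is required.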
“…Unfortunately, unlike the matrix case, the characterization of tensor low-rankness is still an open problem; representative works on this topic can be considered as higher-order generalizations of matrix rank from different perspectives [26]. For example, CANDECOMP/PARAFAC (CP) rank [6,32,63] minimizes the sparsity of tensors over bases of rank-1 outer products; Tucker rank [31,45,48] focuses on the low-rankness of unfolding matrices along different modes; tubal rank [15,25,30,60] promotes the tubal sparsity under the tensor singular value decomposition (t-SVD), by treating third-order tensors as linear operators on matrices; tensor train (TT) rank [2,38] and its extension tensor ring (TR) rank [13,21] capture the global correlation among tensor entries using matrix product states. Considering that each type of tensor rank encodes a specific correlated data structure, recent studies attempt to integrate the insights delivered by different low-rank tensor formats, such as joint CP rank and Tucker rank minimization [33], weighted low-rank tensor recovery (WLRTR) [10], and Kronecker-basis-representation (KBR)based tensor low-rankness measure [54].…”
Section: Introduction
confidence: 99%
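Of the rank notions surveyed in this excerpt, Tucker rank is the easiest to compute directly: it is the tuple of matrix ranks of the mode-n unfoldings. A small sketch, assuming a synthetic tensor built from a 2×2×2 core (sizes are arbitrary, chosen only to make the known rank visible):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Build a tensor with known Tucker rank (2, 2, 2) from a small core and factors
rng = np.random.default_rng(1)
core = rng.standard_normal((2, 2, 2))
U = [rng.standard_normal((n, 2)) for n in (6, 7, 8)]
T = np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])

# Tucker rank = rank of each mode unfolding
tucker_rank = tuple(np.linalg.matrix_rank(unfold(T, m)) for m in range(3))
print(tucker_rank)  # (2, 2, 2)
```

CP, tubal, TT, and TR ranks differ precisely in which contraction pattern replaces these unfoldings, which is why each encodes a different correlated structure.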
“…These fixed transforms, however, can not generalize well to complex features, due to the lack of flexibility in constructing the basis functions. To remedy this weakness, dictionary learning methods train redundant dictionaries consisting of signal-dependent atoms for better adaptivity to real-life images [27,28,65]; self-similarity-based methods collaboratively encode a group of similar image patches for an enhanced sparse representation [8,13,24]. Although the above handcrafted methods are of good interpretability and solid theoretical support, they generally suffer from several drawbacks such as limited representation ability and high computational cost.…”
Section: Introduction
confidence: 99%
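The collaborative encoding of similar patches that this excerpt describes can be illustrated with a low-rank surrogate: stack similar patches as rows of a matrix and truncate its SVD, so the shared structure survives while independent noise is suppressed. A toy sketch with synthetic rank-1 "patches" (the function and data here are illustrative, not any cited method):

```python
import numpy as np

def denoise_patch_group(patches, rank=1):
    """Collaboratively denoise a group of similar patches via truncated SVD.

    `patches` is a (num_patches, patch_size) matrix of vectorized similar
    patches; keeping only the top `rank` singular components exploits the
    redundancy among them.
    """
    U, s, Vt = np.linalg.svd(patches, full_matrices=False)
    s[rank:] = 0  # discard components dominated by noise
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(2)
clean = np.outer(np.ones(20), np.linspace(0, 1, 25))  # 20 identical patches (rank 1)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = denoise_patch_group(noisy, rank=1)
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))  # True
```

Dictionary-learning methods replace the SVD basis with learned, possibly redundant atoms, trading this closed-form step for better adaptivity to real images, at the computational cost the excerpt notes.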