2018
DOI: 10.1109/lsp.2018.2819892
Tensor Completion via Generalized Tensor Tubal Rank Minimization Using General Unfolding

Cited by 20 publications (26 citation statements); references 27 publications.
“…The iGTKD approach proposed in this article, on the other hand, does not have such problems. Furthermore, the definition of the improved tensor multi-rank, which is not given in [34], is clearly defined in this article. It is worth noting that the CGTNN and ACGTNN approaches in [34] minimise the improved tensor tubal-rank, while the tensor completion approach to be proposed in the next section in fact minimises the improved tensor multi-rank.…”
Section: Definition 262
confidence: 99%
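The quoted passage hinges on the distinction between minimising the tubal-rank and minimising the multi-rank. Under the standard t-SVD convention (the "improved" variants above are specific to the cited papers and are not reproduced here), the multi-rank is the vector of matrix ranks of the Fourier-domain frontal slices, and the tubal rank is its maximum. A minimal numpy sketch of the plain definitions:

```python
import numpy as np

def multi_rank(X, tol=1e-10):
    """Multi-rank under the standard t-SVD convention: the vector of
    matrix ranks of the frontal slices after an FFT along the tubes.
    (The 'improved' variants in the quoted passage are defined in the
    cited papers; this sketch uses the plain definition.)"""
    Xf = np.fft.fft(X, axis=2)
    return [int(np.linalg.matrix_rank(Xf[:, :, k], tol=tol))
            for k in range(X.shape[2])]

def tubal_rank(X, tol=1e-10):
    """Tubal rank: the maximum entry of the multi-rank vector."""
    return max(multi_rank(X, tol))

# Outer-product tensor: every Fourier-domain frontal slice is a scalar
# multiple of the same rank-1 matrix, so every multi-rank entry is 1.
a, b, c = np.ones(4), np.ones(3), np.array([1.0, 2.0, 4.0])
T = np.einsum('i,j,k->ijk', a, b, c)
print(multi_rank(T))  # [1, 1, 1]
print(tubal_rank(T))  # 1
```

Minimising the multi-rank constrains every Fourier slice individually, whereas minimising the tubal rank only bounds the worst slice — which is the distinction the quoted passage draws.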
“…The data X can also be sampled randomly under a predefined observation percentage to obtain the data tensor M_Ω with missing elements, and the tensor recovery methods can be applied to recover the original data as X̂. Finally, the recovered image can be calculated as Î. The peak SNR (PSNR) and structural similarity (SSIM) [49] are adopted as evaluation metrics. Here we follow and modify the idea in [33,34] to set the patterns transferring the original 2-D M_1 × M_2 data to a 3-D K × J × L structure under the following rules: L corresponds to the most detailed information and is set to a small value, while K × J should be as square as possible or follow a shape similar to M_1 × M_2.…”
Section: SAR Imaging
confidence: 99%
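The reshaping rule and PSNR metric in the quoted passage can be sketched as follows. The helper `reshape_2d_to_3d` and the plain row-major `reshape` are illustrative assumptions — the exact transfer pattern of [33,34] is not specified in the quote — while `psnr` follows the standard definition:

```python
import numpy as np

def reshape_2d_to_3d(M, K, J, L):
    """Pack an M1 x M2 matrix into a K x J x L tensor (illustrative:
    a plain row-major reshape; the quoted rule only asks that L be
    small and K x J be near-square or shaped like M1 x M2)."""
    assert M.size == K * J * L, "element counts must match"
    return M.reshape(K, J, L)

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio between reference and recovered images."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Example: a 16 x 12 "image" packed into an 8 x 6 x 4 tensor (L = 4 small,
# K x J = 8 x 6 roughly square).
img = np.arange(192.0).reshape(16, 12)
T = reshape_2d_to_3d(img, 8, 6, 4)
print(T.shape)          # (8, 6, 4)
print(psnr(img, img))   # inf (identical images)
```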
“…Recently, the low-tubal-rank model [16,25] has shown better performance than traditional tensor low-rank models in many tensor recovery tasks such as image/video inpainting/denoising/sensing [2,25,26], moving object detection [27], multi-view learning [28], seismic data completion [29], WiFi fingerprint [30], MRI imaging [16], point cloud data inpainting [31], and so on. The tubal rank is a new complexity measure of a tensor defined through the framework of tensor singular value decomposition (t-SVD) [32,33]. At the core of existing low-tubal-rank models is the tubal nuclear norm (TNN), which is a convex surrogate for the tubal rank.…”
Section: Introduction
confidence: 99%
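The TNN mentioned in the quoted passage admits a compact sketch under one common convention — the sum of the nuclear norms of the Fourier-domain frontal slices, divided by n3; scaling conventions vary across papers, so treat this as an illustrative assumption:

```python
import numpy as np

def tubal_nuclear_norm(X):
    """Tubal nuclear norm (TNN) under a common convention: sum of the
    nuclear norms of the Fourier-domain frontal slices, divided by n3.
    Different papers scale this differently; this is one standard choice."""
    n3 = X.shape[2]
    Xf = np.fft.fft(X, axis=2)
    return sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
               for k in range(n3)) / n3

# A tensor whose only nonzero frontal slice sits at tube index 0: the FFT
# copies that slice to every Fourier slice, so TNN equals the slice's own
# nuclear norm.
A = np.diag([3.0, 2.0, 1.0])        # nuclear norm 3 + 2 + 1 = 6
T = np.zeros((3, 3, 4))
T[:, :, 0] = A
print(tubal_nuclear_norm(T))        # ≈ 6.0
```

Because TNN is a convex function of the tensor, it can replace the (non-convex) tubal rank in completion objectives, which is exactly the role the quoted passage assigns to it.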
“…Recently, a novel low-rank tensor model called the low-tubal-rank model was proposed [22,30]. At its core, it models the 3-D data as a tensor of low tubal-rank [31], which is defined through a new tensor singular value decomposition (t-SVD) [1,32]. It has been successfully used to model multi-way real-world data such as color images [6], videos [33], seismic data [34], WiFi fingerprint [35], MRI imaging [22], traffic volume data [36], etc.…”
Section: Introduction
confidence: 99%