2020
DOI: 10.1109/tcsvt.2019.2901311
Low CP Rank and Tucker Rank Tensor Completion for Estimating Missing Components in Image Data

Cited by 79 publications (30 citation statements)
References 57 publications
“…In this section, we compare the proposed method to other state-of-the-art methods on three visual data sets: (i) a hyperspectral image (HSI) of size 200 × 200 × 80, which records an area of urban landscape; (ii) the Train-video, which consists of 80 color frames of size 72 × 128 × 3, represented by a tensor of size 72 × 128 × 3 × 80; (iii) the AT&T ORL face data set, which consists of 10 different images of size 32 × 32 for each of 40 distinct subjects, represented by a tensor of size 32 × 32 × 10 × 40. Since reshaping the visual data into a high-order tensor significantly improves the performance of the TT/TR-based methods (i.e.…”
Section: Visual Data Inpainting
confidence: 99%
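The excerpt above describes stacking video frames into a 4th-order tensor and folding visual data into a higher-order tensor for TT/TR-based methods. A minimal NumPy sketch of both steps, using the quoted Train-video dimensions; the particular factorization of the spatial modes (72 = 8 × 9, 128 = 8 × 16) is an illustrative assumption, not necessarily the folding used in the cited work.

```python
import numpy as np

# Hypothetical stand-in for the Train-video data: 80 RGB frames of size 72x128x3.
frames = [np.random.rand(72, 128, 3) for _ in range(80)]

# Stack the frames into the 4th-order tensor of size 72x128x3x80 mentioned in the excerpt.
video_tensor = np.stack(frames, axis=-1)
print(video_tensor.shape)  # (72, 128, 3, 80)

# TT/TR-based methods are reported to benefit from reshaping into a higher-order
# tensor; one assumed folding splits each spatial mode into smaller factors.
high_order = video_tensor.reshape(8, 9, 8, 16, 3, 80)  # 72 = 8*9, 128 = 8*16
print(high_order.shape)  # (8, 9, 8, 16, 3, 80)
```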
“…ADMM and APG are the two most popular first-order approaches for solving problem (3). ADMM separates the objective function via additionally introduced variables, which may result in tedious parameter setting and slow convergence.…”
Section: B. General Solution With APG
confidence: 99%
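The excerpt contrasts ADMM with APG as first-order solvers. Since problem (3) of the citing paper is not reproduced here, the following is only a generic APG (FISTA-style) sketch for a composite objective 0.5‖Ax − b‖² + λ‖x‖₁, with soft-thresholding as the proximal step; the objective, data, and parameters are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def apg_l1(A, b, lam, n_iter=200):
    """Generic APG (FISTA-style) loop for 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)             # gradient of the smooth part at y
        x_new = soft_threshold(y - grad / L, lam / L)    # proximal (shrinkage) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# Tiny usage example with random data (purely illustrative).
A = np.random.rand(30, 50)
x_true = np.zeros(50); x_true[:5] = 1.0
b = A @ x_true
x_hat = apg_l1(A, b, lam=0.01)
```

Unlike ADMM, this scheme has no auxiliary splitting variables or penalty parameter to tune, which is the point the excerpt makes.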
“…On that basis, finding a low-rank solution to an optimization problem, e.g., matrix completion (MC) or subspace clustering (SC), has attracted a great deal of attention over the last decade. Concrete applications, where low-rank modeling of X is relevant, can be found in scene reconstruction [2], video inpainting [3], background subtraction [4], or video matting [5], among many others.…”
Section: Introduction
confidence: 99%
“…For example, CANDECOMP/PARAFAC (CP) rank [6,32,63] minimizes the sparsity of tensors over bases of rank-1 outer products; Tucker rank [31,45,48] focuses on the low-rankness of unfolding matrices along different modes; tubal rank [15,25,30,60] promotes the tubal sparsity under the tensor singular value decomposition (t-SVD), by treating third-order tensors as linear operators on matrices; tensor train (TT) rank [2,38] and its extension tensor ring (TR) rank [13,21] capture the global correlation among tensor entries using matrix product states. Considering that each type of tensor rank encodes a specific correlated data structure, recent studies attempt to integrate the insights delivered by different low-rank tensor formats, such as joint CP rank and Tucker rank minimization [33], weighted low-rank tensor recovery (WLRTR) [10], and the Kronecker-basis-representation (KBR)-based tensor low-rankness measure [54].…”
Section: Introduction
confidence: 99%
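The excerpt lists several tensor rank notions. As one concrete instance, the Tucker (multilinear) rank it mentions is the tuple of ranks of the mode-n unfolding matrices. A small NumPy sketch with hypothetical dimensions and a synthetic rank-(2, 2, 2) tensor; the sizes and construction are assumptions for illustration only.

```python
import numpy as np

def mode_unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_rank(T, tol=1e-8):
    """Tucker (multilinear) rank: tuple of ranks of the mode-n unfoldings."""
    return tuple(int(np.linalg.matrix_rank(mode_unfold(T, m), tol=tol))
                 for m in range(T.ndim))

# Synthetic rank-(2, 2, 2) Tucker tensor with hypothetical sizes 10 x 12 x 14.
G = np.random.rand(2, 2, 2)                       # core tensor
U = [np.random.rand(n, 2) for n in (10, 12, 14)]  # factor matrices
T = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
print(tucker_rank(T))                             # typically (2, 2, 2)
```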