2021
DOI: 10.48550/arXiv.2108.12163
Preprint

Provable Tensor-Train Format Tensor Completion by Riemannian Optimization

Abstract: The tensor train (TT) format enjoys appealing advantages in handling structural high-order tensors. The recent decade has witnessed the wide applications of TT-format tensors from diverse disciplines, among which tensor completion has drawn considerable attention. Numerous fast algorithms, including the Riemannian gradient descent (RGrad) algorithm, have been proposed for the TT-format tensor completion. However, the theoretical guarantees of these algorithms are largely missing or sub-optimal, partly due to t…

Cited by 3 publications (5 citation statements). References 70 publications.

“…Recently, it was discovered that optimizing (2) directly on the low-rank manifold M_r, the so-called Riemannian approach, enjoys the fast computational speed of factorization-based approaches and, meanwhile, converges linearly regardless of the condition number of M*. See, e.g., Cai et al. (2021a) and Cai et al. (2021b) for the convergence of Riemannian gradient descent (RGrad) algorithms in minimizing strongly convex and smooth functions with tensor-related applications. We note that RGrad is similar to projected gradient descent (PGD, Chen and Wainwright (2015)) except that RGrad utilizes the Riemannian gradient while PGD takes the vanilla one.…”
Section: Robust Loss and Riemannian Sub-gradient Descent (mentioning)
confidence: 99%
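
To make the RGrad-versus-PGD distinction in the statement above concrete, here is a minimal sketch for rank-r matrix completion (a simplification: the paper itself concerns TT-format tensors, and the step size, rank, and observation mask below are illustrative assumptions, not the authors' settings). PGD moves along the vanilla Euclidean gradient and projects back to rank r by truncated SVD; RGrad first projects the gradient onto the tangent space of the fixed-rank manifold and then retracts.

```python
# Hedged sketch: contrasting a PGD step with an RGrad step for rank-r matrix
# completion. The step size eta, rank r, and mask Omega are illustrative
# assumptions, not the paper's exact algorithm.
import numpy as np

def truncated_svd(A, r):
    """Rank-r truncated SVD, used both as the PGD projection and as a retraction."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def pgd_step(X, G, r, eta):
    """PGD: move along the vanilla (Euclidean) gradient, then project back to rank r."""
    return truncated_svd(X - eta * G, r)

def rgrad_step(X, G, r, eta):
    """RGrad: project the gradient onto the tangent space of the rank-r manifold
    at X, move along that Riemannian gradient, then retract by truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, V = U[:, :r], Vt[:r, :].T
    PG = U @ (U.T @ G) + (G @ V) @ V.T - U @ (U.T @ G @ V) @ V.T  # tangent projection
    return truncated_svd(X - eta * PG, r)

# Toy usage: one step on the observed-entry loss f(X) = 0.5 * ||P_Omega(X - M)||_F^2.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))  # rank-3 ground truth
Omega = rng.random(M.shape) < 0.5                                # observed entries
X = truncated_svd(Omega * M, 3)                                  # simple spectral-style start
G = Omega * (X - M)                                              # Euclidean gradient
X_pgd, X_rgrad = pgd_step(X, G, 3, 1.0), rgrad_step(X, G, 3, 1.0)
```

Because the Riemannian gradient already lies in the tangent space, the retraction perturbs the iterate less, which is one intuition behind the condition-number-free linear convergence mentioned in the quote.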
“…A recent paper [34] addresses Riemannian TT completion from a theoretical point of view. There, the authors use a fixed step size and apply an additional trimming procedure before TT-SVD: it ensures that all elements of the tensor before the retraction do not exceed a certain threshold and that the projected tensor is incoherent.…”
Section: Tensor Completion (mentioning)
confidence: 99%
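
The trim-then-retract idea described in this statement can be sketched as follows. This is a hedged illustration, not the exact procedure of [34] or of the paper: entries of an intermediate iterate are clipped to a threshold (an assumed incoherence-style bound) before a plain TT-SVD is applied as the retraction; the threshold rule and TT ranks below are made up for the example.

```python
# Hedged sketch: entrywise trimming before a TT-SVD retraction, so that the
# retracted iterate has no abnormally large entries. Threshold rule is assumed.
import numpy as np

def trim(T, zeta):
    """Clip every entry of T to the interval [-zeta, zeta]."""
    return np.clip(T, -zeta, zeta)

def tt_svd(T, ranks):
    """Plain TT-SVD: sweep over modes, taking rank-r_k truncated SVDs of unfoldings."""
    dims = T.shape
    cores, C, r_prev = [], T.copy(), 1
    for k, r in enumerate(ranks):                       # ranks has length ndim - 1
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(r, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = np.diag(s[:r]) @ Vt[:r, :]
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))        # last core absorbs the remainder
    return cores

# Usage: trim a (hypothetical) intermediate iterate, then retract it to TT format.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 9, 10))
zeta = 10 * np.abs(X).mean()                  # assumed incoherence-style threshold
cores = tt_svd(trim(X, zeta), (3, 3))
print([c.shape for c in cores])               # [(1, 8, 3), (3, 9, 3), (3, 10, 1)]
```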
“…We leave aside the question of generating an initial estimate that lies close enough to the true solution and focus instead on reducing the required number of samples. By contrast, in [34] the main concern is enlarging the basin of attraction and providing a constructive initialization procedure.…”
Section: Our Aim and Outline of the Paper (mentioning)
confidence: 99%
(2 additional Smart Citations not shown.)