2015
DOI: 10.1137/130942401

Variants of Alternating Least Squares Tensor Completion in the Tensor Train Format

Abstract: We consider the problem of fitting a low-rank tensor A ∈ ℝ^I, I = {1, …, n}^d, to a given set of data points {M_i ∈ ℝ | i ∈ P}, P ⊂ I. The low-rank format under consideration is the hierarchical or TT (tensor train) or MPS format. It is characterized by rank bounds r on certain matricizations of the tensor. The number of degrees of freedom is in O(r²dn). For a fixed rank and mode size n we observe that it is possible to reconstruct random (but rank-structured) tensors as well as certain discretized multivariate (bu…
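The setting in the abstract lends itself to a compact illustration. Below is a minimal NumPy sketch of ALS completion in the TT format: each core slice G_k[:, j, :] enters every observed entry linearly, so it can be refit by ordinary least squares while the other cores are held fixed. This is not the authors' implementation (which orthogonalizes the cores and organizes the linear algebra more carefully); the function names `tt_entry` and `als_sweep` and the random test setup are illustrative.

```python
import numpy as np

def tt_entry(cores, idx):
    """Evaluate one entry of a TT tensor:
    A(i_1, ..., i_d) = G_1[:, i_1, :] @ ... @ G_d[:, i_d, :]."""
    v = np.ones((1, 1))
    for G, i in zip(cores, idx):
        v = v @ G[:, i, :]
    return v.item()

def als_sweep(cores, P, vals):
    """One ALS sweep: refit each TT core by least squares on the
    observed entries, holding all other cores fixed.
    cores[k] has shape (r_{k-1}, n, r_k), with r_0 = r_d = 1;
    P is an (m, d) array of observed multi-indices, vals the values."""
    d = len(cores)
    for k in range(d):
        r0, n, r1 = cores[k].shape
        rows = [[] for _ in range(n)]   # design rows per mode index j
        rhs = [[] for _ in range(n)]
        for idx, y in zip(P, vals):
            # Left interface (1 x r0): product of cores 1..k-1 at idx.
            L = np.ones((1, 1))
            for kk in range(k):
                L = L @ cores[kk][:, idx[kk], :]
            # Right interface (r1 x 1): product of cores k+1..d at idx.
            R = np.ones((1, 1))
            for kk in range(d - 1, k, -1):
                R = cores[kk][:, idx[kk], :] @ R
            # A(idx) = L @ G_k[:, idx[k], :] @ R is linear in the slice;
            # the design row is outer(L, R) flattened row-major.
            rows[idx[k]].append(np.outer(L.ravel(), R.ravel()).ravel())
            rhs[idx[k]].append(y)
        for j in range(n):
            if rows[j]:
                sol, *_ = np.linalg.lstsq(
                    np.asarray(rows[j]), np.asarray(rhs[j]), rcond=None)
                cores[k][:, j, :] = sol.reshape(r0, r1)
    return cores

# Illustrative reconstruction test: sample entries of a random
# rank-structured tensor and fit a fresh TT guess by ALS sweeps.
rng = np.random.default_rng(0)
d, n, r = 4, 6, 2
shapes = [(1 if k == 0 else r, n, 1 if k == d - 1 else r) for k in range(d)]
truth = [rng.standard_normal(s) for s in shapes]

m = 400
P = rng.integers(0, n, size=(m, d))
vals = np.array([tt_entry(truth, idx) for idx in P])

guess = [rng.standard_normal(s) for s in shapes]
for _ in range(20):
    guess = als_sweep(guess, P, vals)

fit = np.array([tt_entry(guess, idx) for idx in P])
print("relative residual:", np.linalg.norm(fit - vals) / np.linalg.norm(vals))
```

Each slice subproblem may be underdetermined when few samples hit a given mode index, in which case `lstsq` returns the minimum-norm solution; the paper's observation that O(r²dn) degrees of freedom suffice for reconstruction is what makes sweeps like this plausible at all.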

Cited by 63 publications (65 citation statements) · References 23 publications
“…Schatten norm, or trace norm), which is defined as the sum of the singular values of a matrix and is the most popular convex surrogate for rank regularization. Based on different definitions of tensor rank, various nuclear norm regularized algorithms have been proposed (Liu et al. 2013; Imaizumi, Maehara, and Hayashi 2017; Liu et al. 2014; Liu et al. 2015). Rank minimization based methods do not need to specify the rank of the employed tensor decomposition beforehand; the rank of the recovered tensor is learned automatically from the limited observations.…”
Section: Introduction
confidence: 99%
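To make the nuclear norm discussion above concrete, here is a small generic sketch of the matrix case: the nuclear norm, its proximal operator (singular value thresholding), and an illustrative proximal-gradient completion loop. This is not the algorithm of any of the cited papers; `svt` and `complete` are hypothetical names chosen for the example.

```python
import numpy as np

def nuclear_norm(X):
    """Sum of singular values (trace / Schatten-1 norm)."""
    return np.linalg.svd(X, compute_uv=False).sum()

def svt(X, tau):
    """Singular value thresholding: the proximal operator of
    tau * nuclear norm soft-thresholds the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete(M, mask, tau=0.1, iters=500):
    """Illustrative proximal-gradient loop:
    minimize 0.5 * ||mask * (X - M)||_F^2 + tau * ||X||_*."""
    X = np.zeros_like(M)
    for _ in range(iters):
        X = svt(X - mask * (X - M), tau)  # gradient step, then prox
    return X
```

The rank of the iterate is determined by how many singular values survive the threshold, which is precisely the sense in which these methods learn the rank from the observations rather than fixing it in advance.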
“…The most popular tensor network is the Tensor Train (TT) representation, which for an order-d tensor with each dimension of size n requires O(dnr²) parameters, where r is the rank of each of the factors, and thus allows for efficient data representation [10]. Tensor completion based on tensor train decompositions has recently been considered in [11], [12]. The authors of [11] considered the completion of data based on the alternating least squares method. Although the TT format has been widely applied in numerical analysis, its applications to image classification and completion are rather limited [4], [11], [12].…”
confidence: 99%
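A quick back-of-the-envelope check makes the O(dnr²) count above tangible; the values of d, n, and r here are chosen arbitrarily for illustration.

```python
d, n, r = 10, 4, 3
full = n ** d                              # dense storage: n^d entries
tt = 2 * (n * r) + (d - 2) * (r * n * r)   # first/last cores + interior cores
print(full, tt)                            # 1048576 dense entries vs. 312 TT parameters
```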
“…The essential adaptation of the rank, however, including similar matrix approaches, is rarely considered, much less in numerical tests, and remains an open problem in this setting. A notable approach so far is the rank-increasing strategy [14, 44], and its regularization properties are a first starting point for this article.…”
Section: Relation To Other Matrix and Tensor Methods
confidence: 99%
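One common form of the rank-increasing strategy mentioned above can be sketched as follows: pad every TT core so the internal ranks grow, then continue ALS from the previous fit. This is an illustrative scheme assuming zero-or-noise padding, not necessarily the exact procedure of [14, 44].

```python
import numpy as np

def increase_tt_rank(cores, new_r, noise=1e-3, rng=None):
    """Grow every internal TT rank to new_r by padding each core.
    Zero padding preserves the represented tensor exactly (the new
    rank slots multiply to zero); a small noise level instead gives
    ALS a direction to expand into the new slots."""
    rng = rng or np.random.default_rng()
    d = len(cores)
    out = []
    for k, G in enumerate(cores):
        r0, n, r1 = G.shape
        p0 = r0 if k == 0 else new_r       # boundary ranks stay 1
        p1 = r1 if k == d - 1 else new_r
        H = noise * rng.standard_normal((p0, n, p1))
        H[:r0, :, :r1] = G                 # keep the current fit
        out.append(H)
    return out
```

Starting from rank 1 and calling such a routine whenever the residual stagnates yields a simple adaptive scheme; how to do this robustly is exactly the open problem the quoted passage points to.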