2013
DOI: 10.1016/j.laa.2011.12.002

Some convergence results on the Regularized Alternating Least-Squares method for tensor decomposition

Abstract: We study the convergence of the Regularized Alternating Least-Squares (RALS) algorithm for tensor decompositions. As a main result, we show that, given the existence of critical points of the Alternating Least-Squares method, the limit points of the converging subsequences of the RALS iterates are critical points of the least-squares cost functional. Some numerical examples indicate a faster convergence rate for RALS in comparison with the usual Alternating Least-Squares method.
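To make the method concrete, here is a minimal numerical sketch of an RALS sweep for a rank-R CP (trilinear) decomposition in Python/NumPy. This is an illustration, not the authors' reference implementation: the names `rals_cp` and `kr` are invented for this example, the proximal form of the regularizer (a penalty on the distance to the previous iterate) is assumed, and the weight `lam` is held fixed, whereas the paper works with a regularization sequence.

```python
import numpy as np

def kr(X, Y):
    """Khatri-Rao (column-wise Kronecker) product: (I x R), (J x R) -> (IJ x R)."""
    I, R = X.shape
    J, _ = Y.shape
    return (X[:, None, :] * Y[None, :, :]).reshape(I * J, R)

def rals_cp(T, R, lam=0.1, n_iter=200, seed=0):
    """Sketch of regularized ALS for a rank-R CP model T ~ [[A, B, C]].

    Each factor update solves a ridge-type least-squares problem with a
    proximal term lam * ||A - A_prev||_F^2, so the normal-equation matrix
    M^T M + lam*I stays invertible even when M is (nearly) rank deficient.
    A fixed lam is a simplification of the paper's setting.
    """
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    Id = np.eye(R)
    for _ in range(n_iter):
        # Mode-1 update: fit A against the unfolding T_(1) (I x JK), M = B kr C.
        M = kr(B, C)
        A = np.linalg.solve(M.T @ M + lam * Id,
                            (T.reshape(I, J * K) @ M + lam * A).T).T
        # Mode-2 update: T_(2) (J x IK), M = A kr C.
        M = kr(A, C)
        B = np.linalg.solve(M.T @ M + lam * Id,
                            (T.transpose(1, 0, 2).reshape(J, I * K) @ M + lam * B).T).T
        # Mode-3 update: T_(3) (K x IJ), M = A kr B.
        M = kr(A, B)
        C = np.linalg.solve(M.T @ M + lam * Id,
                            (T.transpose(2, 0, 1).reshape(K, I * J) @ M + lam * C).T).T
    return A, B, C
```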

Cited by 66 publications (64 citation statements) | References 34 publications
“…Even so, we do not know which columns are activated. So we can use a standard trilinear decomposition algorithm, such as alternating least squares (ALS) [22] or regularized alternating least squares (RALS) [23], [24], for which sufficient conditions for uniqueness of the decomposition up to permutation and scaling are provided. After the decomposition, we can obtain A, B, and S(f).…”
Section: An Algorithm Based on Trilinear Decomposition
confidence: 99%
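The workflow in the quoted passage (decompose, then read off the factors up to permutation and scaling) can be smoke-tested on synthetic data with the `rals_cp` sketch above; the sizes, rank, regularization weight, and iteration count below are arbitrary choices for illustration.

```python
import numpy as np

# Hypothetical smoke test: build an exact rank-3 trilinear tensor, then refit it.
rng = np.random.default_rng(1)
A0 = rng.standard_normal((20, 3))
B0 = rng.standard_normal((15, 3))
C0 = rng.standard_normal((10, 3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

A, B, C = rals_cp(T, R=3, lam=1e-2, n_iter=500)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)

# The recovered factors match A0, B0, C0 only up to column permutation and
# scaling, so we check the reconstruction error of the tensor instead.
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```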
“…Assumption A2 corresponds to persistency of excitation in the data and is a key ingredient in the convergence. When ALS is used for a system identification problem, assuming A2 avoids rank deficiencies in the matrix F, so the swamps observed for tensor decomposition in [35] do not occur.…”
Section: B. Convergence Proof for the Normalized ALS
confidence: 99%
“…[13], [17–26]. A regularized ALS scheme was considered in other works [27], [28]. Recovering a tensor using only a few of its entries is called tensor completion. Tensor completion has been widely studied in the literature.…”
Section: Introduction
confidence: 99%