2020
DOI: 10.1137/19m126476x

ISLET: Fast and Optimal Low-Rank Tensor Regression via Importance Sketching


Cited by 35 publications (32 citation statements). References: 78 publications.
Citation types: 2 supporting, 30 mentioning, 0 contrasting.
“…This result is consistent with our discussion in Subsection III-A. For the CP parameters penalized method (15) with g specified as the ridge penalty (16), only one case is identified as divergent, for small λ (λ = 0.001), which confirms that CP penalization can overcome the degeneracy problem discussed in Subsection III-C. For the noisy situations, i.e., Cases 2a and 2b, the CP parameters penalized method (15) has a few cases identified as divergent, but significantly fewer than the other two methods. Again, we note that the empirical rule for identifying divergence is not perfect, so the very small number of identified divergences does not necessarily contradict our theory in Subsection III-C. Figure 2 shows an example in which the least squares and coefficient penalized methods produce divergent iteration sequences, while the CP parameters penalized method is identified as convergent.…”
Section: Algorithm 1: Block Updating Algorithm (supporting)
confidence: 90%
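The quoted comparison turns on a block-updating scheme in which the CP factor matrices are updated alternately under a ridge penalty. As an illustration only, below is a minimal Python sketch of such a ridge-penalized alternating update for scalar-on-tensor CP regression; the setup, function names, and penalty placement are assumptions for exposition, not the cited paper's equations (9), (15), or (16).

```python
import numpy as np

def cp_ridge_regression(X, y, rank, lam=0.1, n_iter=50, seed=0):
    """Illustrative sketch: rank-R CP scalar-on-tensor regression with a
    ridge penalty on each CP factor matrix, fitted by block updates.
    X: (n, p1, p2, p3) covariate tensors; y: (n,) responses."""
    n, p1, p2, p3 = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((p1, rank))
    B = rng.standard_normal((p2, rank))
    C = rng.standard_normal((p3, rank))

    def ridge_solve(Z, y):
        # Closed-form ridge update: (Z'Z + lam*I)^{-1} Z'y
        return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)

    for _ in range(n_iter):
        # Update A with B, C fixed: the predictor is <A, M_i> with
        # M_i[j, r] = sum_{k,l} X_i[j,k,l] * B[k,r] * C[l,r].
        Z = np.einsum('njkl,kr,lr->njr', X, B, C).reshape(n, -1)
        A = ridge_solve(Z, y).reshape(p1, rank)
        Z = np.einsum('njkl,jr,lr->nkr', X, A, C).reshape(n, -1)
        B = ridge_solve(Z, y).reshape(p2, rank)
        Z = np.einsum('njkl,jr,kr->nlr', X, A, B).reshape(n, -1)
        C = ridge_solve(Z, y).reshape(p3, rank)

    # Assemble the coefficient tensor from the penalized CP factors.
    return np.einsum('jr,kr,lr->jkl', A, B, C)
```

Penalizing each factor matrix bounds the factor norms during the alternating updates, which is the mechanism the quote credits for avoiding degenerate (divergent) iterate sequences.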
“…For notational simplicity, when the tuning parameters λ and α are given, we rewrite the least squares method (9), the CP parameters penalized method (15) with g specified as the ridge penalty (16), and the coefficient tensor penalized method (18), respectively, as…”
Section: Numerical Experiments (mentioning)
confidence: 99%
“…Deterministic algorithms for decomposing large-scale data tensors into the Tucker format are prohibitive, requiring high memory and computational cost, or are applicable only to structured data tensors [15], [16]. It has been proved that randomized algorithms can overcome this difficulty by exploiting only part of the data tensor, with applications in tensor regression [17], tensor completion [18], [19], and deep learning [20]. Because of this property, they scale well with the tensor dimensions and orders.…”
Section: Introduction (mentioning)
confidence: 99%
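To make the randomized idea concrete, here is a generic Python sketch of one sketching-based Tucker scheme (a Gaussian-sketch, HOSVD-style variant); the function name and details are illustrative assumptions, not the specific algorithms of [15]-[20].

```python
import numpy as np

def randomized_tucker(T, ranks, oversample=5, seed=0):
    """Illustrative sketch of a sketching-based Tucker (HOSVD-style)
    decomposition: each mode unfolding is compressed with a Gaussian
    test matrix, so only small matrices are orthogonalized.
    T: data tensor; ranks: target multilinear rank, one entry per mode."""
    rng = np.random.default_rng(seed)
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-n unfolding of T (the chosen mode moved to the rows).
        Tn = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        # Randomized range finder: sketch the column space of Tn.
        Omega = rng.standard_normal((Tn.shape[1], r + oversample))
        Q, _ = np.linalg.qr(Tn @ Omega)
        factors.append(Q[:, :r])  # orthonormal factor matrix
    # Core tensor: contract T with the transposed factors, mode by mode.
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors
```

Because each QR factorization acts on a sketched, much smaller matrix rather than the full unfolding, the cost scales with the sketch size, which is the scalability property the quote refers to.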