2020
DOI: 10.1007/s10444-020-09816-9

Fast randomized matrix and tensor interpolative decomposition using CountSketch


Cited by 8 publications (6 citation statements)
References 48 publications
“…Remark: As mentioned in Section 1, TensorSketch can be regarded as a generalization of the well-known CountSketch [32,33] and has been widely used for devising algorithms for problems with Kronecker product or Khatri-Rao product structure [33,34], which usually appear in computations on tensor decompositions [26,34,35]. However, unlike the definition of TensorSketch in these works, where the big-endian convention is adopted, in Definition 9 we employ the little-endian convention to compute $H(i)=H\left(\overline{i_1 i_2 \cdots i_N}\right)$ and $\Xi(i)=\Xi\left(\overline{i_1 i_2 \cdots i_N}\right)$.…”
Section: Preliminaries (mentioning)
confidence: 99%
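For intuition about the convention discussed above: TensorSketch draws one CountSketch hash pair (h_n, ξ_n) per tensor mode and combines them, and the endianness only fixes how a linear index i is unfolded into the multi-index (i_1, …, i_N). Below is a minimal Python sketch of that combination under the little-endian unfolding (hypothetical helper names, 0-based indices; not the cited paper's code):

import numpy as np

rng = np.random.default_rng(0)

def make_countsketch_maps(dims, m):
    # One (h_n, xi_n) pair per mode: h_n sends an index in {0, ..., d_n - 1}
    # to a bucket in {0, ..., m - 1}, xi_n sends it to a random sign.
    hs = [rng.integers(0, m, size=d) for d in dims]
    xis = [rng.choice([-1, 1], size=d) for d in dims]
    return hs, xis

def tensorsketch_hash(i, dims, hs, xis, m):
    # Little-endian unfolding: i = i_1 + i_2*d_1 + i_3*d_1*d_2 + ...,
    # so the first mode varies fastest and is peeled off first.
    H, Xi = 0, 1
    for d, h, xi in zip(dims, hs, xis):
        i, i_n = divmod(i, d)
        H = (H + h[i_n]) % m      # bucket: sum of per-mode hashes mod m
        Xi *= xi[i_n]             # sign: product of per-mode signs
    return H, Xi

dims, m = (4, 5, 6), 13
hs, xis = make_countsketch_maps(dims, m)
print(tensorsketch_hash(17, dims, hs, xis, m))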
“…With the slices-Hadamard product, we have the following formula, which can avoid forming the TensorSketch and carrying out the matrix multiplication between large matrices when applying TensorSketch to $\boldsymbol{G}_{[2]}^{\ne n}$. Its proof is inspired by the works in References 32,34,35.…”
Section: Some New Findings (mentioning)
confidence: 99%
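The "avoid forming the TensorSketch" point above is the usual computational trick behind TensorSketch: CountSketch each factor separately and combine the small results with an FFT-domain Hadamard product, so the tall Khatri-Rao (or Kronecker) structured matrix is never materialized. A generic numpy illustration of that trick (not the citing paper's slices-Hadamard formula):

import numpy as np

rng = np.random.default_rng(1)

def countsketch(X, h, s, m):
    # Apply a CountSketch to the rows of X: row j lands in bucket h[j]
    # with sign s[j]; a single pass, no sketching matrix is formed.
    SX = np.zeros((m, X.shape[1]))
    np.add.at(SX, h, s[:, None] * X)
    return SX

# TensorSketch of a Khatri-Rao product of A and B (column-wise Kronecker)
# without materializing its dA*dB rows: sketch each factor, then take an
# FFT-domain Hadamard product (circular convolution of the buckets).
dA, dB, r, m = 30, 40, 5, 17
A, B = rng.standard_normal((dA, r)), rng.standard_normal((dB, r))
hA, sA = rng.integers(0, m, dA), rng.choice([-1.0, 1.0], dA)
hB, sB = rng.integers(0, m, dB), rng.choice([-1.0, 1.0], dB)

ts = np.fft.ifft(np.fft.fft(countsketch(A, hA, sA, m), axis=0)
                 * np.fft.fft(countsketch(B, hB, sB, m), axis=0), axis=0).real
print(ts.shape)   # (m, r): the sketched Khatri-Rao product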
“…The count-sketch technique is used in [109] for tensor interpolative decomposition [110] and also in [111] for the CP decomposition. The concept of Higher-order Count Sketch is developed in [39] for higher order tensors to fully exploit the multidimensional structure of the data tensor.…”
Section: Randomized Count-sketch Tucker Decomposition (mentioning)
confidence: 99%
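As background for the excerpt above, one way CountSketch enters an interpolative decomposition is as a row-compression step before column selection: pivoted QR is run on the small sketch instead of the full matrix. The following is a generic sketch-then-select recipe, assumed for illustration only (the algorithms in [109]-[111] differ in detail):

import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)

n, d, k, m = 2000, 50, 10, 100
X = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))   # rank-k data

# CountSketch the rows: one pass, m buckets, random signs.
h = rng.integers(0, m, n)
s = rng.choice([-1.0, 1.0], n)
SX = np.zeros((m, d))
np.add.at(SX, h, s[:, None] * X)

# Column selection on the small sketch, then an ID of the full matrix.
_, _, piv = qr(SX, pivoting=True)
cols = piv[:k]                                   # skeleton column indices
C = X[:, cols]
T = np.linalg.lstsq(C, X, rcond=None)[0]         # X is approximated by C @ T
print(np.linalg.norm(X - C @ T) / np.linalg.norm(X))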
“…Their work aimed to speed up the traditional ALS algorithm via randomized least-squares regressions. With respect to Tucker decomposition, Malik et al. proposed two randomized algorithms using TensorSketch for low-rank tensor decomposition [48]. Che et al. designed an effective randomized algorithm for computing the low-rank approximation of tensors under the sequentially truncated HOSVD (ST-HOSVD) model [49].…”
Section: Introduction (mentioning)
confidence: 99%
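The randomized least-squares regressions mentioned in this excerpt follow the sketch-and-solve pattern: each ALS subproblem min_x ||Ax - b|| is replaced by the much smaller min_x ||SAx - Sb|| for a sketching matrix S. A minimal CountSketch version, as a generic illustration rather than the cited authors' implementation:

import numpy as np

rng = np.random.default_rng(3)

# Sketch-and-solve least squares, the kernel inside randomized ALS:
# solve min ||S A x - S b|| with a CountSketch S instead of min ||A x - b||.
n, d, m = 10000, 20, 200
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

h = rng.integers(0, m, n)                 # bucket for each of the n rows
s = rng.choice([-1.0, 1.0], n)            # random sign for each row
SA = np.zeros((m, d))
Sb = np.zeros(m)
np.add.at(SA, h, s[:, None] * A)          # S @ A in one pass over the rows
np.add.at(Sb, h, s * b)                   # S @ b

x_sketch = np.linalg.lstsq(SA, Sb, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_sketch - x_exact))  # small when m is large enough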