2020
DOI: 10.1137/19m1261043
Randomized Algorithms for Low-Rank Tensor Decompositions in the Tucker Format

Abstract: Many applications in data science and scientific computing involve large-scale datasets that are expensive to store and compute with, but can be efficiently compressed and stored in an appropriate tensor format. In recent years, randomized matrix methods have been used to efficiently and accurately compute low-rank matrix decompositions. Motivated by this success, we focus on developing randomized algorithms for tensor decompositions in the Tucker representation. Specifically, we present randomized versions of…
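The full text behind the DOI above develops randomized Tucker algorithms built on the randomized SVD. As a rough illustration of that idea only, here is a minimal NumPy sketch in which each mode-n unfolding is compressed with a Gaussian-sketch range finder; the function names, the oversampling parameter, and the overall structure are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def randomized_range(A, rank, oversample=10, rng=None):
    """Randomized range finder: orthonormal Q with A ~ Q @ Q.T @ A.
    Illustrative sketch only, not the paper's implementation."""
    rng = np.random.default_rng() if rng is None else rng
    Y = A @ rng.standard_normal((A.shape[1], rank + oversample))  # Gaussian sketch
    Q, _ = np.linalg.qr(Y)
    return Q[:, :rank]

def randomized_hosvd(X, ranks, rng=None):
    """Tucker approximation: one randomized range finder per mode-n unfolding,
    then the core G = X x_1 U1^T x_2 U2^T ... x_N UN^T."""
    factors = []
    for n in range(X.ndim):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)   # mode-n unfolding
        factors.append(randomized_range(Xn, ranks[n], rng=rng))
    G = X
    for n, U in enumerate(factors):                         # contract each mode
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, factors

# Usage: X ~ G x_1 U1 x_2 U2 x_3 U3 with Tucker ranks (5, 5, 5).
X = np.random.default_rng(0).standard_normal((40, 50, 60))
G, Us = randomized_hosvd(X, ranks=(5, 5, 5))
```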

Cited by 68 publications (59 citation statements). References 46 publications.
“…This method compresses the target tensor after extracting each factor matrix. The resulting algorithm can be accelerated using randomized matrix approximations [27] but seems to require N passes over the tensor. Hence the method is difficult to implement when the data is too large to store locally.…”
Section: ST-HOSVD (mentioning)
confidence: 99%
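The ST-HOSVD this statement describes shrinks the working tensor after each factor is extracted, so every subsequent unfolding is cheaper; with a randomized range finder supplying each factor, as in [27], one pass over the current tensor is made per mode, which accounts for the N passes noted above. A hypothetical NumPy sketch, assuming the tensor fits in memory:

```python
import numpy as np

def st_hosvd_randomized(X, ranks, oversample=10, rng=None):
    """Sequentially truncated HOSVD with randomized factor extraction.
    Illustrative sketch with assumed names/parameters, not code from [27]."""
    rng = np.random.default_rng() if rng is None else rng
    G = X
    factors = []
    for n in range(X.ndim):                                  # one pass per mode
        Gn = np.moveaxis(G, n, 0).reshape(G.shape[n], -1)    # unfold current tensor
        Y = Gn @ rng.standard_normal((Gn.shape[1], ranks[n] + oversample))
        Q, _ = np.linalg.qr(Y)
        U = Q[:, :ranks[n]]
        factors.append(U)
        # Compress mode n right away: G <- G x_n U^T, so later modes see
        # a tensor whose n-th dimension is already reduced to ranks[n].
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, factors
```

Because each iteration reads the current compressed tensor, the data must be revisited N times, which is what makes a streaming or out-of-core setting awkward for this scheme, as the quoted statement points out.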
“…In the first step of the HOSVD, Algorithm 2.1, we approximately compute the top r_n eigenvectors of each matricization X^(n) using the randomized SVD [16]. Indeed, the same idea was proposed in concurrent work [27], which extends the idea to the ST-HOSVD and provides an error analysis. The error analyses of the two papers essentially coincide for Algorithm 4.2.…”
Section: 2 (mentioning)
confidence: 99%
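A minimal version of the randomized SVD step that this quote attributes to [16], approximating the top r_n left singular vectors of an unfolding (equivalently, the leading eigenvectors of X^(n) X^(n)^T), might read as follows; the parameter names and the optional power iteration are assumptions:

```python
import numpy as np

def randomized_top_left_singular_vectors(A, r, oversample=10, power_iters=0, rng=None):
    """Randomized SVD in the style of Halko-Martinsson-Tropp:
    returns r approximate leading left singular vectors of A.
    Hypothetical sketch; the signature and defaults are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    Y = A @ rng.standard_normal((A.shape[1], r + oversample))
    for _ in range(power_iters):        # optional subspace iteration for
        Y = A @ (A.T @ Y)               # faster spectral decay (no re-orth here)
    Q, _ = np.linalg.qr(Y)              # orthonormal basis for the sampled range
    Uh, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uh)[:, :r]              # lift back: U ~ Q @ Uh
```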
“…Step 1 of Algorithm 1 corresponds to the HOSVD of the dataset tensor and has the computational complexity given in [33], where, for simplicity of notation, one tensor dimension corresponds to the number of dataset instances M.…”
Section: Simulation Results (mentioning)
confidence: 99%
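For orientation, the generic operation count of a dense HOSVD of an I_1 x ... x I_N tensor, one SVD per mode-n unfolding, is commonly stated as the expression below; this is a standard textbook estimate and not necessarily the exact formula of [33].

```latex
% One SVD per mode-n unfolding X_{(n)} of size I_n x (prod_{k != n} I_k);
% assuming I_n <= prod_{k != n} I_k, each costs O(I_n^2 prod_{k != n} I_k)
% = O(I_n prod_k I_k), giving the total
\mathcal{O}\!\Bigl(\sum_{n=1}^{N} I_n \prod_{k=1}^{N} I_k\Bigr)
```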