2018 IEEE 34th International Conference on Data Engineering (ICDE)
DOI: 10.1109/icde.2018.00104
Scalable Tucker Factorization for Sparse Tensors - Algorithms and Discoveries

Abstract: Given sparse multi-dimensional data (e.g., (user, movie, time; rating) for movie recommendations), how can we discover latent concepts/relations and predict missing values? Tucker factorization has been widely used to solve such problems with multi-dimensional data, which are modeled as tensors. However, most Tucker factorization algorithms regard and estimate missing entries as zeros, which leads to a highly inaccurate decomposition. Moreover, the few methods that focus on accuracy exhibit limited scalability …
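As a rough illustration of the problem setting in the abstract, the sketch below evaluates a Tucker model only on the observed entries of a sparse tensor, rather than treating missing entries as zeros. All names, shapes, and values are illustrative assumptions, not code from the paper.

```python
# Minimal sketch (not the paper's implementation): score a Tucker model
# on observed entries only, instead of imputing missing entries with zeros.
import numpy as np

# Observed entries of a (user, movie, time) tensor: (index tuple, rating).
observations = [((0, 1, 2), 4.0), ((3, 0, 1), 2.5)]

rank = (2, 2, 2)          # Tucker ranks per mode (illustrative)
dims = (5, 4, 3)          # tensor dimensions (illustrative)
rng = np.random.default_rng(0)
core = rng.standard_normal(rank)
factors = [rng.standard_normal((dims[m], rank[m])) for m in range(3)]

def predict(idx):
    """Reconstruct one entry: x[i,j,k] ~ sum_{a,b,c} G[a,b,c] A[i,a] B[j,b] C[k,c]."""
    i, j, k = idx
    return np.einsum('abc,a,b,c->', core,
                     factors[0][i], factors[1][j], factors[2][k])

# Squared error over observed entries only; missing entries contribute nothing.
loss = sum((val - predict(idx)) ** 2 for idx, val in observations)
print(loss)
```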

Cited by 42 publications (41 citation statements)
References 40 publications (82 reference statements)
“…Recently, several scalable algorithms were proposed in [116]-[118] for computation of the HOSVD. These algorithms inherently do not have a randomized structure and basically exploit the ideas of on-the-fly computation and parallel row-wise update rules to avoid the intermediate data explosion problem.…”
Section: Discussion on Further Challenges (mentioning)
confidence: 99%
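As a rough illustration of the on-the-fly idea mentioned in the statement above, the sketch below accumulates each nonzero entry's contribution to a mode-1 update directly, without materializing the dense intermediate X_(1)(C ⊗ B). It is an assumed illustration with illustrative names, not code from any of the cited algorithms.

```python
# Minimal sketch of on-the-fly computation for a sparse Tucker update:
# iterate over stored nonzeros and accumulate their contributions, so the
# large dense intermediate matricized product is never formed.
import numpy as np

I, J, K = 1000, 800, 600          # tensor dimensions (illustrative)
R1, R2, R3 = 5, 5, 5              # Tucker ranks (illustrative)
rng = np.random.default_rng(0)
G = rng.standard_normal((R1, R2, R3))   # core tensor
B = rng.standard_normal((J, R2))        # mode-2 factor matrix
C = rng.standard_normal((K, R3))        # mode-3 factor matrix

# Sparse tensor in coordinate format: only nonzero entries are stored.
nnz = 5000
coords = np.column_stack([rng.integers(0, d, nnz) for d in (I, J, K)])
vals = rng.standard_normal(nnz)

# For each mode-1 row i, accumulate sum_x x_ijk * (G x_2 b_j x_3 c_k)
# one nonzero at a time, computed on the fly.
numer = np.zeros((I, R1))
for (i, j, k), x in zip(coords, vals):
    g_vec = np.einsum('abc,b,c->a', G, B[j], C[k])  # small R1-length vector
    numer[i] += x * g_vec
```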
“…For example, a high-order tensor usually has an enormous number of elements, which requires high computing capability to process. Efficient tensor decomposition algorithms have recently been proposed in [27][28][29], which can achieve up to 14 times faster tensor decomposition. How to further accelerate tensor decomposition and reduce tensor data size as far as possible is still an open problem, which deserves significant effort in future work.…”
Section: Discussion (mentioning)
confidence: 99%
“…• Gtensor uses a row-wise update rule [10,11]. Thanks to its careful design, Gtensor outperforms existing tensor analysis libraries in terms of running time and accuracy, as we show in Section 2.1.2.…”
Section: System Overview (mentioning)
confidence: 90%
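The row-wise update rule mentioned in the statement above can be sketched as follows. This is an assumed illustration (not Gtensor's API or the paper's code): each row of the mode-1 factor matrix is refit independently from its own observed entries via a small regularized least-squares solve, which is what makes the update easy to parallelize across rows.

```python
# Minimal sketch of a row-wise factor update restricted to observed entries.
import numpy as np

def update_mode1_row(i, entries_i, G, B, C, lam=1e-3):
    """Refit row a_i of the mode-1 factor.

    entries_i : list of ((i, j, k), value) whose first index equals i
    G         : core tensor of shape (R1, R2, R3)
    B, C      : mode-2 and mode-3 factor matrices
    lam       : ridge regularization strength (illustrative default)
    """
    R1 = G.shape[0]
    M = lam * np.eye(R1)      # regularized normal equations: M a_i = v
    v = np.zeros(R1)
    for (_, j, k), x in entries_i:
        d = np.einsum('abc,b,c->a', G, B[j], C[k])  # design row, built on the fly
        M += np.outer(d, d)
        v += x * d
    return np.linalg.solve(M, v)
```

Because each call touches only one row's observed entries, the rows can be updated in parallel without synchronization, which is the property the quoted statement highlights.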
“…• Tensor decomposition: PARAFAC, nonnegative PARAFAC, Tucker [5,10,11], nonnegative Tucker, and CMTF (Coupled Matrix-Tensor Factorization) [3,6]. • Tensor generation: generating 1) tensors filled with ones or random values, 2) tensors from given factor matrices and a core tensor, and 3) sparse R-MAT and Kronecker tensors.…”
Section: Concept Discovery, Trend Analysis (mentioning)
confidence: 99%