Proceedings of the 5th Workshop on Irregular Applications: Architectures and Algorithms 2015
DOI: 10.1145/2833179.2833183
Tensor-matrix products with a compressed sparse tensor

Abstract: The Canonical Polyadic Decomposition (CPD) of tensors is a powerful tool for analyzing multi-way data and is used extensively to analyze very large and extremely sparse datasets. The bottleneck of computing the CPD is multiplying a sparse tensor by several dense matrices. Algorithms for tensor-matrix products fall into two classes. The first class saves floating point operations by storing a compressed tensor for each dimension of the data. These methods are fast but suffer high memory costs. The second class u…

Cited by 111 publications (86 citation statements); references 14 publications.
“…Equation 6 reformulates Equation 4 for a row of Y. Smith et al. [22] further optimize the computation by factoring out C, as shown in Equation 8.…”
Section: Sparse MTTKRP
confidence: 99%
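The optimization the citation above describes, factoring C out of the sparse MTTKRP, can be sketched in plain NumPy. This is an illustrative sketch only, not the paper's SPLATT implementation: the COO tensor layout, the function names, and the `einsum` reference check are assumptions made for the demo.

```python
import numpy as np

def mttkrp_coo(inds, vals, B, C, num_rows):
    """Naive sparse MTTKRP: Y[i, :] += x_ijk * (B[j, :] * C[k, :])
    for every nonzero x_ijk, so C[k, :] is multiplied in once per nonzero."""
    Y = np.zeros((num_rows, B.shape[1]))
    for (i, j, k), x in zip(inds, vals):
        Y[i] += x * (B[j] * C[k])
    return Y

def mttkrp_factor_c(inds, vals, B, C, num_rows):
    """Factor out C: accumulate sum_j x_ijk * B[j, :] within each (i, k)
    fiber, then multiply by C[k, :] once per fiber instead of once per
    nonzero -- the FLOP saving the citation attributes to Smith et al."""
    F = B.shape[1]
    Y = np.zeros((num_rows, F))
    # Sort nonzeros so each (i, k) fiber is contiguous (primary key i, then k).
    order = np.lexsort((inds[:, 1], inds[:, 2], inds[:, 0]))
    acc, fiber = np.zeros(F), None
    for n in order:
        i, j, k = inds[n]
        if fiber is not None and fiber != (i, k):
            Y[fiber[0]] += acc * C[fiber[1]]  # flush the finished fiber
            acc = np.zeros(F)
        acc += vals[n] * B[j]
        fiber = (i, k)
    if fiber is not None:
        Y[fiber[0]] += acc * C[fiber[1]]
    return Y
```

Both variants agree with the dense computation `np.einsum('ijk,jf,kf->if', X, B, C)`; the factored form simply moves the multiplication by `C[k, :]` out of the innermost loop.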
“…Unless the factors are very sparse, this product is totally dense due to high fill-in and can easily require many times more memory than the original tensor. Due to the performance characteristics of the MTTKRP, researchers have focused on designing efficient implementations with respect to both execution time and memory [11], [2].…”
Section: Tensor Decomposition
confidence: 99%
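The fill-in this citation refers to can be seen directly: entry (j·K + k, f) of the Khatri-Rao product B ⊙ C is B[j, f]·C[k, f], so it is nonzero whenever both factor entries are, and the result has J·K rows regardless of how sparse the tensor was. A minimal sketch (the `khatri_rao` helper below is assumed for illustration, not taken from the paper):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: row (ia * IB + ib) of the result is
    A[ia, :] * B[ib, :], giving a dense (IA * IB) x F matrix."""
    IA, F = A.shape
    IB, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(IA * IB, F)

rng = np.random.default_rng(1)
# Factors that are ~80% dense: each product entry is nonzero with
# probability ~0.64, and the product has J*K = 60000 rows, far more
# storage than a sparse tensor of the same dimensions would need.
B = rng.random((200, 16)) * (rng.random((200, 16)) > 0.2)
C = rng.random((300, 16)) * (rng.random((300, 16)) > 0.2)
KR = khatri_rao(C, B)
print(KR.shape)  # (60000, 16)
print(np.count_nonzero(KR) / KR.size)  # roughly 0.8 * 0.8
```

Only when the factors themselves are very sparse does the product stay sparse, which matches the citation's caveat.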
“…SPLATT is an open source software toolbox for sparse tensor factorization and related kernels [11], [12], [13]. SPLATT includes routines for computing least-squares CP, as well as constrained CP and CP with missing values (i.e., tensor completion).…”
confidence: 99%
“…Choi and Vishwanathan addressed this problem by using a compressed representation of a tensor for each mode to reduce the amount of floating point operations, especially for sparse tensors. An efficient parallel sparse tensor-matrix multiplication that uses a different compressed representation was also proposed by Smith et al. Recently, Chen et al. proposed an improved version of Phan and Cichocki's parallel PARAFAC that better reflects the dynamics of large tensors, which can be implemented on a GPU cluster.…”
Section: Introduction
confidence: 99%