2015 IEEE International Parallel and Distributed Processing Symposium
DOI: 10.1109/ipdps.2015.27
SPLATT: Efficient and Parallel Sparse Tensor-Matrix Multiplication

Abstract: Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state of the art tools for sparse tensor facto…

Cited by 182 publications (174 citation statements) | References 14 publications
“…The common default format for sparse tensors is coordinate (COO) format. New formats have been developed including compressed sparse fiber (CSF) [57], balanced CSF (BCSF) [25], flagged COO (F-COO) [45], and hierarchical coordinate (HiCOO) [42] for general sparse tensors, and mode-generic and mode-specific formats for structured sparse tensors [5]. Our benchmark suite currently supports COO and HiCOO for general sparse tensors and their variants for semi-sparse tensors with dense dimension(s).…”
Section: Tensor Formats and Kernel Implementations
confidence: 99%
“…The number of floating-point operations (#Flops) is 2M. The memory access in Table 1 counts 4M bytes for v because of its irregular and unpredictable memory access introduced by index-k. Data reuse of v could happen if its access has or gains a good localized pattern naturally or from reordering techniques [44,57], similarly for the matrices in T and M…”
Section: Algorithm 1 COO-T -OMP Algorithm
confidence: 99%
“…Choi and Vishwanathan addressed this problem by using a compressed representation of a tensor for each mode to reduce the amount of floating-point operations, especially for sparse tensors. An efficient parallel sparse tensor-matrix multiplication that uses a different compressed representation was also proposed by Smith et al. Recently, Chen et al. proposed an improved version of Phan and Cichocki's parallel PARAFAC that better reflects the dynamics of large tensors, which can be implemented on a GPU cluster.…”
Section: Introduction
confidence: 99%
“…The communication overhead problem of sparse matrix multiplication was solved by Ballard et al. [8]. The parallelisation technique for sparse tensor-matrix multiplication was proposed by Smith et al. [9]. The above approaches [7][8][9] are not suitable for Big Data applications.…”
Section: Introduction
confidence: 99%
“…The parallelisation technique for sparse tensor-matrix multiplication was proposed by Smith et al. [9]. The above approaches [7][8][9] are not suitable for Big Data applications. Proper care should be taken by the programmer regarding the data distribution, replication, load balancing, communication overhead, etc.…”
Section: Introduction
confidence: 99%