2016 6th Workshop on Irregular Applications: Architecture and Algorithms (IA3) 2016
DOI: 10.1109/ia3.2016.014
Performance Evaluation of Parallel Sparse Tensor Decomposition Implementations

Cited by 5 publications (6 citation statements) · References 11 publications
“…This is attributed to its unusually short dimensions; the presented methods parallelize over the outer dimensions of the tensor and thus have idle threads when the outer dimension is small. This limitation has also been observed in other tensor kernels [18], and has been remedied via alternative parallel decompositions [2,25]. Exploring these alternative decompositions is left to future work.…”
Section: Results
confidence: 77%
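The limitation described in the statement above can be made concrete with a small sketch (not from the paper; mode length and thread count are illustrative): when work is parallelized over a tensor's outer mode, a mode shorter than the thread pool necessarily leaves threads idle.

```python
# Sketch (hypothetical numbers): distributing an outer mode's slices
# one-per-thread across a fixed pool, to show why a short outer
# dimension leaves threads idle.

def thread_utilization(outer_dim, num_threads):
    """Fraction of threads that receive at least one slice when the
    outer mode's slices are distributed across the pool."""
    busy = min(outer_dim, num_threads)
    return busy / num_threads

# An outer mode of length 2 keeps most of a 16-thread pool idle,
# matching the limitation the citing authors observe.
print(thread_utilization(2, 16))   # -> 0.125
print(thread_utilization(64, 16))  # -> 1.0
```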
“…Therefore, we can only directly compare the SPLATT results. SPLATT's average speed-up has increased by a factor of 1.14 over the previous results, largely because of the improvement on the VAST 3D data set (denoted as VAST 2015 MC 1 in Table III from [14]). The poor performance of SPLATT was due to the short third dimension of the tensor, being only two in size, which negatively affected CSF-based computations [12].…”
Section: Data Structures for Tensor Storage
confidence: 80%
“…We can compare the strong scaling results in Table 2 with those in Table III from our previous work [14] to observe the recent improvements made to SPLATT. It should be noted that the ENSIGN results presented in our previous evaluation were from an entirely different implementation of ENSIGN than the version that we evaluate in this paper.…”
Section: Data Structures for Tensor Storage
confidence: 93%
“…For example, tensors used in context-aware recommendation will have many users but only a few contexts (e.g., time or location of purchase). Indeed, the recent performance evaluation by Rolinger et al [18] showed that CSF-based computation is severely impacted by tensors with short modes. Second, tiling each tensor mode to avoid synchronization faces serious scalability issues when there are many threads or many tensor modes.…”
Section: Many-Core Sparse Tensor Factorization
confidence: 99%
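The scalability concern in the statement above (tiling every mode to avoid synchronization) can be illustrated with a hedged sketch: if each of an N-mode tensor's modes is split into p tiles, a grid-style tiling produces p^N tiles, which grows rapidly with either thread count or mode count. The numbers below are illustrative, not from the cited work.

```python
# Sketch (hypothetical numbers): grid tiling every mode of an N-mode
# tensor into p parts yields p**N tiles, showing why per-mode tiling
# scales poorly with many threads or many modes.

def num_tiles(num_modes, tiles_per_mode):
    """Total tiles when each of num_modes modes is cut into
    tiles_per_mode parts (grid tiling)."""
    return tiles_per_mode ** num_modes

print(num_tiles(3, 8))  # -> 512   (3-mode tensor, 8 tiles per mode)
print(num_tiles(5, 8))  # -> 32768 (5-mode tensor, same tiling)
```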