2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS)
DOI: 10.1109/ipdps.2017.86

On Optimizing Distributed Tucker Decomposition for Dense Tensors

Abstract: The Tucker decomposition expresses a given tensor as the product of a small core tensor and a set of factor matrices. Apart from providing data compression, the construction is useful in performing analyses such as principal component analysis (PCA), and it finds applications in diverse domains such as signal processing, computer vision, and text analytics. Our objective is to develop an efficient distributed implementation for the case of dense tensors. The implementation is based on the HOOI (Higher Order Orthogonal Iteration) procedure. …
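
For readers who want the procedure in concrete terms, here is a minimal single-machine sketch of HOOI in numpy: per mode, the tensor is multiplied (TTM) by the transposes of all other factors, and the factor is refreshed with the leading left singular vectors of the result's unfolding. The helper names, the HOSVD initialization, and the fixed iteration count are illustrative assumptions; nothing here reflects the paper's distributed scheme.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n matricization: arrange mode `mode` along the rows."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def ttm(X, M, mode):
    """Tensor-times-matrix: Y = X x_mode M, contracting dimension `mode`
    of X with the columns of M."""
    Y = np.tensordot(X, M, axes=(mode, 1))  # contracted axis lands last
    return np.moveaxis(Y, -1, mode)

def hooi(X, ranks, iters=10):
    """Plain single-node HOOI; returns (core, factors) with
    X ~= core x_1 U[0] x_2 U[1] ... Illustrative sketch only."""
    N = X.ndim
    # HOSVD initialization: leading left singular vectors of each unfolding
    U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :ranks[n]]
         for n in range(N)]
    for _ in range(iters):
        for n in range(N):
            Y = X
            for m in range(N):
                if m != n:
                    Y = ttm(Y, U[m].T, m)  # TTM with the other factors' transposes
            U[n] = np.linalg.svd(unfold(Y, n),
                                 full_matrices=False)[0][:, :ranks[n]]
    core = X
    for n in range(N):
        core = ttm(core, U[n].T, n)
    return core, U

# Example: compress a random 30 x 40 x 50 tensor to a 5 x 5 x 5 core
X = np.random.rand(30, 40, 50)
core, factors = hooi(X, ranks=(5, 5, 5))
```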

Cited by 29 publications (25 citation statements) · References 14 publications

“…MET (Memory Efficient Tucker) [14] tackles this challenge by adaptively ordering computations and performing them in a piecemeal manner. [19] discusses a shared- and distributed-memory parallelization of an ALS-based TF for sparse tensors. [41] proposes optimizations of HOOI for dense tensors on distributed systems. The above methods depend on SVD for updating factor matrices, while P-TUCKER utilizes a row-wise update rule.…”
Section: Related Work (mentioning, confidence: 99%)
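
The contrast with SVD-based updates can be made concrete. Below is a hedged numpy sketch of a row-wise factor update for a sparse 3-way tensor: each row of the mode-0 factor is obtained by solving a small regularized least-squares problem over that row's nonzeros. It illustrates the general idea of a row-wise update rule; the function name, the regularizer, and the shapes are assumptions, not P-TUCKER's actual implementation.

```python
import numpy as np

def rowwise_update_mode0(coords, vals, G, B, C, lam=1e-3):
    """Row-wise ALS update of the mode-0 factor A for a sparse 3-way
    tensor in COO form (coords: (nnz, 3) ints, vals: (nnz,) floats).
    G is the core (R0, R1, R2); B, C are the mode-1/mode-2 factors.
    Hypothetical sketch of a row-wise rule, not P-TUCKER's code."""
    I, R0 = coords[:, 0].max() + 1, G.shape[0]
    A = np.zeros((I, R0))
    for i in range(I):
        mask = coords[:, 0] == i
        if not mask.any():
            continue
        js, ks = coords[mask, 1], coords[mask, 2]
        # design row per nonzero: m_(j,k) = G x_2 b_j x_3 c_k  (length R0)
        M = np.einsum('pqr,nq,nr->np', G, B[js], C[ks])
        H = M.T @ M + lam * np.eye(R0)  # regularized normal equations
        A[i] = np.linalg.solve(H, M.T @ vals[mask])
    return A
```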
“…The TTM component of the framework is a special case of the MET approach, wherein no intermediate tensors are computed. For the Tucker decomposition of dense tensors, MATLAB [2], single-machine [30] and distributed [1,6] implementations have been proposed. Prior work has also studied the Tucker decomposition on the MapReduce platform [11].…”
Section: Procedure (mentioning, confidence: 99%)
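
Avoiding intermediate tensors, as the quoted passage describes, can be illustrated with a fused contraction: rather than materializing the result of each TTM in a chain, all modes are contracted in one expression. A minimal numpy sketch, not drawn from any of the cited implementations:

```python
import numpy as np

I, J, K, R = 40, 50, 60, 5
X = np.random.rand(I, J, K)
U0, U1, U2 = (np.random.rand(d, R) for d in (I, J, K))

# Chain of TTMs fused into a single contraction, so no intermediate
# tensor is materialized (einsum's default, non-optimized path):
#   core = X x_1 U0^T x_2 U1^T x_3 U2^T
core = np.einsum('ijk,ia,jb,kc->abc', X, U0, U1, U2)
```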
“…For an N-dimensional tensor, the traditional methods necessitate the multiplication of the input tensor with O(N^2) matrices in each iteration of these algorithms, which gets very expensive as the dimensionality of the tensor increases. Efficiently carrying them out for higher-dimensional tensors has been the focus of recent work [8,15,18]. Specifically, [14] and [18] investigate the use of a data structure called a dimension tree in order to reduce the number of such multiplications to N log N within an iteration of the HOOI and CP-ALS algorithms, respectively, for sparse tensors.…”
Section: Introduction (mentioning, confidence: 99%)
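
The N log N figure comes from sharing partial products: the modes are split recursively, and the TTMs for one half are computed once and reused by every mode in the other half. A rough recursive sketch, reusing the `ttm` helper from the HOOI sketch above; the halving strategy and function name are illustrative assumptions, not the cited papers' exact construction:

```python
def dtree_ttms(X, U, modes):
    """For each mode n in `modes`, return X multiplied (via TTM) by the
    transposes of all factors U[m] with m in `modes`, m != n. Partial
    products are shared across the recursion, so roughly N log N TTMs
    are performed instead of the naive N*(N-1)."""
    if len(modes) == 1:
        return {modes[0]: X}
    left, right = modes[:len(modes) // 2], modes[len(modes) // 2:]
    X_left, X_right = X, X
    for m in right:   # computed once, shared by every mode in `left`
        X_left = ttm(X_left, U[m].T, m)
    for m in left:    # computed once, shared by every mode in `right`
        X_right = ttm(X_right, U[m].T, m)
    return {**dtree_ttms(X_left, U, left), **dtree_ttms(X_right, U, right)}
```

For N = 4 with modes [0, 1, 2, 3], the partial product X x_3 U2^T x_4 U3^T is computed once and serves the updates of both modes 0 and 1.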
“…Specifically, [14] and [18] investigate the use of a data structure called a dimension tree in order to reduce the number of such multiplications to N log N within an iteration of the HOOI and CP-ALS algorithms, respectively, for sparse tensors. For dense tensors, the sheer number of multiplications is not the most precise cost metric, and for this reason Choi et al. [8] investigate the use of an optimal dimension tree structure, computed in O(4^N) time using O(3^N) space, that potentially performs more TTMs in total, yet yields the lowest actual operation count possible.…”
Section: Introduction (mentioning, confidence: 99%)
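
That raw TTM counts mislead for dense tensors can be checked with a back-of-the-envelope FLOP count: a dense TTM along mode m costs about 2 * R_m * (product of the current dimensions) operations and shrinks that mode to R_m, so the contraction order changes the total cost even when the number of TTMs is the same. A small illustrative calculation, with made-up dimensions and ranks:

```python
from math import prod

def chain_flops(dims, ranks, order):
    """Approximate FLOPs for a chain of dense TTMs applied in `order`:
    each TTM along mode m costs 2 * ranks[m] * prod(current dims)
    and replaces dims[m] with ranks[m]."""
    dims, total = list(dims), 0
    for m in order:
        total += 2 * ranks[m] * prod(dims)
        dims[m] = ranks[m]
    return total

dims, ranks = (1000, 100, 100), (10, 10, 10)
print(chain_flops(dims, ranks, order=(0, 1, 2)))  # large mode first: cheaper overall
print(chain_flops(dims, ranks, order=(2, 1, 0)))  # large mode last: more FLOPs
```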