2009
DOI: 10.1007/978-3-642-01973-9_45

A Parallel Nonnegative Tensor Factorization Algorithm for Mining Global Climate Data

Abstract: Increasingly large datasets acquired by NASA for global climate studies demand more memory and higher CPU speed to mine out useful and revealing information. As boosting CPU frequency becomes harder, clustering multiple lower-performance computers has become increasingly popular, prompting mathematicians and computer scientists to parallelize existing algorithms and methods. In this paper, we take on the task of parallelizing the Nonnegative Tensor Factorization…
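The method being parallelized is nonnegative CP (PARAFAC) tensor factorization. Below is a minimal serial NumPy sketch of that factorization using multiplicative updates, included only to illustrate the computation the paper distributes; the function names, rank, and iteration count are illustrative assumptions, not the authors' parallel implementation.

    import numpy as np

    def unfold(X, mode):
        # Mode-n matricization: the chosen mode becomes the rows.
        return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

    def khatri_rao(A, B):
        # Column-wise Kronecker product: (I*J) x R from I x R and J x R.
        R = A.shape[1]
        return (A[:, None, :] * B[None, :, :]).reshape(-1, R)

    def ntf_cp(X, rank, n_iter=200, eps=1e-12):
        # Nonnegative CP via multiplicative updates; updates never
        # introduce negative entries, so nonnegativity is preserved.
        rng = np.random.default_rng(0)
        factors = [rng.random((dim, rank)) for dim in X.shape]
        for _ in range(n_iter):
            for mode in range(3):
                others = [factors[m] for m in range(3) if m != mode]
                kr = khatri_rao(others[0], others[1])   # order matches the unfolding
                numer = unfold(X, mode) @ kr
                denom = factors[mode] @ (kr.T @ kr) + eps
                factors[mode] *= numer / denom
        return factors

    # Toy usage on a small nonnegative 3-way array (a stand-in for a climate data cube).
    T, U, V = ntf_cp(np.random.default_rng(1).random((20, 15, 10)), rank=4)

The parallel algorithm described in the paper distributes this kind of computation across a cluster of machines rather than running it serially as above.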

Cited by 24 publications (16 citation statements)
References 13 publications (15 reference statements)
“…NTF factorizes a tensor X into factor matrices T, U, V. There have been a lot of NTF studies concerning sparse constraints [6], [17] and acceleration techniques [7], [18]. As explained, data sparsity becomes a serious problem for large or high-order tensors.…”
Section: Related Work
confidence: 99%
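For reference, the model this statement describes, a rank-R nonnegative CP factorization with factor matrices T, U, V, can be written in standard notation (the rank symbol R is my notation, not taken from the cited text):

\[
\mathcal{X} \;\approx\; \sum_{r=1}^{R} t_r \circ u_r \circ v_r ,
\qquad T \ge 0,\; U \ge 0,\; V \ge 0,
\]

where \(t_r\), \(u_r\), \(v_r\) are the r-th columns of \(T\), \(U\), \(V\) and \(\circ\) denotes the outer product.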
“…The non-negativity constraints yield sparse and reasonably interpretable factorization results [3]. NTF has been applied and performs well in various fields [4]-[7]. An advantage of tensor data over conventionally studied matrix data is its ability to represent observations with various attributes.…”
Section: Introduction
confidence: 99%
“…Then the factors for the complete tensor are estimated from these sub-factors, in parallel, using special update rules. [27] introduces various parallelization strategies for speeding up the factor matrix update step in alternating least squares (ALS) based decomposition. The authors propose techniques for distributing a large tensor onto the servers in a cluster, minimizing data exchange, and limiting the memory needed for storing matrices or tensors.…”
Section: Related Work
confidence: 99%
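A hedged NumPy sketch of why this factor-matrix update step distributes naturally (my illustration of the general idea, not the scheme of [27] or of the paper under review): the mode-1 product X_(1)(C ⊙ B) splits into independent row blocks, so each server only needs a horizontal slab of the tensor plus the small factor matrices.

    import numpy as np

    I, J, K, R = 12, 8, 6, 3
    rng = np.random.default_rng(0)
    X = rng.random((I, J, K))
    B, C = rng.random((J, R)), rng.random((K, R))

    kr = (B[:, None, :] * C[None, :, :]).reshape(-1, R)   # Khatri-Rao product, (J*K) x R
    full = X.reshape(I, -1) @ kr                          # serial mode-1 product

    # "Distributed" version: each worker holds only rows lo:hi of the tensor.
    blocks = [X[lo:hi].reshape(hi - lo, -1) @ kr for lo, hi in [(0, 4), (4, 8), (8, 12)]]
    assert np.allclose(np.vstack(blocks), full)           # identical result, no slab overlap

In such schemes, typically only the small factor matrices (and their R x R Gram matrices) need to be communicated between servers, which is consistent with the data-exchange minimization the statement mentions.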
“…In [17] and [23] we find two interesting approaches, where a tensor is viewed as a stream and the challenge is to track the decomposition. In terms of parallel algorithms, [26] introduces a parallel Non-negative Tensor Factorization. Finally, [24, 22] propose randomized, sampling-based Tucker3 decompositions.…”
Section: Related Work
confidence: 99%
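For context on the last statement, the Tucker3 model referenced by [24, 22] has the standard form (notation mine, not taken from the cited text):

\[
\mathcal{X} \;\approx\; \mathcal{G} \times_1 A \times_2 B \times_3 C,
\qquad
x_{ijk} \approx \sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{r=1}^{R} g_{pqr}\, a_{ip}\, b_{jq}\, c_{kr},
\]

with a small core tensor \(\mathcal{G}\) and factor matrices \(A\), \(B\), \(C\); the randomized, sampling-based variants mentioned above approximate these factors from a subset of the tensor rather than from all of \(\mathcal{X}\).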