2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07)
DOI: 10.1109/icassp.2007.367106
Non-Negative Tensor Factorization using Alpha and Beta Divergences

Cited by 72 publications (68 citation statements)
References 7 publications
“…Now the components a^(n)_{j_n} are updated using their current values and the contracted product of the error tensor E. The error tensor E can be updated using an efficient update rule which is derived from (28) and (29) as…”
Section: Learning Rule for Factors A^(n)
confidence: 99%
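The excerpt refers to update equations (28) and (29), which are not reproduced on this page, so the exact rule cannot be shown. As a minimal sketch of the kind of multiplicative factor update being described, the following NumPy code implements the standard multiplicative rule for a 3-way nonnegative CP (PARAFAC) model under the squared Euclidean cost; the function names, the `khatri_rao` helper, and the fixed iteration count are illustrative assumptions, not the cited paper's algorithm.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Khatri-Rao (column-matched Kronecker) product."""
    return np.einsum('ir,jr->ijr', B, C).reshape(-1, B.shape[1])

def ntf_multiplicative(Y, rank, n_iter=200, eps=1e-9, seed=0):
    """Nonnegative CP (PARAFAC) factorization of a 3-way tensor Y
    via multiplicative updates for the squared Euclidean cost:
    Y[i,j,k] ~= sum_r A[i,r] * B[j,r] * C[k,r], all factors >= 0."""
    rng = np.random.default_rng(seed)
    I, J, K = Y.shape
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))
    Y1 = Y.reshape(I, -1)                     # mode-1 unfolding
    Y2 = np.moveaxis(Y, 1, 0).reshape(J, -1)  # mode-2 unfolding
    Y3 = np.moveaxis(Y, 2, 0).reshape(K, -1)  # mode-3 unfolding
    for _ in range(n_iter):
        # Each factor is rescaled by the ratio of the data term to the
        # model term, which keeps it nonnegative by construction.
        KR = khatri_rao(B, C)
        A *= (Y1 @ KR) / (A @ (KR.T @ KR) + eps)
        KR = khatri_rao(A, C)
        B *= (Y2 @ KR) / (B @ (KR.T @ KR) + eps)
        KR = khatri_rao(A, B)
        C *= (Y3 @ KR) / (C @ (KR.T @ KR) + eps)
    return A, B, C
```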
“…For Nonnegative Tucker Decomposition (NTD), the multiplicative algorithms [1,16,25,26] are natural extensions of multiplicative Nonnegative Matrix Factorization (NMF) algorithms based on minimization of the squared Euclidean distance (Frobenius norm) and the Kullback-Leibler divergence. These cost functions have recently been generalized and extended using the Bregman, Csiszár, and Alpha- and Beta-divergences [1,25,27,28]. The multiplicative algorithms have relatively low complexity, but they converge rather slowly and sometimes to spurious local minima.…”
Section: Introduction
confidence: 99%
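To make the generalization mentioned above concrete: the squared Euclidean and Kullback-Leibler multiplicative updates are both special cases of the well-known Beta-divergence update family (beta = 2 gives Euclidean, beta = 1 Kullback-Leibler, beta = 0 Itakura-Saito). The sketch below shows this family for plain NMF in NumPy; the function name and iteration scheme are assumptions for illustration, not code from the cited works.

```python
import numpy as np

def nmf_beta(V, rank, beta=1.0, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative NMF updates minimizing the Beta-divergence
    between V and W @ H. beta=2: squared Euclidean distance,
    beta=1: Kullback-Leibler divergence, beta=0: Itakura-Saito."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H
```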
“…Welling and Weber [26] proposed a non-negative tensor factorization algorithm based on the multiplicative update rule [14], a natural extension of matrix factorization; Kim et al. [11] proposed a non-negative tensor factorization algorithm based on alternating large-scale non-negativity-constrained least squares; a non-negative sparse factorization algorithm using the Kullback-Leibler divergence [13] was introduced by FitzGerald et al. [20]; Mørup et al. [16] proposed an SNTF algorithm based on the PARAFAC model, which employs the L1-norm penalty as the sparseness constraint like ours and applies the multiplicative update rule like most other algorithms [6,20]; they further employ an over-relaxed bound optimization strategy to accelerate convergence. Recently, Cichocki et al. [5] presented an algorithm using alpha and beta divergences.…”
Section: Related Work
confidence: 99%
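For the L1-penalized multiplicative rule mentioned in connection with the SNTF algorithm, the penalty typically enters the update only through the denominator, which shrinks the factor toward zero and promotes sparsity. A minimal single-factor sketch, assuming a squared Euclidean cost with penalty lam * sum(H) (names are illustrative, not Mørup et al.'s exact algorithm):

```python
import numpy as np

def sparse_nmf_step(V, W, H, lam=0.1, eps=1e-9):
    """One multiplicative update of H for the L1-penalized cost
    ||V - W @ H||_F^2 + lam * sum(H). The penalty adds lam to the
    denominator of the plain Lee-Seung update, shrinking H toward 0."""
    return H * (W.T @ V) / (W.T @ (W @ H) + lam + eps)
```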
“…Tensors (also known as n-way arrays) are used in a variety of applications ranging from neuroscience and psychometrics to chemometrics [1][2][3]. From the viewpoint of data analysis, tensor decomposition is very attractive because it takes into account spatial and temporal correlations between variables more accurately than 2D matrix factorizations, and it usually provides sparse common factors or hidden components with a physiological meaning and interpretation.…”
Section: Introduction
confidence: 99%