2015
DOI: 10.1109/tsp.2015.2454476

Parallel Algorithms for Constrained Tensor Factorization via Alternating Direction Method of Multipliers

Abstract: Tensor factorization has proven useful in a wide range of applications, from sensor array processing to communications, speech and audio signal processing, and machine learning. With few recent exceptions, all tensor factorization algorithms were originally developed for centralized, in-memory computation on a single machine; and the few that break away from this mold do not easily incorporate practically important constraints, such as nonnegativity. A new constrained tensor factorization framework is proposed…
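
The workhorse inside ADMM-based constrained factorization frameworks of this kind is an ADMM solver for a constrained least-squares subproblem. The following is a minimal sketch only, assuming a nonnegativity constraint; the function name admm_nnls and the parameters rho and n_iter are illustrative choices here, not the paper's API:

```python
import numpy as np

def admm_nnls(A, b, rho=1.0, n_iter=100):
    """ADMM sketch for: minimize 0.5*||A x - b||^2 subject to x >= 0.

    Splitting: the x-update is an unconstrained ridge-type solve, the
    z-update projects onto the nonnegative orthant, and u is the scaled
    dual variable enforcing the consensus constraint x = z.
    """
    n = A.shape[1]
    # The system matrix A^T A + rho*I is fixed, so factor it once.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # reuse the Cholesky factor
        z = np.maximum(0.0, x + u)   # proximal step: projection onto x >= 0
        u += x - z                   # dual ascent on x = z
    return z
```

In an alternating scheme for constrained factorization, each factor-matrix update decomposes into many such subproblems that share a single Gram matrix, which keeps the per-iteration cost low and makes the updates natural candidates for parallelization.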

Cited by 103 publications (63 citation statements: 0 supporting, 63 mentioning, 0 contrasting)
References 31 publications

“…There, stochastic gradient descent (SGD) could be a promising candidate for such a family of SDF solvers due to its simplicity and aptitude for large data sets. In addition, substantial progress has recently been made in the computation of tensor decompositions on Hadoop with algorithms based on alternating least squares (ALS) [69]-[71]. Third, let vec(·) denote the column-wise vectorization of its argument and let N be the number of elements in a tensor T.…”
Section: Structured Data Fusion (mentioning)
confidence: 99%
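
For context on the ALS-based algorithms mentioned above, here is a minimal single-machine sketch of CP-ALS for a dense 3-way array; the names cp_als and khatri_rao are illustrative, and distributed variants such as the Hadoop algorithms of [69]-[71] parallelize exactly these unfolded least-squares updates:

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker (Khatri-Rao) product of U (I x R) and V (J x R)."""
    I, R = U.shape
    J, _ = V.shape
    return (U[:, None, :] * V[None, :, :]).reshape(I * J, R)

def cp_als(T, R, n_iter=50):
    """Minimal CP-ALS for a dense I x J x K numpy array T.

    Each sweep solves three linear least-squares problems against the
    mode-n unfoldings of T, one per factor matrix.
    """
    I, J, K = T.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    # Row-major unfoldings: T1[i, j*K + k] = T2[j, i*K + k] = T3[k, i*J + j] = T[i, j, k].
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = T2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = T3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```
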
“…Nevertheless, it has been observed by many researchers that the ADMM works extremely well for various applications involving nonconvex objectives, such as nonnegative matrix factorization [37,38], phase retrieval [39], distributed matrix factorization [40], distributed clustering [41], sparse zero-variance discriminant analysis [42], polynomial optimization [43], tensor decomposition [44], matrix separation [45], matrix completion [46], asset allocation [47], sparse feedback control [48], and so on. However, to the best of our knowledge, existing convergence analysis of ADMM for nonconvex problems is very limited: all known global convergence analyses need to impose uncheckable conditions on the sequence generated by the algorithm.…”
(mentioning)
confidence: 99%
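
To make the nonconvexity concrete, here is a minimal sketch of one item on that list, ADMM for nonnegative matrix factorization; the bilinear term W H makes the objective nonconvex even though each subproblem is convex. This splitting, with auxiliary nonnegative copies linked by consensus constraints, is a common textbook variant and not the specific algorithm of [37], [38]:

```python
import numpy as np

def admm_nmf(X, R, rho=1.0, n_iter=200):
    """ADMM sketch for: minimize 0.5*||X - W H||_F^2 s.t. W >= 0, H >= 0.

    Nonnegativity is enforced on auxiliary copies Wp, Hp through the
    consensus constraints W = Wp and H = Hp with scaled duals Uw, Uh.
    """
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = np.abs(rng.standard_normal((m, R)))
    H = np.abs(rng.standard_normal((R, n)))
    Wp, Hp = W.copy(), H.copy()
    Uw = np.zeros_like(W)
    Uh = np.zeros_like(H)
    I_R = np.eye(R)
    for _ in range(n_iter):
        # Ridge-regularized least-squares updates for the unconstrained copies.
        W = (X @ H.T + rho * (Wp - Uw)) @ np.linalg.inv(H @ H.T + rho * I_R)
        H = np.linalg.inv(W.T @ W + rho * I_R) @ (W.T @ X + rho * (Hp - Uh))
        # Proximal steps: projection onto the nonnegative orthant.
        Wp = np.maximum(0.0, W + Uw)
        Hp = np.maximum(0.0, H + Uh)
        # Dual updates on the consensus constraints.
        Uw += W - Wp
        Uh += H - Hp
    return Wp, Hp
```
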
“…Therefore, rough estimates of its computational complexity can be easily derived [24]. The estimate for the update of B according to Eq.…”
Section: Time Complexity (mentioning)
confidence: 99%
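
Since the referenced equation is truncated above, the following back-of-the-envelope flop count for a dense CP-ALS factor-matrix update is only a generic illustration of how such estimates are derived; the breakdown and the function name are assumptions here, not the estimate from [24]:

```python
def cp_als_update_flops(I, J, K, R):
    """Rough leading-order flop count for one update of the J x R factor B
    in CP-ALS on a dense I x J x K tensor (illustrative only)."""
    mttkrp = 2 * I * J * K * R             # matricized tensor times Khatri-Rao product
    gram = 2 * (I + K) * R * R             # Gram matrices A^T A and C^T C
    solve = 2 * R**3 // 3 + 2 * J * R * R  # Cholesky of the R x R system plus solves
    return mttkrp + gram + solve
```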