Abstract: Non-negative matrix factorization (NMF) provides a lower-rank approximation of a matrix. Due to the nonnegativity imposed on the factors, it yields a latent structure that is often more physically meaningful than other lower-rank approximations such as the singular value decomposition (SVD). Most of the algorithms proposed in the literature for NMF have been based on minimizing the Frobenius norm. This is partly due to the fact that the minimization problem based on the Frobenius norm provides much more flexibility in alge…
“…Depending on the probabilistic model of the underlying data, NMF can be formulated with various divergences. Formulations and algorithms based on Kullback-Leibler divergence [67,79], Bregman divergence [24,68], Itakura-Saito divergence [29], and Alpha and Beta divergences [21,22] have been developed. For discussion on nonnegative rank as well as the geometric interpretation of NMF, see Lin and Chu [72], Gillis [34], and Donoho and Stodden [27].…”
Section: Conclusion and Discussion
confidence: 99%
“…Matrices W and H are found by solving an optimization problem defined with Frobenius norm, Kullback-Leibler divergence [67,68], or other divergences [24,68]. In this paper, we focus on the NMF based on Frobenius norm, which is the most commonly used formulation:…”
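The Frobenius-norm formulation referenced in the quote above is the standard one: given a nonnegative data matrix A and a target rank k, find nonnegative factors W and H minimizing the squared approximation error,

```latex
\min_{W \ge 0,\; H \ge 0} \; f(W, H) = \lVert A - WH \rVert_F^2 ,
\qquad A \in \mathbb{R}_{+}^{m \times n},\;
W \in \mathbb{R}_{+}^{m \times k},\;
H \in \mathbb{R}_{+}^{k \times n}.
```

The nonnegativity constraints on W and H are what distinguish NMF from an unconstrained low-rank approximation such as the truncated SVD.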
We review algorithms developed for nonnegative matrix factorization (NMF) and nonnegative tensor factorization (NTF) from a unified view based on the block coordinate descent (BCD) framework. NMF and NTF are low-rank approximation methods for matrices and tensors in which the low-rank factors are constrained to have only nonnegative elements. The nonnegativity constraints have been shown to enable natural interpretations and allow better solutions in numerous applications, including text analysis, computer vision, and bioinformatics. However, the computation of NMF and NTF remains challenging and expensive due to the constraints. Numerous algorithmic approaches have been proposed to compute NMF and NTF efficiently. The BCD framework in constrained nonlinear optimization readily explains the theoretical convergence properties of several efficient NMF and NTF algorithms, which are consistent with experimental observations reported in the literature. In addition, we discuss algorithms that do not fit in the BCD framework, contrasting them with those based on the BCD framework. With insights acquired from the unified perspective, we also propose efficient algorithms for updating NMF when there is a small change in the reduced dimension or in the data. The effectiveness of the proposed updating algorithms is validated experimentally with synthetic and real-world data sets.
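To make the BCD view concrete, the following is a minimal sketch of one well-known BCD instance for Frobenius-norm NMF: hierarchical alternating least squares (HALS), which cycles over the columns of W and rows of H as the coordinate blocks, solving each single-block subproblem in closed form with a nonnegativity clamp. The function name and parameters here are illustrative, not taken from any specific paper's code.

```python
import numpy as np

def nmf_hals(A, r, n_iter=200, eps=1e-10, seed=0):
    """Rank-r NMF of a nonnegative matrix A ~= W @ H via HALS,
    a BCD scheme whose blocks are single columns of W / rows of H."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Update each column of W with H fixed; the exact minimizer of the
        # one-block subproblem is a projected closed-form step.
        AHt, HHt = A @ H.T, H @ H.T
        for k in range(r):
            W[:, k] = np.maximum(
                eps, W[:, k] + (AHt[:, k] - W @ HHt[:, k]) / HHt[k, k])
        # Update each row of H with W fixed, symmetrically.
        WtA, WtW = W.T @ A, W.T @ W
        for k in range(r):
            H[k, :] = np.maximum(
                eps, H[k, :] + (WtA[k, :] - WtW[k, :] @ H) / WtW[k, k])
    return W, H
```

The small clamp `eps` keeps every block strictly positive, which avoids division by zero in the denominators and is one common way to satisfy the conditions under which BCD convergence results apply.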
“…In a generalized way, the Bregman divergence D_φ is used as the objective function to be minimized [21,22]. Considering only separable Bregman divergences, (5) where φ(·)…”
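For reference, a separable Bregman divergence generated by a strictly convex, differentiable function φ has the element-wise form

```latex
D_{\varphi}(A \,\Vert\, B)
= \sum_{i,j} \Bigl[ \varphi(a_{ij}) - \varphi(b_{ij})
  - \varphi'(b_{ij})\,(a_{ij} - b_{ij}) \Bigr].
```

Choosing φ(x) = x²/2 recovers (half) the squared Frobenius norm, while φ(x) = x log x recovers the generalized Kullback-Leibler divergence, so both objectives discussed in this document are special cases.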
“…To enable such a column-wise update even in ONMF, we derive a set of column-wise orthogonal constraints, taking into consideration both nonnegativity and orthogonality at the same time. Furthermore, we show that the column-wise orthogonal constraint can also be applied to the column-wise update algorithm called scalar Block Coordinate Descent for solving Bregman-divergence NMF (sBCD-NMF) (Li et al. 2012), where the Frobenius norm in (1) is replaced with a more general Bregman divergence. This sBCD-ONMF algorithm is the first algorithm to solve ONMF with Bregman divergence.…”
Section: Introduction
confidence: 99%
“…In Sect. 4, we incorporate the column-wise orthogonal constraint into sBCD-NMF, proposed by Li et al. (2012), to derive the sBCD-ONMF algorithm. In Sect.…”
Recently, orthogonal nonnegative matrix factorization (ONMF), which imposes an orthogonality constraint on NMF, has been attracting a great deal of attention. ONMF is more appropriate than standard NMF for clustering tasks because the constrained matrix can be regarded as a cluster-indicator matrix. Several iterative ONMF algorithms have been proposed, but they suffer from slow convergence because of their matrix-wise updating. In this paper, therefore, a column-wise update algorithm is proposed to speed up ONMF. To make this possible, we transform the matrix-based orthogonality constraint into a set of column-wise orthogonal constraints. The algorithm is stated first with the Frobenius norm and then with Bregman divergence, both used for measuring the degree of approximation. Experiments on one artificial and six real-life datasets showed that the proposed algorithms converge faster than conventional ONMF algorithms, by more than a factor of four in the best cases, owing to their smaller numbers of iterations.
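For concreteness, a commonly studied form of the ONMF problem (assuming, as is typical, that the orthogonality constraint is placed on the factor H) is

```latex
\min_{W \ge 0,\; H \ge 0} \; \lVert A - WH \rVert_F^2
\quad \text{subject to} \quad H H^{\top} = I .
```

Because H is nonnegative with orthonormal rows, any two rows must have disjoint supports, so each column of H has at most one nonzero entry; this is exactly why the constrained factor can be read as a cluster-indicator matrix, as the abstract above notes.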