2017
DOI: 10.1016/j.neucom.2017.04.068
Upper bound of Bayesian generalization error in non-negative matrix factorization

Abstract: Non-negative matrix factorization (NMF) is a new knowledge discovery method that is used for text mining, signal processing, bioinformatics, and consumer analysis. However, its basic properties as a learning machine have not yet been clarified, because it is not a regular statistical model; as a result, a theoretical optimization method for NMF has not yet been established. In this paper, we study the real log canonical threshold of NMF and give an upper bound of the generalization error in Bayesian learning. The results show t…
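For context, the bound in the abstract lives in Watanabe's singular learning theory, where the expected Bayesian generalization error is governed by the real log canonical threshold (RLCT) λ. The display below states that general relation as background inferred from the surrounding text; it is not a formula quoted from the paper:

```latex
% Expected Bayesian generalization error at sample size n in
% Watanabe's singular learning theory, with RLCT \lambda:
\mathbb{E}[G_n] = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right)
% Consequently, any upper bound \bar{\lambda} \ge \lambda derived for
% NMF yields an upper bound on the generalization error:
\mathbb{E}[G_n] \le \frac{\bar{\lambda}}{n} + o\!\left(\frac{1}{n}\right)
```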

Cited by 12 publications (23 citation statements). References 18 publications (36 reference statements).
“…Therefore, if the prior distribution is a gamma distribution, the Main Theorem is valid not only in the case where the probability model and the true distribution are Poisson distributions, but also in the cases where they are normal, exponential, or Bernoulli distributions. Indeed, if the hyperparameters are φ_U = φ_V = 1, then the prior is strictly positive and bounded, and the upper bound equals the result of [9]. Thus this study extends the main theorem of the previous work [9] to the case where the prior distribution is a gamma distribution.…”
Section: Robustness on Probability Distributions
Mentioning confidence: 73%
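To unpack the quoted remark about φ_U = φ_V = 1: a gamma prior's shape hyperparameter φ controls its behavior near the origin. The density below uses a rate parameter θ as illustrative notation (the quote fixes only the shape parameters):

```latex
% Gamma prior with shape \phi and rate \theta (\theta is illustrative
% notation, not taken from the quoted paper):
\varphi(u) = \frac{\theta^{\phi}}{\Gamma(\phi)}\, u^{\phi-1} e^{-\theta u},
\qquad u > 0
% With \phi = 1 this reduces to \varphi(u) = \theta e^{-\theta u},
% which is strictly positive and bounded above by \theta, i.e. the
% positive-and-bounded prior setting of [9].
```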
“…In the same way as [9], K(U, V) ∼ Φ(U, V) follows. Thus we consider the zero points of Φ(U, V) and ϕ(U, V).…”
Section: Appendix A: Proof Sketch of Lemmas
Mentioning confidence: 84%
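The relation K(U, V) ∼ Φ(U, V) in the quoted sketch is, on the usual reading in this line of work, a two-sided bound up to positive constants; the interpretation below is inferred from context rather than taken from the paper itself:

```latex
% K ~ \Phi is read as: there exist constants c_1, c_2 > 0 such that
c_1\, \Phi(U, V) \le K(U, V) \le c_2\, \Phi(U, V)
% Functions equivalent in this sense share the same zero set and the
% same RLCT, so it suffices to analyze the zeros of \Phi(U, V) and of
% the prior factor \varphi(U, V) in place of K itself.
```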
“…Several NMF methods are discussed here, including: semi-supervised constrained NMF [19], semi-supervised graph-based discriminative NMF [20], a Bayesian learning approach that gives an upper bound on the generalization error of NMF [21], update rules [22], sparse NMF, which provides a better characterization of the features [23], sparse unmixing NMF [24], locally weighted sparse graph-regularized NMF [25], graph-regularized NMF [26], graph dual regularization [27], multiple graph-regularized NMF [28], graph-regularized multilayer NMF [29], adaptive graph-regularized NMF [30], hyper-graph regularized NMF [31], graph regularization with sparse NMF [32], multi-view NMF [33], extended incremental NMF [34], incremental orthogonal projective NMF [35], correntropy-induced metric NMF [36], multi-view NMF [37], patch-based NMF [38], MMNMF [39], regularized NMF [40], and FR conjugate gradient NMF [41]. However, these methods fail to address the problems associated with non-orthogonality arising from the non-negative elements in NMF.…”
Section: Related Work
Mentioning confidence: 99%
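Many of the variants listed above build on the classical multiplicative update rules of Lee and Seung. The sketch below shows that baseline for the Euclidean loss; it is a minimal illustration, and the function name, iteration count, and synthetic data are assumptions of this sketch, not drawn from any of the cited papers.

```python
# Minimal NMF sketch using Lee-Seung multiplicative updates (Euclidean
# loss). Illustrative only; names and parameters are assumptions.
import numpy as np

def nmf_multiplicative(X, rank, n_iter=200, eps=1e-10, seed=0):
    """Factor a non-negative matrix X (m x n) as W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity by construction;
        # eps guards against division by zero.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.random((20, 30))  # synthetic non-negative data
    W, H = nmf_multiplicative(X, rank=5)
    print("reconstruction error:", np.linalg.norm(X - W @ H))
```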