2017
DOI: 10.1109/tpami.2016.2554555

A Deep Matrix Factorization Method for Learning Attribute Representations

Abstract: Semi-Non-negative Matrix Factorization is a technique that learns a low-dimensional representation of a dataset that lends itself to a clustering interpretation. It is possible that the mapping between this new representation and our original data matrix contains rather complex hierarchical information with implicit lower-level hidden attributes that classical one-level clustering methodologies cannot interpret. In this work we propose a novel model, Deep Semi-NMF, that is able to learn such hidden …

Cited by 325 publications (274 citation statements)
References 46 publications

“…Another important issue that can affect the performance of pre-classification is the structure of Deep Semi-NMF. As empirically verified in [36], the two-layer model is generally sufficient to achieve good performance, and a higher number of layers does not seem to lead to significant improvements. Therefore, we implement Deep Semi-NMF with a first hidden representation H₁ with 2h₂/3 features, and a second representation H₂ with h₂/2 features.…”
Section: Strategy for Pre-classification
Confidence: 72%
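As a concrete illustration of this two-layer setup, here is a minimal NumPy sketch of greedy layer-wise Semi-NMF pretraining. The multiplicative updates follow Ding et al.'s Semi-NMF; the value h2 = 60, the random data, and the function names are placeholder assumptions, not the citing paper's implementation:

```python
import numpy as np

def semi_nmf(X, k, n_iter=200, eps=1e-9, seed=0):
    """Semi-NMF: X ≈ Z @ H with H >= 0 and Z unconstrained
    (multiplicative updates of Ding et al., 2010)."""
    rng = np.random.default_rng(seed)
    H = rng.random((k, X.shape[1])) + eps      # nonnegative init
    for _ in range(n_iter):
        Z = X @ np.linalg.pinv(H)              # least-squares update for Z
        A, B = Z.T @ X, Z.T @ Z
        Ap, An = (np.abs(A) + A) / 2, (np.abs(A) - A) / 2
        Bp, Bn = (np.abs(B) + B) / 2, (np.abs(B) - B) / 2
        H *= np.sqrt((Ap + Bn @ H) / (An + Bp @ H + eps))  # keeps H >= 0
    return Z, H

def deep_semi_nmf_pretrain(X, layer_sizes):
    """Greedy pretraining: X ≈ Z1 H1, then H1 ≈ Z2 H2, and so on."""
    Zs, H = [], X
    for k in layer_sizes:
        Z, H = semi_nmf(H, k)
        Zs.append(Z)
    return Zs, H

# Two-layer structure from the excerpt: H1 with 2*h2/3 features,
# H2 with h2/2 features (h2 = 60 is a placeholder assumption).
h2 = 60
X = np.random.default_rng(1).random((100, 500))
Zs, H2 = deep_semi_nmf_pretrain(X, [2 * h2 // 3, h2 // 2])
```

Note that [36] additionally fine-tunes all factors jointly after this layer-wise initialization, a stage the sketch omits.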
“…By further factorizing the mapping W, we can automatically learn the hidden representations and obtain better higher-level feature representations of the original data matrix V. Inspired by this, Trigeorgis et al. [36] proposed Deep Semi-NMF, which applies Semi-NMF to a multi-layer structure in order to learn hidden representations of the original data matrix. Specifically, the input data matrix is factorized into m + 1 factors as follows:…”
Section: Deep Semi-NMF
Confidence: 99%
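The excerpt is cut off before the factorization itself. For reference, the Deep Semi-NMF model of Trigeorgis et al. [36] factorizes the data matrix into m + 1 factors, in the excerpt's notation:

$$ V \approx W_1 W_2 \cdots W_m H_m, \qquad H_m \ge 0, $$

with implicit intermediate representations $H_i \approx W_{i+1} \cdots W_m H_m$ that are likewise constrained to be nonnegative.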
“…Deep learning has been applied to both types of approaches. For the latent factor approaches, deep learning is applied to improve the performance of several algorithms such as factorization machines, matrix factorization, probabilistic matrix factorization, and the k-nearest-neighbours algorithm [25]-[28].…”
Section: Deep Collaborative Filtering Recommendation
Confidence: 99%
“…Hence, a single optimal manifold cannot adequately describe the intrinsic structure of the original space. Although some studies exploited the intrinsic manifold structure by constructing the neighbour graph with different weighting schemes (i.e., 0-1, heat kernel and dot product) [23,24], the resulting representation is relatively weak because the properties of local neighbours are not considered. Consequently, the discriminative ability of the learned features is weak, which further influences the performance of target recognition.…”
Section: Introduction
Confidence: 99%
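For concreteness, a minimal sketch of the three neighbour-graph weighting schemes the excerpt names (0-1, heat kernel, dot product), assuming a plain k-nearest-neighbour graph; the function name knn_graph and the parameters k and sigma are illustrative, not from the cited works:

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_graph(X, k=5, scheme="heat", sigma=1.0):
    """Weighted k-NN neighbour graph over the rows of X."""
    D = cdist(X, X)                        # pairwise Euclidean distances
    np.fill_diagonal(D, np.inf)            # exclude self-loops
    idx = np.argsort(D, axis=1)[:, :k]     # k nearest neighbours per point
    W = np.zeros((len(X), len(X)))
    for i, nbrs in enumerate(idx):
        for j in nbrs:
            if scheme == "binary":         # 0-1 weighting
                W[i, j] = 1.0
            elif scheme == "heat":         # heat-kernel weighting
                W[i, j] = np.exp(-D[i, j] ** 2 / (2 * sigma ** 2))
            elif scheme == "dot":          # dot-product weighting
                W[i, j] = X[i] @ X[j]
    return np.maximum(W, W.T)              # symmetrize
```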