2017
DOI: 10.1109/tcyb.2016.2585355
Graph Regularized Non-Negative Low-Rank Matrix Factorization for Image Clustering

Abstract: Non-negative matrix factorization (NMF) has been one of the most popular methods for feature learning in machine learning and computer vision. Most existing works apply NMF directly to high-dimensional image datasets to compute an effective representation of the raw images. In fact, however, the common essential information of a given class of images is hidden in their low-rank parts. To obtain an effective low-rank data representation, in this paper we propose a non-negative low-rank matr…
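To make the factorization the abstract refers to concrete, the following is a minimal sketch of plain NMF with Lee–Seung multiplicative updates for the Frobenius objective. This is generic background NMF, not the paper's proposed low-rank method; all names and parameter values here are illustrative.

```python
import numpy as np

def nmf(X, k, n_iter=500, eps=1e-9):
    """Factor a non-negative matrix X (m x n) into W (m x k) @ H (k x n)
    using Lee-Seung multiplicative updates for ||X - WH||_F^2."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep both factors non-negative and
        # do not increase the Frobenius reconstruction error.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Each column of `H` then serves as the learned low-dimensional representation of the corresponding image, which is what downstream clustering operates on.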


Cited by 168 publications (59 citation statements)
References 55 publications
“…To tackle the limitations of the pairwise relation based methods, the hyper-graph relation [63], [64], [65] is proposed and the related literature follows two different directions. Some transform the hyper-correlation into a simpler pairwise graph [21], [66], followed by a standard graph clustering method, e.g., normalized cut [11], to calculate the assignments.…”
Section: Hyper-graph Clustering
confidence: 99%
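The transformation this citing work describes — reducing a hyper-correlation to a simpler pairwise graph before running a standard graph clustering method — can be sketched via clique expansion. This is a generic illustration of that direction, not code from any of the cited papers; the function name and data layout are assumptions.

```python
import numpy as np

def clique_expansion(hyperedges, n):
    """Clique expansion: each hyperedge contributes a clique among its
    vertices, producing a pairwise adjacency matrix that a standard
    graph clustering method (e.g., normalized cut) can then consume."""
    A = np.zeros((n, n))
    for edge in hyperedges:
        for i in edge:
            for j in edge:
                if i != j:
                    A[i, j] += 1.0  # co-membership weight accumulates
    return A
```

A spectral method such as normalized cut would then be applied to `A` (or its normalized Laplacian) to calculate the cluster assignments.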
“…In the Euclidean space, the standard nonnegative matrix factorization in Eq. (2) fails to discover the intrinsic geometrical and discriminating structure of the data space [65,66].…”
Section: The Model of GNMFLMI
confidence: 99%
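The remedy this citing work builds on — adding a graph regularizer so the factorization respects the intrinsic geometry of the data — can be sketched with GNMF-style multiplicative updates (in the spirit of Cai et al.'s graph regularized NMF). This is a sketch under stated assumptions, not the exact algorithm of the cited paper: `S` is assumed to be a symmetric non-negative affinity matrix over the samples, and `lam` is the regularization weight.

```python
import numpy as np

def gnmf(X, S, k, lam=0.1, n_iter=500, eps=1e-9):
    """Graph-regularized NMF: minimize ||X - WH||_F^2 + lam * Tr(H L H^T),
    where L = D - S is the graph Laplacian of the sample affinity graph.
    Columns of X are samples; S is their (n x n) affinity matrix."""
    m, n = X.shape
    D = np.diag(S.sum(axis=1))  # degree matrix of the sample graph
    rng = np.random.default_rng(0)
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # The lam*H@S / lam*H@D terms pull representations of
        # graph-connected samples toward each other.
        H *= (W.T @ X + lam * H @ S) / (W.T @ W @ H + lam * H @ D + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

With `lam = 0` this reduces to plain NMF; increasing `lam` trades reconstruction accuracy for smoothness of the representation along the affinity graph.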
“…{0.0001, 0.001, 0.01} and {0.001, 0.01, 0.1}. Based on the studies of Cai et al. [46] and Li et al. [65], we set = 5. Finally, the parameter values are = 5, k = 80, = 0.01, = 0.1.…”
Section: Experimental Settings
confidence: 99%
“…Therefore, its resulting sparse codes are more suitable for the clustering purpose. In addition, our post-processing technique can contribute to non-negative SSC methods such as [18, 14, 15], improving their latent representations.…”
Section: Related Work
confidence: 99%