2016 23rd International Conference on Pattern Recognition (ICPR) 2016
DOI: 10.1109/icpr.2016.7899884
Context aware nonnegative matrix factorization clustering

Abstract: In this article we propose a method to refine the clustering results obtained with the nonnegative matrix factorization (NMF) technique, imposing consistency constraints on the final labeling of the data. The research community has focused its effort on the initialization and optimization of this method, paying little attention to the final cluster assignments. We propose a game-theoretic framework in which each object to be clustered is represented as a player, which has to choose its cluster members…
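The baseline that the abstract's refinement targets is the standard NMF hard-assignment step: factorize the data and label each point by its largest coefficient. The sketch below illustrates only that baseline (with a plain multiplicative-update NMF on invented toy data), not the paper's game-theoretic refinement.

```python
import numpy as np

def nmf(X, k, n_iter=200, seed=0):
    """Basic multiplicative-update NMF (Lee & Seung): X ≈ W @ H."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.1, 1.0, (X.shape[0], k))
    H = rng.uniform(0.1, 1.0, (k, X.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

rng = np.random.default_rng(1)
# Toy nonnegative data: two well-separated groups of 3 points each
X = np.vstack([rng.uniform(0, 1, (3, 4)) + [5, 5, 0, 0],
               rng.uniform(0, 1, (3, 4)) + [0, 0, 5, 5]])

W, _ = nmf(X, k=2)
labels = W.argmax(axis=1)  # naive hard assignment: largest coefficient wins
print(labels)
```

It is exactly this final argmax step, taken in isolation per point, that the proposed framework revisits by imposing consistency constraints across neighboring points.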

Cited by 22 publications (18 citation statements). References 19 publications.
“…This approach is synergistic with iNMF, which reconstructs each dataset separately, accurately preserving the structure of individual datasets. A recent paper used a related idea to refine cluster assignments from NMF by taking into account the neighborhood of each data point (Tripodi, 2016). Additionally, our graph construction greatly reduces the chances of spurious matches across datasets, because even if a cell type spuriously loads on the same factor as a different cell type in another dataset, they are unlikely to have the same factor neighborhoods.…”
Section: Shared Factor Neighborhood Clustering and Factor Normalization
confidence: 99%
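The quote above argues that spurious cross-dataset matches are unlikely because two cells must share not just a dominant factor but a whole factor neighborhood. A loose illustration of that idea, with hypothetical names and toy loadings (the comparison below is not the citing paper's actual procedure):

```python
import numpy as np

def factor_neighbourhood(loadings, i, k=2):
    """Indices of the k nearest cells to cell i in loading space (excluding i)."""
    d = np.linalg.norm(loadings - loadings[i], axis=1)
    return set(np.argsort(d)[1:k + 1])

def top_factors(loadings, idx):
    """Set of dominant factors among a neighbourhood of cells."""
    return {int(loadings[j].argmax()) for j in idx}

rng = np.random.default_rng(0)
# Toy loadings on 3 factors for 6 cells: cells 0-2 dominated by factor 0,
# cells 3-5 by factor 2
L = rng.uniform(0, 0.2, (6, 3))
L[:3, 0] += 1.0
L[3:, 2] += 1.0

nb0 = factor_neighbourhood(L, 0)
nb5 = factor_neighbourhood(L, 5)
print(top_factors(L, nb0), top_factors(L, nb5))
```

Cells 0 and 5 would only be candidate matches if their neighbourhoods loaded on overlapping factors, which here they do not.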
“…where f_i, f_j are the features of observations i and j respectively, and d(f_i, f_j) is the cosine distance between features f_i and f_j. Here, motivated by [24] and [5], we set the scaling parameter σ_i automatically, considering the local statistics of the neighborhood of each point. According to [24], the value of σ_i is set to the distance of the 7th nearest neighbour of observation i. d) Affinity sparsification: The sparsification of the graph plays an important role in the performance of the algorithm.…”
Section: Algorithm 1 GTDA Algorithm
confidence: 99%
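The locally scaled affinity described in the quote can be sketched as follows. The Gaussian form a_ij = exp(-d(f_i, f_j)² / (σ_i σ_j)) is assumed here (it is the standard local-scaling construction of [24], Zelnik-Manor and Perona); the quote itself only fixes the cosine distance and the choice σ_i = distance to the 7th nearest neighbour.

```python
import numpy as np

def cosine_distance_matrix(F):
    """Pairwise cosine distances d(f_i, f_j) = 1 - cosine similarity."""
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
    return 1.0 - Fn @ Fn.T

def local_scaling_affinity(F, k=7):
    """Affinity with per-point scale sigma_i = distance to the k-th neighbour."""
    D = cosine_distance_matrix(F)
    # Sorted row includes the zero self-distance at index 0, so index k
    # is the k-th nearest neighbour proper
    sigma = np.sort(D, axis=1)[:, min(k, D.shape[0] - 1)]
    A = np.exp(-(D ** 2) / (np.outer(sigma, sigma) + 1e-12))
    np.fill_diagonal(A, 0.0)  # no self-affinity
    return A

rng = np.random.default_rng(0)
F = rng.normal(size=(10, 5))  # 10 observations, 5-dimensional features
A = local_scaling_affinity(F)
print(A.shape)
```

The resulting matrix is symmetric with entries in [0, 1]; the sparsification step the quote goes on to mention would then zero out the weakest entries of A.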
“…However, the topic assignment part of the algorithm has received less attention from the research community [3]. This is because the assessment of how well the topic assignment is done is subjective.…”
Section: Introduction
confidence: 99%
“…This is usually the final step of topic modeling and is the step that we focus on in this paper. Although NMF and LDA themselves are carefully studied and there are multiple algorithms to solve them, this final step of assigning topics receives less attention from the research community [3]. This step is usually assessed by considering the top words (words with high weight) of each topic and judging by eye whether they naturally make sense.…”
Section: Topic Assignment
confidence: 99%
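The "top words" inspection described in the quote can be made concrete with a small sketch: run NMF on a term-document matrix, assign each document to its dominant topic, and summarize each topic by its highest-weight words. The vocabulary and counts below are invented for illustration, and the multiplicative-update NMF is a generic choice, not the citing paper's solver.

```python
import numpy as np

def nmf(X, k, n_iter=300, seed=0):
    """Basic multiplicative-update NMF: X ≈ W @ H."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.1, 1.0, (X.shape[0], k))
    H = rng.uniform(0.1, 1.0, (k, X.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

vocab = ["goal", "match", "team", "stock", "market", "price"]
# Term-document counts: docs 0-1 about sport, docs 2-3 about finance
X = np.array([[4, 3, 5, 0, 0, 1],
              [5, 4, 3, 1, 0, 0],
              [0, 1, 0, 4, 5, 3],
              [1, 0, 0, 3, 4, 5]], dtype=float)

W, H = nmf(X, k=2)
doc_topic = W.argmax(axis=1)  # dominant topic per document
top_words = [[vocab[j] for j in np.argsort(H[t])[::-1][:3]] for t in range(2)]
print(doc_topic, top_words)
```

The eyeballing step the quote criticizes amounts to reading `top_words` and deciding whether each list coheres, which is exactly the subjective judgement both citing papers point to.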