2015
DOI: 10.1007/978-3-319-18224-7_29
Fast Minimum Spanning Tree Based Clustering Algorithms on Local Neighborhood Graph

Abstract: Minimum spanning tree (MST) based clustering algorithms have been employed successfully to detect clusters of heterogeneous nature. Given a dataset of n random points, most MST-based clustering algorithms first generate a complete graph G of the dataset and then construct the MST from G. This first step is the major bottleneck of the algorithm, taking O(n^2) time. This paper proposes two algorithms, namely MST-based clustering on K-means Graph and MST-based clustering on Bi-means Graph, for redu…
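To make the abstract's setup concrete, here is a minimal sketch of the baseline the paper improves on: building an MST over the complete Euclidean graph (the O(n^2) bottleneck) and cutting the k-1 heaviest edges to form k clusters. All function names are illustrative; this is not the paper's proposed algorithm, which avoids the complete graph by working on a local neighborhood graph instead.

```python
import heapq

def mst_edges(points):
    """Lazy Prim's algorithm over the complete Euclidean graph.

    Evaluates a quadratic number of pairwise distances -- this is the
    bottleneck step that the paper's neighborhood-graph approach avoids.
    """
    n = len(points)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    in_tree = [False] * n
    in_tree[0] = True
    heap = [(dist(points[0], points[i]), 0, i) for i in range(1, n)]
    heapq.heapify(heap)
    edges = []
    while len(edges) < n - 1:
        w, u, v = heapq.heappop(heap)
        if in_tree[v]:
            continue  # stale entry for a vertex already in the tree
        in_tree[v] = True
        edges.append((w, u, v))
        for j in range(n):
            if not in_tree[j]:
                heapq.heappush(heap, (dist(points[v], points[j]), v, j))
    return edges

def mst_clusters(points, k):
    """Cut the k-1 heaviest MST edges; connected components are clusters."""
    edges = sorted(mst_edges(points))
    if k > 1:
        edges = edges[:-(k - 1)]
    parent = list(range(len(points)))
    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for _, u, v in edges:
        parent[find(u)] = find(v)
    return [find(i) for i in range(len(points))]
```

Cutting the heaviest edges is the classic Zahn-style MST clustering criterion; the two proposed algorithms keep this second phase but replace the complete-graph MST construction with a cheaper approximation.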

Cited by 8 publications (2 citation statements) | References 12 publications
“…Clustering is the method of categorising data into groups or clusters such that objects within a cluster have a high degree of similarity to one another but are quite different from objects in other clusters. The term cluster analysis itself encompasses a number of different algorithms and methods (Tree Clustering [Lin et al (2018), Liu et al (2005), Freeman (2006), Ahmed et al (2011), Lv et al (2018b), Freeman (2007), WANG et al (2009), Buttrey & Whitaker (2015), Qiu & Li (2021), Jothi et al (2015), Page (1974), Vathy-Fogarassy et al (2005), Miller & Rose (1994)], Block Clustering, k-Means Clustering Wilkin & Huang (2007) and EM algorithms) for grouping objects of similar kind into respective categories, graph-based clustering (Bai et al (2017)), hierarchical clustering (Köhn & Hubert (2014)), model-based clustering (Fraley & Raftery (1998), Fraley & Raftery (1999), Fraley & Raftery (2002)); Lloyd's K-means clustering and the progressive greedy K-means clustering (Wilkin & Huang (2007)).…”
Section: On Clustering and The Random Potts Models
confidence: 99%
“…In a quest to improve recognition performance, the use of classical image descriptors such as the Bag-of-Visual-Words (BOW) has been applied to different fields. BOW involves the extraction of features [6], [7] and construction of a codebook using an unsupervised learning algorithm such as K-means clustering [8], spectral clustering [9], local constrained linear coding for pooling clusters [10], and the use of the fast minimum spanning tree [11]. Finally, the extraction of feature vectors by the BOW approach can be achieved using a soft assignment scheme [12] or sparse ensemble learning methods [13].…”
Section: Introduction
confidence: 99%
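The Bag-of-Visual-Words pipeline described in the statement above can be sketched in two steps: learn a codebook by clustering descriptors (here with a toy K-means, where the cited work would substitute the fast-MST variant), then quantise each image's descriptors into a normalised histogram. All names are illustrative, and the random descriptors stand in for a real feature extractor, which the statement does not specify.

```python
import random

def kmeans(data, k, iters=20):
    """Toy Lloyd's K-means codebook: returns k cluster centers."""
    centers = random.sample(data, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for d in data:  # assign each descriptor to its nearest center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(d, centers[c])))
            groups[i].append(d)
        for i, g in enumerate(groups):
            if g:  # recompute center as the mean of its group
                centers[i] = [sum(col) / len(g) for col in zip(*g)]
    return centers

def bow_histogram(descriptors, centers):
    """Hard-assignment BOW encoding: count of descriptors per visual word."""
    k = len(centers)
    hist = [0] * k
    for d in descriptors:
        i = min(range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(d, centers[c])))
        hist[i] += 1
    # L1-normalise so images with different descriptor counts are comparable
    total = sum(hist) or 1
    return [h / total for h in hist]
```

The hard assignment here is the simplest encoding; the soft assignment scheme [12] mentioned in the statement would instead spread each descriptor's weight over several nearby visual words.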