Large-scale k-means clustering via variance reduction
2018. DOI: 10.1016/j.neucom.2018.03.059

Cited by 21 publications (5 citation statements). References 3 publications.
“…Seven clustering algorithms from all four types are used to cluster the different locations. Some clustering techniques require the number of clusters to be specified as an input, such as K-means and Mini-batch K-means clustering [16]. For these methods, an elbow curve is first generated to obtain the optimal value of k. Affinity propagation and Mean shift are also partitioning-based clustering methods, but the difference is that they do not require the number of clusters to be specified in advance.…”
Section: A. Background (citation type: mentioning)
confidence: 99%
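The excerpt above describes choosing k via an elbow curve for K-means and Mini-batch K-means. Below is a minimal sketch of that procedure using scikit-learn's MiniBatchKMeans; the synthetic make_blobs data, the range of k, and the batch_size value are illustrative assumptions, not details from the cited work.

import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_blobs

# Illustrative data; any (n_samples, n_features) array works here.
X, _ = make_blobs(n_samples=10_000, centers=5, random_state=0)

# Elbow curve: fit mini-batch k-means for a range of k and record the
# within-cluster sum of squares (inertia); the "elbow" of this curve
# suggests a reasonable number of clusters.
inertias = []
for k in range(2, 11):
    model = MiniBatchKMeans(n_clusters=k, batch_size=1024, random_state=0)
    model.fit(X)
    inertias.append((k, model.inertia_))

for k, inertia in inertias:
    print(f"k={k:2d}  inertia={inertia:.1f}")

Mini-batch k-means trades a small loss in inertia for much lower cost per iteration, which is why it appears in the excerpt alongside standard K-means for large location datasets.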
“…Compared to traditional clustering methods, it has a convex formulation, leading to a robust clustering result. For example, the clustering result of k-means is sensitive to the seeds, and picking good seeds is challenging [3], [14]. However, due to the convex objective function, the result of convex clustering is uniquely determined.…”
Section: Convex Clustering (citation type: mentioning)
confidence: 99%
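The excerpt contrasts convex clustering with the seed sensitivity of k-means. As a rough illustration of that sensitivity, the sketch below runs scikit-learn's KMeans with a single random initialization under several seeds and compares the spread of inertia against k-means++ seeding; the dataset, the number of clusters, and the seed range are assumptions made only for this example.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=2_000, centers=6, cluster_std=2.0, random_state=0)

# One random initialization per seed, so the spread in the final inertia
# across seeds exposes how much the result depends on the starting centers.
random_results = [
    KMeans(n_clusters=6, init="random", n_init=1, random_state=s).fit(X).inertia_
    for s in range(10)
]

# k-means++ seeding typically narrows this spread considerably.
pp_results = [
    KMeans(n_clusters=6, init="k-means++", n_init=1, random_state=s).fit(X).inertia_
    for s in range(10)
]

print("random init  min/max inertia:", min(random_results), max(random_results))
print("k-means++    min/max inertia:", min(pp_results), max(pp_results))

A convex formulation removes this dependence entirely, since every initialization converges to the same global optimum.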
“…It is common for existing kernel-based methods to reveal the underlying structure of data by calculating the pairwise similarity between samples. However, in many successful machine learning algorithms, such as dimensionality reduction [34], [35], clustering [16], [36], and recent feature selection algorithms [32], [37], [38], researchers have found that it is beneficial to preserve only the reliable local geometry as a representation of the data structure. There are two main underlying reasons.…”
Section: A. Construction of the Neighbor Kernel (citation type: mentioning)
confidence: 99%
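The excerpt argues for preserving only reliable local geometry rather than the full pairwise similarity. One common way to do this, sketched below under illustrative assumptions (the neighbor_kernel name, the RBF similarity, and the n_neighbors and gamma values are not from the cited paper), is to sparsify a dense kernel matrix to each sample's nearest neighbors and symmetrize the result.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def neighbor_kernel(X, n_neighbors=10, gamma=0.5):
    """Keep only the k-nearest-neighbor entries of an RBF kernel.

    Illustrative sketch: the full pairwise similarity is computed first,
    then every row is truncated to its n_neighbors largest entries and
    the matrix is symmetrized, so only local geometry is retained.
    """
    K = rbf_kernel(X, gamma=gamma)            # dense pairwise similarities
    np.fill_diagonal(K, 0.0)                  # ignore self-similarity
    W = np.zeros_like(K)
    for i in range(K.shape[0]):
        nn = np.argsort(K[i])[-n_neighbors:]  # indices of the most similar samples
        W[i, nn] = K[i, nn]
    return np.maximum(W, W.T)                 # symmetrize the sparse affinity

X = np.random.RandomState(0).randn(200, 5)
W = neighbor_kernel(X, n_neighbors=10)
print("average nonzero entries per row:", (W > 0).sum() / W.shape[0])

Truncating to neighbors discards the long-range similarities that are most affected by noise, which is the motivation the excerpt attributes to preserving only local structure.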