2020
DOI: 10.1007/s13042-020-01193-5
DISCERN: diversity-based selection of centroids for k-estimation and rapid non-stochastic clustering

Abstract: As one of the most ubiquitously applied unsupervised learning methods, clustering is also known to have several drawbacks. In particular, parameters such as the number of clusters and the neighborhood radius are usually unknown and hard to estimate in practice. Moreover, the stochastic nature of many of these algorithms is a considerable weakness. To address these issues, we propose DISCERN, which can serve as an initialization algorithm for K-Means, finding suita…
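The abstract presents DISCERN as a diversity-based, non-stochastic initializer for K-Means. As a rough illustration of diversity-based seeding — not DISCERN's actual procedure, which the truncated abstract does not specify — greedy farthest-point selection deterministically picks each new centroid as the point farthest from those already chosen. The function name `diversity_seed` is ours, for illustration only:

```python
import numpy as np

def diversity_seed(X, k, first=0):
    """Greedy farthest-point seeding (k-center greedy): each new centroid
    is the point farthest from all centroids chosen so far. Deterministic
    given the starting index, unlike random K-Means initialization."""
    centroids = [X[first]]
    # squared distance of every point to its nearest chosen centroid
    d2 = np.sum((X - centroids[0]) ** 2, axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(d2))  # the most "diverse" remaining point
        centroids.append(X[idx])
        d2 = np.minimum(d2, np.sum((X - X[idx]) ** 2, axis=1))
    return np.array(centroids)
```

The resulting array can be passed to a K-Means implementation as its initial centroids, replacing random seeding.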


Cited by 4 publications (2 citation statements) · References 38 publications
“…Contrastive learning [31] is a discriminative paradigm in self-supervised learning [44] that allows the utilization of self-defined pseudo labels as supervision to learn representations that are useful for various downstream tasks. It aims to encourage similar samples to cluster together while pushing diverse samples apart from each other.…”
Section: Contrastive Learning
confidence: 99%
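The quoted statement's idea — pulling similar samples together while pushing diverse samples apart — is commonly formalized as an InfoNCE-style loss. A minimal NumPy sketch, illustrative only and not the cited papers' implementation:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.5):
    """InfoNCE-style contrastive loss: each anchor should score high
    against its own positive and low against every other sample in the
    batch, which acts as the set of negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the matching positive for anchor i sits on the diagonal (i, i)
    return -np.mean(np.diag(log_probs))
```

The loss is minimized when each anchor is most similar to its own positive; mismatched anchor/positive pairings yield a higher loss.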
“…To achieve in-depth mining of underlying logic, we use CompGCN [21] to update structural representations and employ self-attention [30] to iteratively capture local temporal features, which can ensure historical decay while avoiding the issue of feature stagnation caused by temporal sparsity. By incorporating local-global features into contrastive learning [31], we generate a predictive mask vector that helps to narrow down the prediction scope and thus improves prediction accuracy.…”
Section: Introduction
confidence: 99%