2014
DOI: 10.1007/s10044-013-0364-4
A clustering ensemble framework based on selection of fuzzy weighted clusters in a locally adaptive clustering algorithm

Cited by 61 publications (31 citation statements)
References 25 publications
“…Clustering ensemble approaches with homogeneous clustering algorithms employ the same clustering algorithm when generating the ensemble pool, that is, all partitions of the ensemble pool are produced by the same clustering algorithm. The partitions of the ensemble pool in homogeneous approaches can be produced by one of the following subtypes: by employing different initializations of a given clustering algorithm, by employing different parameters (such as different numbers of clusters) with the same clustering algorithm, by employing different data projections with the same clustering algorithm, by employing different subsets of the dataset features with the same clustering algorithm, by employing metaheuristic algorithms for data clustering, and by employing different datasets with the same clustering algorithm. …”
Section: Related Work
confidence: 99%
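The excerpt above describes how a homogeneous ensemble pool is built from a single base algorithm. The following is a minimal sketch of that idea, assuming k-means as the base clusterer and varying both the random initialization and the number of clusters; the dataset, seed range, and cluster range are illustrative choices, not values taken from the cited works.

```python
# Minimal sketch: a homogeneous ensemble pool built from one base algorithm
# (k-means), varying both the initialization seed and the number of clusters.
# All concrete values here are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

ensemble_pool = []
for seed in range(10):                    # different initializations
    for k in range(2, 7):                 # different numbers of clusters
        labels = KMeans(n_clusters=k, n_init=1, random_state=seed).fit_predict(X)
        ensemble_pool.append(labels)

print(len(ensemble_pool), "base partitions in the pool")
```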
“…All empirical investigations were carried out in MATLAB 2015. The proposed technique is evaluated against some of the best strategies in the field, such as: Hybrid Bi-Partite Graph Formulation (HB_PGF), Sim-Rank Similarity (SRS), Weighted-Connected Triple (W_CT), Cluster Selection-Evidence Accumulation Clustering (CS_EAC), Weighted-Evidence Accumulation Clustering (W_EAC), Wisdom of Crowds Ensemble (WCE), Graph Partitioning with Multi-Granularity Link Analysis (GPM_GLA), Two-level Co-Association Matrix Ensemble (TCAME), Elite Cluster Selection-Evidence Accumulation Clustering (ECS_EAC), Cluster-Level Weighting-Graph Clustering (CLW_GC), and Robust Clustering Ensemble based on Iterative Fusion of base Clusters (RCEIFC). These techniques use the default parameter settings suggested by their respective authors.…”
Section: Experimentations
confidence: 99%
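Several of the baselines named above (W_EAC, CS_EAC, ECS_EAC) build on evidence accumulation over a co-association matrix. The sketch below shows a generic EAC-style consensus step under that assumption; it is an illustration of the shared idea, not an implementation of any specific baseline, and `ensemble_pool` is assumed to be a list of label vectors such as the one produced in the earlier sketch.

```python
# Generic evidence-accumulation (EAC-style) consensus: build a co-association
# matrix from the pool, turn it into a distance, and cut an average-linkage
# dendrogram at the desired number of clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def coassociation(ensemble_pool):
    n = len(ensemble_pool[0])
    C = np.zeros((n, n))
    for labels in ensemble_pool:
        labels = np.asarray(labels)
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / len(ensemble_pool)          # fraction of partitions agreeing

def eac_consensus(ensemble_pool, n_clusters):
    dist = 1.0 - coassociation(ensemble_pool)
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# consensus = eac_consensus(ensemble_pool, n_clusters=4)
```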
“…In fuzzy clustering, a novel method has been developed for selecting the best fuzzy base clusterings by defining a new fuzzy diversity measure and a fuzzy quality measure; the results improve over other fuzzy clustering ensemble methods. Furthermore, the fuzzy weighted locally adaptive clustering algorithm (FWLAC) was introduced to select features that carry more information than the others. Finding the best parameters for FWLAC is an issue, although the performance of FWLAC does not depend strongly on these parameters.…”
Section: Introduction
confidence: 99%
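The excerpt above refers to selecting base clusterings by combining a quality and a diversity measure. The cited work defines fuzzy measures; the sketch below substitutes crisp proxies purely for illustration (silhouette as quality, one minus NMI against already-selected partitions as diversity) and a greedy selection loop. The function name, the proxies, and the weighting parameter `alpha` are assumptions, not the paper's definitions; `X` and `ensemble_pool` are as in the earlier sketches.

```python
# Hedged sketch: greedy selection of base clusterings by a weighted sum of a
# quality proxy (silhouette) and a diversity proxy (1 - NMI to the partitions
# already selected). Illustrative only; not the cited fuzzy measures.
import numpy as np
from sklearn.metrics import silhouette_score, normalized_mutual_info_score

def select_partitions(X, ensemble_pool, n_select=5, alpha=0.5):
    quality = np.array([silhouette_score(X, p) for p in ensemble_pool])
    selected = [int(np.argmax(quality))]    # start from the best partition
    remaining = set(range(len(ensemble_pool))) - set(selected)
    while len(selected) < n_select and remaining:
        def score(i):
            div = np.mean([1.0 - normalized_mutual_info_score(
                ensemble_pool[i], ensemble_pool[j]) for j in selected])
            return alpha * quality[i] + (1.0 - alpha) * div
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# chosen = select_partitions(X, ensemble_pool, n_select=5)
```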
“…Hence, two methods are suggested to solve this problem: either the entropy of the outputs is taken directly, or an ensemble drawn from the unstable area is used. To obtain better results than the full ensemble, the parameters must lie in the unstable area. One article proposed two large-scale clustering algorithms: ultrascalable spectral clustering (U-SPEC) and ultrascalable ensemble clustering (U-SENC).…”
Section: Introduction
confidence: 99%
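The "entropy of the outputs" idea above can be illustrated, under loose assumptions, by measuring how uncertain the pool is about pairs of points. The sketch below computes the mean binary entropy of pairwise co-association values as a simple instability proxy (high values suggesting the unstable area); this is an illustrative stand-in, not the criterion used in the cited article.

```python
# Hedged sketch: ensemble instability as the mean binary entropy of pairwise
# co-association values. Illustrative proxy only.
import numpy as np

def coassociation_entropy(ensemble_pool, eps=1e-12):
    n = len(ensemble_pool[0])
    C = np.zeros((n, n))
    for labels in ensemble_pool:
        labels = np.asarray(labels)
        C += (labels[:, None] == labels[None, :]).astype(float)
    p = C / len(ensemble_pool)              # pairwise agreement frequencies
    iu = np.triu_indices(n, k=1)            # each pair counted once
    p = np.clip(p[iu], eps, 1.0 - eps)
    h = -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))
    return float(h.mean())                  # high value -> unstable area

# instability = coassociation_entropy(ensemble_pool)
```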