A clustering method based on boosting
2004
DOI: 10.1016/j.patrec.2003.12.018

Cited by 67 publications (47 citation statements)
References 4 publications
“…The values are the average and standard deviation of NMI over 50 independent runs of the algorithms. We also implemented the algorithms proposed in [1,2] and compared their performance with our method. Additionally, we applied different consensus function learning methods, namely voting [20], HGPA, and MCLA [5], to the final ensemble.…”
Section: Datasets
Mentioning confidence: 99%
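The evaluation protocol quoted above can be made concrete with a short sketch: cluster the data repeatedly with different random seeds and report the mean and standard deviation of NMI against the ground-truth labels. The data set, clusterer, and number of clusters below are placeholder choices for illustration, not taken from the cited work.

```python
# Illustrative sketch of the quoted protocol: average NMI over 50
# independent runs of a clustering algorithm. Iris and k-means are
# assumptions chosen only to make the example self-contained.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import normalized_mutual_info_score

X, y_true = load_iris(return_X_y=True)

n_runs = 50  # matches the "50 independent runs" in the statement
scores = []
for seed in range(n_runs):
    labels = KMeans(n_clusters=3, n_init=10, random_state=seed).fit_predict(X)
    scores.append(normalized_mutual_info_score(y_true, labels))

print(f"NMI over {n_runs} runs: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```

In the quoted setup, the per-run partitions would additionally be merged with a consensus function such as voting, HGPA, or MCLA before scoring the final ensemble.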
“…The next four columns report the performance of ETree itself, together with its use as the base model of our boosting algorithm (CB) and with the voting [20], HGPA, and MCLA [5] consensus function learning methods. The next two columns give the results of using the methods of Frossyniotis et al. [1] and Topchy et al. [2] with ETree. For those methods we report only their best performance across the different consensus functions.…”
Section: Datasets
Mentioning confidence: 99%
“…Some authors have proposed to combine different clustering algorithms in a boosting-like framework. For instance, [20] and [10] introduced weighted re-sampling of data points according to how reliably they are classified. In [16], a general iterative clustering algorithm is presented that combines several algorithms by keeping track of both point weights and a membership coefficient for each point–model pair.…”
Section: Introduction
Mentioning confidence: 99%
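The boosting-like re-sampling idea described in the statement above can be sketched in a few lines: points that were clustered with low confidence receive a higher sampling weight in the next round. This is a minimal sketch of the general idea only, not the method of [20], [10], or [16]; the confidence measure (margin between the two nearest centroids), the exponential weight update, and all parameters are illustrative assumptions.

```python
# Sketch of boosting-like clustering: weighted re-sampling of points,
# with weights raised for points the current clustering handles unreliably.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)
n = len(X)
weights = np.full(n, 1.0 / n)          # uniform initial point weights
rng = np.random.default_rng(0)
partitions = []

for t in range(5):                      # a few boosting rounds (assumed count)
    idx = rng.choice(n, size=n, replace=True, p=weights)  # weighted re-sampling
    km = KMeans(n_clusters=3, n_init=10, random_state=t).fit(X[idx])
    partitions.append(km.predict(X))    # partition of the full data set

    # Assumed confidence measure: margin between the nearest and
    # second-nearest centroid distances, normalized to [0, 1].
    d = km.transform(X)                 # distances to all centroids
    d.sort(axis=1)
    margin = d[:, 1] - d[:, 0]
    conf = margin / (margin.max() + 1e-12)

    # Up-weight unreliably clustered points, then renormalize.
    weights = weights * np.exp(-conf)
    weights /= weights.sum()
```

The collected `partitions` would then be merged with a consensus function (e.g. voting, HGPA, or MCLA, as in the statements above) to produce the final clustering.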