2003
DOI: 10.1007/3-540-44938-8_31
A New Ensemble Diversity Measure Applied to Thinning Ensembles

Cited by 62 publications
(43 citation statements)
references
References 9 publications
“…The work presented here extends our previous work [8] by testing additional thinning methods, using more datasets, and evaluating diversity and thinning algorithms on ensembles created by several different algorithms.…”
Section: Introduction
confidence: 71%
“…We reference the terminology described in [8] to describe the removal of decision trees from a forest. Since this process can be likened to "thinning a forest", we call it "thinning".…”
Section: A Review Of Other Methods
confidence: 99%
“…Though it does not concern generalisation performance, a surprising result is that the number of trees in the "best" subset found during the selection process is often very small regarding to the size of the initial forest, sometimes even approaching 30% of the amount of available trees (Musk, Segment and Vehicle). Classifier selection has already shown to be a powerful tool for obtaining significant improvement with ensemble of classifiers [25][26][27], but this result leads us to think that it would be interesting to further focus on the number of trees that have to be grown in a forest to obtain significant improvement comparing to RF induced with Forest-RI, and rather according to generalisation accuracy.…”
Section: M(i) ← Run A McNemar Test Of Significance With Classifiers
confidence: 99%
“…Margineantu and Dietterich 30 used the kappa statistic to prune the adaptive boosting ensemble to a prespecified size. A similar approach, called "thinning the ensemble", can be found in the study by Banfield et al. 31 All these techniques start with a set of subclassifiers and select a subset to construct the final ensemble. Note that the size of the initial ensemble needs to be specified by the designer.…”
Section: Introduction
confidence: 99%
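The last statement describes the general subset-selection scheme: start from a full ensemble and select a subset to form the final classifier. A minimal sketch of one such approach, greedy backward thinning by validation accuracy, is below. This is an illustrative assumption, not the specific algorithm of the cited paper; all function names (`vote`, `accuracy`, `thin_ensemble`) are hypothetical, and each classifier is represented simply by its list of predictions on a validation set.

```python
# Hypothetical sketch of greedy backward "thinning": repeatedly drop the
# ensemble member whose removal yields the best majority-vote accuracy
# on a held-out validation set, until a target size is reached.
from collections import Counter


def vote(preds_per_clf, i):
    """Majority vote of the ensemble on validation example i."""
    return Counter(p[i] for p in preds_per_clf).most_common(1)[0][0]


def accuracy(preds_per_clf, y_val):
    """Fraction of validation examples the ensemble vote gets right."""
    correct = sum(vote(preds_per_clf, i) == y for i, y in enumerate(y_val))
    return correct / len(y_val)


def thin_ensemble(preds_per_clf, y_val, target_size):
    """Greedily remove members until only target_size remain."""
    ensemble = list(preds_per_clf)
    while len(ensemble) > target_size:
        # Try removing each member; commit the removal that leaves the
        # highest validation accuracy.
        best = max(
            range(len(ensemble)),
            key=lambda j: accuracy(ensemble[:j] + ensemble[j + 1:], y_val),
        )
        del ensemble[best]
    return ensemble
```

A diversity-based variant, closer in spirit to the paper's topic, would replace the `accuracy` criterion with a pairwise diversity measure (e.g. the kappa statistic mentioned above) when scoring candidate removals.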