2004
DOI: 10.1007/978-3-540-25966-4_22
A Comparison of Ensemble Creation Techniques

Cited by 36 publications (27 citation statements) · References 9 publications
“…Then we used Breiman's random forest (RF) algorithm [23], with 250 unpruned trees per partition with both unweighted and weighted (RFW) predictions. Its accuracy was evaluated in [24] and shown to be comparable with or better than other well known ensemble generation techniques. The number of random features chosen at each decision tree node was log2(n) + 1 given n features.…”
Section: Classification System
confidence: 95%
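The setup described in the statement above can be sketched with scikit-learn. This is a minimal illustration, not the citing paper's code: scikit-learn's `max_features="log2"` selects log2(n) features per split, which only approximates the log2(n) + 1 rule the statement quotes, and the dataset here is synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data; the cited work used its own partitions.
X, y = make_classification(n_samples=200, n_features=16, random_state=0)

# 250 unpruned trees (max_depth=None means trees grow until pure),
# with a log2-scaled random feature subset considered at each node.
rf = RandomForestClassifier(
    n_estimators=250,
    max_features="log2",
    max_depth=None,
    random_state=0,
)
rf.fit(X, y)
print(len(rf.estimators_))  # the forest holds 250 fitted trees
```

Unweighted prediction (plain majority vote over trees) is scikit-learn's default; a weighted variant like RFW would require combining per-tree votes manually.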
“…The final classifier, additionally, usually achieves a high degree of accuracy in the test set as various authors have shown both theoretically and empirically (Banfield et al, 2004; Bauer & Kohavi, 1999; Dietterich, 2000; Drucker & Cortes, 1996; Drucker, Cortes, Jackel, LeCun, & Vapnik, 1994; Opitz & Maclin, 1999; Schapire, Freund, Bartlett, & Lee, 1997). Even though there are several versions of the boosting algorithms (Friedman, Hastie, & Tibshirani, 2000), the most widely used is the one by Freund and Schapire (1996) which is known as AdaBoost.…”
Section: Boosting
confidence: 97%
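The AdaBoost algorithm the statement names is available off the shelf in scikit-learn; the snippet below is a generic sketch on synthetic data, not the cited experiment. By default the base learner is a depth-1 decision stump, reweighted each round so later stumps focus on previously misclassified examples.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Synthetic two-class data for illustration only.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# 50 boosting rounds; each round fits a weak learner on reweighted data
# and the final classifier is a weighted vote over all rounds.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```

Boosting may stop early if a round achieves zero weighted error, so `clf.estimators_` can hold fewer than 50 learners.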
“…Once the process has finished, the single classifiers obtained are combined into a final, highly accurate classifier in the training set. The final classifier therefore usually achieves a high degree of accuracy in the test set as various authors have shown both theoretically and empirically [5][6][7][8][9][10][11][12][13][14][15].…”
Section: Boosting
confidence: 99%
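The combination step this statement refers to can be illustrated in a few lines. This is a hedged sketch of AdaBoost's standard weighted majority vote (with hypothetical vote and error values): each round's classifier votes with weight alpha_t = 0.5 · ln((1 − err_t) / err_t), and the ensemble outputs the sign of the weighted sum for labels in {−1, +1}.

```python
import numpy as np

def weighted_vote(predictions, errors):
    """Combine per-round votes into a final prediction.

    predictions: (T, N) array of +/-1 votes from T classifiers on N examples.
    errors:      length-T sequence of each classifier's weighted training error.
    """
    errors = np.asarray(errors, dtype=float)
    # Lower-error classifiers receive larger voting weights.
    alphas = 0.5 * np.log((1.0 - errors) / errors)
    return np.sign(alphas @ np.asarray(predictions))

# Hypothetical votes from three weak classifiers on three examples.
preds = np.array([[ 1,  1, -1],
                  [ 1, -1, -1],
                  [-1,  1, -1]])
errs = [0.1, 0.3, 0.4]
print(weighted_vote(preds, errs))
```

Note how the first classifier (error 0.1) dominates: its vote decides the first example even though the third classifier disagrees.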