Comparing pure parallel ensemble creation techniques against bagging
Third IEEE International Conference on Data Mining
DOI: 10.1109/icdm.2003.1250970

Cited by 16 publications (12 citation statements)
References 16 publications

“…Bagging is more accurate than variations of random forests, randomized C4.5 [13], and random subspaces [14]. For data sets with many conflicting cases, bagging is much better than boosting [15].…”
Section: Introduction (mentioning)
Confidence: 99%
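
The quoted comparison assumes familiarity with how bagging builds its ensemble. Below is a minimal sketch in Python using scikit-learn; the dataset, base learner, and ensemble size are illustrative assumptions, not taken from the cited papers.

```python
# Minimal bagging sketch (assumed setup, not from the cited papers):
# each tree is fit on an independent bootstrap resample of the training
# set, and the ensemble predicts by majority vote.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

bagging = BaggingClassifier(
    DecisionTreeClassifier(),  # unpruned trees, the usual base learner
    n_estimators=100,          # ensemble size is an arbitrary choice here
    bootstrap=True,            # resample with replacement, as in Breiman's bagging
    random_state=0,
)

print("bagged trees:", cross_val_score(bagging, X, y, cv=5).mean())
```
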
“…Representative parallel ensemble approaches include Bagging [4] and Random Forests [5]; and examples of sequential ensemble approaches include AdaBoost [8] and Stochastic Gradient Boosting [9]. However, despite the emergence of many ensemble approaches, it has been shown that none is significantly superior to another over a range of data sets [10].…”
Section: Introduction (mentioning)
Confidence: 99%
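
The distinction the authors draw, between parallel ensembles whose members are trained independently and sequential ensembles whose members depend on earlier ones, can be made concrete with a short sketch. The scikit-learn setup below is an assumed illustration; the synthetic data and parameters are not from the cited work.

```python
# Parallel vs. sequential ensembles (illustrative assumptions throughout).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Parallel: each tree sees its own bootstrap sample and feature subsets,
# so all members could be trained at the same time.
parallel = RandomForestClassifier(n_estimators=100, random_state=0)

# Sequential: each round reweights the examples the previous members
# misclassified, so members must be trained one after another.
sequential = AdaBoostClassifier(n_estimators=100, random_state=0)

for name, model in [("random forest", parallel), ("adaboost", sequential)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```
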
“…Some examples of ensemble techniques are bagging, boosting, and random forests [31]. Usually, ensemble techniques are not used for strong classifiers such as support vector machines. However, the question arises whether ensemble techniques can improve the results of such classifiers even further.…”
Section: New Kernels For Support Vector Machines (mentioning)
Confidence: 99%
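
One way to probe the question this passage raises is to bag a support vector machine and compare it against a single SVM. The sketch below assumes scikit-learn and synthetic data; it illustrates the idea only and is not an experiment from the cited paper.

```python
# Does bagging help a strong learner? Compare a single SVM against a
# bagged ensemble of SVMs (all settings here are assumptions).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

single = SVC(kernel="rbf", C=1.0)
bagged = BaggingClassifier(SVC(kernel="rbf", C=1.0),
                           n_estimators=25, random_state=0)

for name, model in [("single SVM", single), ("bagged SVMs", bagged)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```
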
“…[1-3,18] It has been discovered that combining different classifications into a consensus model can improve the overall results of the classification. The basic algorithms behind these were decision tree methods like recursive partitioning, ensemble methods [22,23,29-32] like adaptive boosting [33], and support vector machines [34]. However, the limiting but most important part is the access to validated data.…”
Section: Introduction (mentioning)
Confidence: 99%