Proceedings 2001 IEEE International Conference on Data Mining
DOI: 10.1109/icdm.2001.989601
A comparison of stacking with meta decision trees to bagging, boosting, and stacking with other methods

Cited by 37 publications (30 citation statements)
References 4 publications
“…While conducting this study, the study of Džeroski and Ženko [6], and a few other recent studies [16,13], we have encountered quite a few contradictions between claims in the recent literature on stacking and our experimental results. For example, Merz [8] claims that SCANN is clearly better than the oracle selecting the best classifier (which should perform even better than SelectBest).…”
Section: Conclusion and Further Work
Mentioning, confidence: 56%
“…On the other hand, MDTs perform only slightly better than SCANN and selecting the best classifier with cross-validation (SelectBest). Ženko et al. [16] report that MDTs perform slightly worse compared to stacking with MLR. Overall, SCANN, MDTs, stacking with MLR and SelectBest seem to perform at about the same level.…”
Section: Recent Advances
Mentioning, confidence: 99%
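The SelectBest baseline mentioned in this excerpt simply returns the single base classifier with the best cross-validated performance. A minimal sketch, assuming scikit-learn and an illustrative choice of base learners and dataset (none of which come from the cited studies):

```python
# Hypothetical sketch of SelectBest: estimate each base classifier's accuracy
# with cross-validation and keep only the best one.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

base_learners = {
    "tree": DecisionTreeClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
}

# Mean accuracy over 10-fold cross-validation for each candidate.
cv_scores = {name: cross_val_score(clf, X, y, cv=10).mean()
             for name, clf in base_learners.items()}

# SelectBest keeps the single classifier with the highest estimate
# and refits it on all of the training data.
best_name = max(cv_scores, key=cv_scores.get)
selected_model = base_learners[best_name].fit(X, y)
print(best_name, cv_scores)
```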
“…Therefore, it is only natural to consider that a method's efficiency can be affected by lucky, or unlucky, sample choices. A large number of studies empirically compare Bagging and Boosting with other classification methods, e.g., [18,15,19,6,22,11,14,10,13,23]. Despite such an abundant literature on comparisons, very few research papers explicitly take random choices of samples into account.…”
Section: Introduction
Mentioning, confidence: 99%
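The point about lucky or unlucky sample choices can be made concrete by repeating a Bagging-versus-Boosting comparison over several random seeds and looking at the spread of the scores. A hedged sketch, with scikit-learn, an illustrative dataset, and ensemble sizes chosen purely for demonstration:

```python
# Repeat a Bagging vs. Boosting comparison over several random seeds to make
# the sampling-induced variability visible (an assumed, illustrative setup).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

bagging_scores, boosting_scores = [], []
for seed in range(10):
    bag = BaggingClassifier(n_estimators=25, random_state=seed)
    boost = AdaBoostClassifier(n_estimators=25, random_state=seed)
    bagging_scores.append(cross_val_score(bag, X, y, cv=5).mean())
    boosting_scores.append(cross_val_score(boost, X, y, cv=5).mean())

# A single run would hide this spread; reporting mean and standard deviation
# over the seeds makes the comparison less dependent on one lucky sample.
print("bagging :", np.mean(bagging_scores), "+/-", np.std(bagging_scores))
print("boosting:", np.mean(boosting_scores), "+/-", np.std(boosting_scores))
```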
“…Other types of committees, stacking and grading, were presented in [24,22,3,25] and [21]. In place of voting or weighting schemes, these committees combine models via stacking.…”
Section: Introduction
Mentioning, confidence: 99%
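Combining models via stacking, as described in this excerpt, means training a meta-level learner on the base classifiers' cross-validated predictions instead of voting over them. The sketch below assumes scikit-learn and approximates stacking with multi-response linear regression (MLR) by fitting one linear model per class indicator; the learners and dataset are illustrative:

```python
# A minimal stacking sketch: meta-level attributes are the base classifiers'
# cross-validated class-probability predictions, and the combiner is a
# multi-response linear regression (one linear model per class indicator).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
base_learners = [DecisionTreeClassifier(random_state=0),
                 GaussianNB(),
                 KNeighborsClassifier()]

# Cross-validated probabilities avoid leaking each example's own label
# into the meta-level training data.
meta_X = np.hstack([cross_val_predict(clf, X, y, cv=10, method="predict_proba")
                    for clf in base_learners])

# One indicator column per class; LinearRegression handles the multi-output
# target directly, which plays the role of MLR here.
n_classes = len(np.unique(y))
mlr = LinearRegression().fit(meta_X, np.eye(n_classes)[y])

# The predicted class is the one with the largest linear response.
meta_predictions = mlr.predict(meta_X).argmax(axis=1)
print("meta-level training accuracy:", (meta_predictions == y).mean())
```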
“…Such knowledge is used to make the final decision of the committee. To approximate model certainty, a linear regression [24,22] or meta-decision trees [25] may be used. Other combining schemes were proposed in [14,16].…”
Section: Introduction
Mentioning, confidence: 99%
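Meta-decision trees proper are induced with a dedicated algorithm that decides, per example, which base classifier to trust. The sketch below only approximates that idea with an ordinary decision tree over simple certainty-style meta-attributes (highest predicted probability and entropy of the predicted class distribution); all concrete names, learners, and data are assumptions for illustration:

```python
# Illustrative stand-in for meta-decision trees: describe every example by
# certainty-style meta-attributes of each base classifier and let an ordinary
# decision tree pick which base classifier's prediction to return.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
base_learners = [GaussianNB(), KNeighborsClassifier()]

# Cross-validated class-probability predictions of every base learner.
probas = [cross_val_predict(clf, X, y, cv=10, method="predict_proba")
          for clf in base_learners]

def certainty_features(p):
    """Highest class probability and entropy of the predicted distribution."""
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return np.column_stack([p.max(axis=1), entropy])

meta_X = np.hstack([certainty_features(p) for p in probas])

# Meta-class: index of a base learner that gets the example right
# (defaulting to learner 0 when none of them does).
correct = np.array([p.argmax(axis=1) == y for p in probas])
meta_y = np.where(correct.any(axis=0), correct.argmax(axis=0), 0)

meta_tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(meta_X, meta_y)

# The meta-tree decides, per example, whose prediction the committee returns.
chosen = meta_tree.predict(meta_X)
base_votes = [p.argmax(axis=1) for p in probas]
committee_pred = np.choose(chosen, base_votes)
print("committee training accuracy:", (committee_pred == y).mean())
```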