2004
DOI: 10.1023/b:mach.0000015881.36452.6e

Is Combining Classifiers with Stacking Better than Selecting the Best One?

Abstract: We empirically evaluate several state-of-the-art methods for constructing ensembles of heterogeneous classifiers with stacking and show that they perform (at best) comparably to selecting the best classifier from the ensemble by cross validation. Among state-of-the-art stacking methods, stacking with probability distributions and multi-response linear regression performs best. We propose two extensions of this method, one using an extended set of meta-level features and the other using multi-response model trees to learn at the meta-level. …
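The abstract describes stacking heterogeneous classifiers on class-probability meta-features and comparing the result against simply selecting the best base classifier by cross validation. A minimal sketch of that comparison follows, assuming scikit-learn; the dataset, the three base learners, and the use of logistic regression in place of the paper's multi-response linear regression meta-learner are illustrative assumptions, not the authors' setup.

```python
# Sketch: stacking on class-probability meta-features vs. selecting the
# best single base classifier by cross-validation (assumes scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

base_learners = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("nb", GaussianNB()),
]

# Baseline: pick the single best base classifier by cross-validation.
best_name, best_score = max(
    ((name, cross_val_score(clf, X, y, cv=10).mean())
     for name, clf in base_learners),
    key=lambda t: t[1],
)

# Stacking: the meta-level features are the base learners' predicted
# class-probability distributions (stack_method="predict_proba").
# Logistic regression stands in for the paper's multi-response linear
# regression, which scikit-learn does not provide as a meta-learner.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",
    cv=10,
)
stack_score = cross_val_score(stack, X, y, cv=10).mean()

print(f"best single ({best_name}): {best_score:.3f}")
print(f"stacking:                  {stack_score:.3f}")
```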

Citations: Cited by 706 publications (370 citation statements), published 2005–2024.
References: 20 publications.
“…The problem with these rules is that it is difficult to estimate different weights for the scores belonging to each class. Therefore, in our work we stack the outputs of the different classifiers into a single vector and then train a Support Vector Machine (SVM) to make a combined decision [8]. …”
Section: B. Fusion (mentioning)
confidence: 99%
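The fusion scheme quoted above, concatenating per-class scores from several classifiers into one vector and training an SVM on it, can be sketched as follows. The iris data and the two base classifiers are assumptions for illustration, not the citing paper's code.

```python
# Sketch of score-level fusion via stacking into an SVM (assumes scikit-learn).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
base = [GaussianNB(), LogisticRegression(max_iter=1000)]

# Out-of-fold class-probability scores from each base classifier,
# concatenated into one meta-level feature vector per sample.
meta_X = np.hstack([
    cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    for clf in base
])

# An SVM trained on the stacked score vectors makes the combined decision.
fusion = SVC().fit(meta_X, y)
```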
“…A Model Tree predictor was chosen as a metaregressor not only because it achieved good results in the initial experiments, but also because it is a state-of-the-art regression method and it has already been successfully used as a meta-classifier for stacking (Dzeroski & Zenko, 2004), outperforming all the other combining methods tested. In order to determine which subset of algorithms can provide the best ensemble, we built four models by stacking: one containing the square-root SPKFs and EKF, and the others leaving one of them out.…”
Section: Stacking of Sigma-Point Kalman Filters (mentioning)
confidence: 99%
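As a rough illustration of the setup quoted above, the sketch below stacks several base regressors under a tree-based meta-regressor. scikit-learn has no M5-style model trees, so a plain DecisionTreeRegressor stands in for the Model Tree meta-regressor, and generic regressors stand in for the Kalman filter variants; everything here is an assumption for illustration, not the cited study's code.

```python
# Sketch: stacking at the regression level with a tree-based meta-regressor.
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=300, n_features=8, noise=5.0, random_state=0)

stack = StackingRegressor(
    estimators=[("ridge", Ridge()),
                ("knn", KNeighborsRegressor()),
                ("svr", SVR())],
    # DecisionTreeRegressor stands in for an M5-style model tree.
    final_estimator=DecisionTreeRegressor(max_depth=4, random_state=0),
    cv=5,
).fit(X, y)

# The "leaving one of them out" ensembles from the quoted passage can be
# built by dropping one entry from the estimators list and refitting.
```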
“…For example, [13] generates ensembles of heterogeneous classifiers using stacking. [11] proposed a framework for generating hundreds of thousands of classifiers in parallel in a distributed environment using small subsamples of the dataset.…”
Section: Introduction (mentioning)
confidence: 99%