2000
DOI: 10.1007/3-540-45372-5_6

Combining Multiple Models with Meta Decision Trees

Abstract: The paper introduces meta decision trees (MDTs), a novel method for combining multiple classifiers. Instead of giving a prediction, MDT leaves specify which classifier should be used to obtain a prediction. We present an algorithm for learning MDTs based on the C4.5 algorithm for learning ordinary decision trees (ODTs). An extensive experimental evaluation of the new algorithm is performed on twenty-one data sets, combining classifiers generated by five learning algorithms: two algorithms for learning decision…
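The core idea — leaves that name a base classifier rather than a class — can be sketched as follows. This is an illustrative toy, not the paper's C4.5-based learner: the class name `MDTNode`, the meta-attribute `confidence`, and the stand-in base classifiers are all assumptions made for the example.

```python
# Toy sketch of meta decision tree (MDT) prediction: interior nodes test
# meta-level attributes of an example; leaves name a base classifier to use.
# Illustrative only -- the paper induces MDTs with a modified C4.5.

class MDTNode:
    def __init__(self, attribute=None, threshold=None, left=None, right=None,
                 classifier=None):
        self.attribute = attribute      # meta-attribute tested at this node
        self.threshold = threshold
        self.left, self.right = left, right
        self.classifier = classifier    # set only at leaves

    def predict(self, meta_attrs, example, base_classifiers):
        if self.classifier is not None:
            # Leaf: delegate the prediction to the chosen base classifier.
            return base_classifiers[self.classifier](example)
        branch = (self.left if meta_attrs[self.attribute] <= self.threshold
                  else self.right)
        return branch.predict(meta_attrs, example, base_classifiers)

# Two stand-in base classifiers and an MDT that chooses between them.
base = {"stump": lambda x: 0, "knn": lambda x: 1}
mdt = MDTNode(attribute="confidence", threshold=0.8,
              left=MDTNode(classifier="knn"),      # low confidence -> knn
              right=MDTNode(classifier="stump"))   # high confidence -> stump

print(mdt.predict({"confidence": 0.9}, None, base))  # stump is chosen -> 0
```

Note the contrast with stacking: the meta-level model routes the example to one base classifier instead of blending all base predictions.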

Cited by 109 publications (105 citation statements)
References 8 publications (10 reference statements)
“…According to [39], the entropy of the probability distribution reflects the certainty of the classifier in the predicted class value. The entropy of the probability distribution is computed as:…”
Section: Active Learning: Uncertainty Methods B
confidence: 99%
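The entropy measure referred to in the statement above — higher entropy meaning a less certain classifier — can be sketched in a few lines. The function name is an assumption; the formula is the standard Shannon entropy of the predicted class distribution.

```python
import math

def prediction_entropy(probs):
    """Entropy of a classifier's predicted class distribution.

    Higher entropy means the classifier is less certain of the class value;
    uncertainty-based active learning queries such examples first.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

# A uniform distribution is maximally uncertain; a one-hot one is certain.
print(prediction_entropy([0.5, 0.5]))   # maximal for two classes: 1.0
print(prediction_entropy([0.9, 0.1]))   # low entropy: fairly certain
```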
“…In [7], a variant of stacking using Multi-response Logistic Regression (MLR) as the meta-classifier is proposed; it makes full use of the class distributions from the base classifiers to obtain a good ensemble. In [8], stacking with a modified decision tree as the meta-classifier is proposed. In [9], StackingC, a variant of stacking with MLR, is proposed for multi-class classification.…”
Section: Related Work
confidence: 99%
“…Exhaustive search would, without doubt, incur a large time cost and is impractical. In earlier years, researchers used fixed configurations for classification [7,8,9,10], but the results were uneven across datasets because configuration selection is domain-specific.…”
Section: Introduction
confidence: 99%
“…One of the approaches to meta-learning develops methods of decision committee construction and various stacking strategies, also performing nontrivial analysis of member models to draw committee conclusions (Chan and Stolfo, 1996;Prodromidis and Chan, 2000;Todorovski and Dzeroski, 2003;Duch and Itert, 2003;Jankowski and Grąbczewski, 2005;Troć and Unold, 2010). Another group of meta-learning enterprises (Pfahringer et al, 2000;Brazdil et al, 2003;Bensusan et al, 2000;Peng et al, 2002) is based on data characterization techniques (characteristics of data like the number of features/vectors/classes, feature variances, information measures on features, also from decision trees, etc.)…”
Section: Introduction
confidence: 99%