2014 22nd International Conference on Pattern Recognition (ICPR)
DOI: 10.1109/icpr.2014.221

On Meta-learning for Dynamic Ensemble Selection

Abstract: In this paper, we propose a novel dynamic ensemble selection framework using meta-learning. The framework is divided into three steps. In the first step, the pool of classifiers is generated from the training data. The second phase is responsible for extracting the meta-features and training the meta-classifier. Five distinct sets of meta-features are proposed, each corresponding to a different criterion for measuring the level of competence of a classifier for the classification of a given query sample. The meta-fea…
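
As a rough illustration of those three steps (not the authors' implementation), the sketch below uses scikit-learn with a single stand-in meta-feature, the base classifier's accuracy in a k-nearest-neighbor region of competence, in place of the five proposed sets; all names, data splits, and hyperparameters are illustrative.

```python
# Minimal sketch of the three-step idea: (1) generate a pool of classifiers,
# (2) extract meta-features describing each base classifier's behaviour around
# a sample and train a meta-classifier, (3) at test time keep only classifiers
# judged competent and combine them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1500, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_dsel, X_test, y_dsel, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Step 1: pool generation (bagged shallow decision trees as a stand-in pool).
pool = BaggingClassifier(DecisionTreeClassifier(max_depth=5),
                         n_estimators=10, random_state=0).fit(X_train, y_train)

# Step 2: meta-feature extraction and meta-classifier training. One
# illustrative meta-feature per (classifier, sample) pair: the classifier's
# accuracy in the k-NN region of competence, labelled by whether the
# classifier got that sample right.
knn = NearestNeighbors(n_neighbors=7).fit(X_dsel)
regions = knn.kneighbors(X_dsel, return_distance=False)

def meta_features(clf, region):
    return [clf.score(X_dsel[region], y_dsel[region])]   # local accuracy only

V, Z = [], []
for clf in pool.estimators_:
    correct = clf.predict(X_dsel) == y_dsel
    for j, region in enumerate(regions):
        V.append(meta_features(clf, region))
        Z.append(int(correct[j]))
meta_clf = LogisticRegression().fit(np.array(V), np.array(Z))

# Step 3: dynamic selection. Keep only the classifiers the meta-classifier
# deems competent for this query, then majority-vote their predictions.
def predict_one(x):
    region = knn.kneighbors([x], return_distance=False)[0]
    votes = [clf.predict([x])[0] for clf in pool.estimators_
             if meta_clf.predict([meta_features(clf, region)])[0] == 1]
    votes = votes or [pool.predict([x])[0]]   # fall back to the whole pool
    return max(set(votes), key=votes.count)

print(predict_one(X_test[0]), y_test[0])
```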


Cited by 25 publications (40 citation statements); References: 32 publications.

“…However, in this paper, we compare the results provided in the extensive analysis made in [25]. 4. META-DES (MDES): The META-DES method is a dynamic ensemble selection method based on the assumption that the dynamic ensemble selection problem can be considered as a meta-problem [21]. This meta-problem uses different criteria regarding the behavior of a base classifier c_i in order to decide whether it is competent enough to classify a given test sample.…”
Section: Comparative Analysis: Ensemble Generation Methods (mentioning)
Confidence: 99%
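
In practice, this meta-problem view can be exercised through the DESlib package, which provides a META-DES implementation with a scikit-learn-style fit/predict interface. The snippet below is a usage sketch under that assumption; the pool, data splits, and (default) hyperparameters are illustrative.

```python
# Sketch of applying META-DES via DESlib (assumes DESlib's METADES class and
# its fit/predict/score interface; install with `pip install deslib`).
from deslib.des import METADES
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1500, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_dsel, X_test, y_dsel, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Pool of base classifiers; they must support predict_proba, since some
# meta-features are computed from posterior probabilities.
pool = BaggingClassifier(DecisionTreeClassifier(max_depth=5),
                         n_estimators=10, random_state=0).fit(X_train, y_train)

# The meta-classifier decides, per query sample, which base classifiers are
# competent; only those are combined for the final prediction.
metades = METADES(pool)            # library defaults for k, Kp, threshold, etc.
metades.fit(X_dsel, y_dsel)        # DSEL: the dynamic selection dataset
print("accuracy:", metades.score(X_test, y_test))
```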
“…In the dynamic selection stage, DFP chooses the classifiers that were preprocessed by the pruning stage. Margin Distance Minimization (MDM) pruning: the margin distance [37] defines a quantity c between the outputs of the ensemble classifiers and the labels. The quantity c equals 1 if the label is correctly classified.…”
Section: Comparison To Other Pruning Methods (mentioning)
Confidence: 99%
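
The sketch below illustrates the margin-distance idea as it is commonly described (a signature of +1 for a correct prediction and -1 otherwise, with classifiers greedily added so the ensemble's average signature approaches a reference point with small positive components). It is an assumption-laden illustration, not necessarily the exact formulation used in the cited work; the reference value p and the greedy strategy are illustrative choices.

```python
# Sketch of Margin Distance Minimization (MDM) pruning: build a signature
# matrix C with C[i, j] = +1 if classifier i labels validation sample j
# correctly and -1 otherwise, then greedily add classifiers so the ensemble's
# mean signature vector moves toward a reference point o with small positive
# components (every sample correct, but only by a small margin).
import numpy as np

def mdm_prune(predictions, y_val, n_select, p=0.075):
    """predictions: (n_classifiers, n_samples) array of predicted labels."""
    C = np.where(predictions == y_val, 1.0, -1.0)   # signature vectors
    o = np.full(C.shape[1], p)                       # reference point
    selected, remaining = [], list(range(C.shape[0]))
    while len(selected) < n_select:
        best = min(
            remaining,
            key=lambda i: np.linalg.norm(C[selected + [i]].mean(axis=0) - o),
        )
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: 5 classifiers, 8 validation samples, keep 3.
rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, size=8)
preds = rng.integers(0, 2, size=(5, 8))
print(mdm_prune(preds, y_val, n_select=3))
```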
“…For each cluster, the most accurate classifiers are picked. To subsequently create the ensemble, the most accurate classifiers are selected from the diverse classifiers. In this method, first, the k-means algorithm divides the validation patterns into k groups.…”
Section: Background Materials (mentioning)
Confidence: 99%
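
A minimal sketch of the clustering-and-selection scheme described in this excerpt follows: cluster the validation data with k-means, keep the most accurate base classifier per cluster, and route each test sample to the classifier of its nearest cluster. The pool, the value of k, and the data splits are illustrative.

```python
# Clustering-and-selection sketch: one "winning" classifier per k-means
# cluster of the validation set; a test sample is handled by the winner of
# its nearest cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1500, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

pool = BaggingClassifier(DecisionTreeClassifier(max_depth=5),
                         n_estimators=10, random_state=0).fit(X_train, y_train)

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_val)
labels = km.labels_

# Pick, for every cluster, the base classifier with the best local accuracy.
best_per_cluster = [
    max(pool.estimators_,
        key=lambda c: c.score(X_val[labels == g], y_val[labels == g]))
    for g in range(k)
]

# Classify each test sample with the winner of its nearest cluster.
test_clusters = km.predict(X_test)
y_pred = np.array([best_per_cluster[g].predict(x.reshape(1, -1))[0]
                   for g, x in zip(test_clusters, X_test)])
print("accuracy:", (y_pred == y_test).mean())
```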