The 2011 International Joint Conference on Neural Networks (IJCNN 2011)
DOI: 10.1109/ijcnn.2011.6033350
A method for dynamic ensemble selection based on a filter and an adaptive distance to improve the quality of the regions of competence

Abstract: Dynamic classifier selection systems aim to select the group of classifiers that is most adequate for a specific query pattern. This is done by defining a region around the query pattern and analyzing the competence of the classifiers in this region. However, these regions are often surrounded by noise, which can make classifier selection difficult. As a consequence, most dynamic selection systems perform no better than static selection. In this paper we demonstrate that the performance of dynamic selectio…

Cited by 27 publications (35 citation statements: 0 supporting, 35 mentioning, 0 contrasting), published between 2013 and 2023. References 30 publications (31 reference statements).
“…For each dataset, the experiments were conducted using 20 replications. For each replication, the datasets were divided using the holdout method [71] on the basis of 50% for training, 25% for the dynamic selection dataset, DSEL, and 25% for the test set, G. They were selected empirically based on previous publications [46,18,6]. Hence, the size of the meta-feature vector is 67 ((7 × 8) + 5 + 6).…”
Section: Experimental Protocol (mentioning)
confidence: 99%
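
To make the protocol in this excerpt concrete, here is a minimal sketch of the 50/25/25 holdout split into training, dynamic selection (DSEL), and test (G) sets; the variable names and the use of scikit-learn are assumptions for illustration, not the cited authors' code.

```python
# Hypothetical sketch of the 50/25/25 holdout protocol (assumed names).
from sklearn.model_selection import train_test_split

def holdout_split(X, y, seed=None):
    """Split data into 50% training, 25% DSEL, and 25% test (G)."""
    # First cut: 50% for training, 50% remainder.
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=0.5, random_state=seed, stratify=y)
    # Second cut: split the remainder in half -> 25% DSEL, 25% test.
    X_dsel, X_test, y_dsel, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed, stratify=y_rest)
    return (X_train, y_train), (X_dsel, y_dsel), (X_test, y_test)
```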
“…The same pool of classifiers is used for all techniques in order to ensure a fair comparison. For all techniques, the size of the region of competence, K, was set at 7 since it achieved the best result in previous experiments [46,6]. The results are shown in … As shown in Figure 6, the selected meta-features vary considerably according to different classification problems.…”
Section: Comparison With the State-of-the-art DES Techniques (mentioning)
confidence: 99%
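
The excerpt fixes the size of the region of competence at K = 7. As a sketch of how such a region is usually obtained (the k nearest neighbors of the query in DSEL), assuming a NumPy/scikit-learn setting and hypothetical names:

```python
# Illustrative region of competence: the K = 7 nearest DSEL neighbors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def region_of_competence(query, X_dsel, k=7):
    """Return the indices of the k DSEL samples closest to the query."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_dsel)
    _, idx = nn.kneighbors(np.atleast_2d(query))
    return idx[0]
```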
“…NNF) works as a noise filter near class boundaries [25]. … D (data set where the …) … generating the data set … Algorithm 2.…”
Section: B. Nearest Neighbor Filter (mentioning)
confidence: 99%
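
The (heavily garbled) excerpt above refers to a nearest neighbor filter (NNF) that removes noise near class boundaries. For illustration only, here is an edited-nearest-neighbor style filter in that spirit; it is a generic sketch, not necessarily the exact NNF of [25]:

```python
# Generic ENN-style noise filter (an assumption, not the exact NNF):
# a sample whose nearest neighbors mostly disagree with its label is
# treated as boundary noise and removed.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nn_filter(X, y, k=3):
    """Return (X, y) with suspected noisy samples removed."""
    X, y = np.asarray(X), np.asarray(y)
    # k + 1 neighbors because each point is its own nearest neighbor.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    neigh_labels = y[idx[:, 1:]]  # drop the self-neighbor
    # Keep a sample if at least half of its neighbors share its label.
    keep = (neigh_labels == y[:, None]).mean(axis=1) >= 0.5
    return X[keep], y[keep]
```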
“…Then, the selected classifier or EoC is used for the classification of all unseen test samples. In contrast, dynamic ensemble selection approaches (DES) [4,5,6,7,8,9,10,11,12,13] select a different classifier or a different EoC for each new test sample. DES techniques rely on the assumption that each base classifier is an expert in a different local region of the feature space [14].…”
Section: Introduction (mentioning)
confidence: 99%
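
As a concrete contrast with static selection, the following sketch selects a (possibly different) classifier for each test sample using a local-accuracy criterion in the spirit of OLA; the pool interface and names are assumptions, not a specific DES method from the cited papers:

```python
# Hypothetical per-sample dynamic selection with a local-accuracy
# criterion: pick the pool member most accurate on the query's
# region of competence in DSEL.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def des_predict(pool, X_dsel, y_dsel, X_test, k=7):
    X_dsel, y_dsel = np.asarray(X_dsel), np.asarray(y_dsel)
    nn = NearestNeighbors(n_neighbors=k).fit(X_dsel)
    _, regions = nn.kneighbors(X_test)
    preds = np.empty(len(X_test), dtype=y_dsel.dtype)
    for i, idx in enumerate(regions):
        # Local accuracy of each base classifier on the region.
        local_acc = [clf.score(X_dsel[idx], y_dsel[idx]) for clf in pool]
        best = pool[int(np.argmax(local_acc))]
        preds[i] = best.predict(X_test[i:i + 1])[0]
    return preds
```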
“…Most DES techniques [4,12,11,10,15,16,17,18] use estimates of the classifiers' local accuracy in small regions of the feature space surrounding the query instance as a search criterion to perform the ensemble selection. However, in our previous work [10], we demonstrated that the use of local accuracy estimates alone is insufficient to achieve results close to the Oracle performance. The Oracle is an abstract model defined in [19] which always selects the classifier that predicted the correct label for the given query sample, if such a classifier exists.…”
Section: Introduction (mentioning)
confidence: 99%
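
The Oracle bound described in this excerpt is straightforward to compute: a query counts as correctly classified if at least one classifier in the pool predicts its true label. A minimal sketch, with assumed variable names:

```python
# Oracle accuracy of a pool: the fraction of test samples for which
# at least one base classifier predicts the correct label.
import numpy as np

def oracle_accuracy(pool, X_test, y_test):
    # Predictions stacked as (n_classifiers, n_samples).
    all_preds = np.array([clf.predict(X_test) for clf in pool])
    hit = (all_preds == np.asarray(y_test)[None, :]).any(axis=0)
    return hit.mean()
```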