Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation 2008
DOI: 10.1145/1389095.1389229

Pareto analysis for the selection of classifier ensembles

Abstract: The overproduce-and-choose strategy involves generating an initial large pool of candidate classifiers and then testing different candidate ensembles in order to select the best-performing solution. The ensemble's error rate, ensemble size and diversity measures are the most frequent search criteria employed to guide this selection. By applying the error rate, we may accomplish the main objective in Pattern Recognition and Machine Learning, which is to find high-performance predictors. In term…
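As a rough illustration of the overproduce-and-choose strategy described in the abstract, the sketch below (Python; it assumes pool is a list of already-trained classifiers exposing a predict method and that labels are binary 0/1, which are assumptions for illustration) exhaustively enumerates small candidate ensembles on validation data and records the two most common search criteria, error rate and ensemble size. Exhaustive enumeration stands in for the evolutionary search actually used in the paper.

import numpy as np
from itertools import combinations

def majority_vote_error(members, X_val, y_val):
    # Error rate of the majority vote of the selected members on validation data.
    votes = np.array([m.predict(X_val) for m in members])
    majority = (votes.mean(axis=0) > 0.5).astype(int)  # binary 0/1 labels assumed
    return float(np.mean(majority != y_val))

def overproduce_and_choose(pool, X_val, y_val, max_size=5):
    # "Overproduce": pool is a large set of trained candidate classifiers.
    # "Choose": evaluate candidate ensembles, recording the search criteria
    # (error rate, ensemble size) used to guide selection.
    candidates = []
    for k in range(1, max_size + 1):
        for idx in combinations(range(len(pool)), k):
            members = [pool[i] for i in idx]
            candidates.append((idx, majority_vote_error(members, X_val, y_val), k))
    return candidates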

Cited by 12 publications (9 citation statements). References 14 publications (24 reference statements).
“…Several search algorithms have been applied in the literature for classifier selection, ranging from ranking the n best classifiers [19] to genetic algorithms (GAs) [28,33]. Ensemble combination performance [27], diversity measures [33,1,30] and ensemble size [23] are search criteria which are often employed. GAs are attractive since they allow the fairly easy implementation of classifier selection tasks as optimization processes [32].…”
Section: Introduction
confidence: 99%
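The observation that GAs cast classifier selection as an optimization process can be made concrete with a minimal single-objective sketch (Python/NumPy). The chromosome layout, truncation selection and bit-flip mutation rate below are illustrative assumptions, not the settings of the cited papers; val_predictions is assumed to be a cached (n_classifiers, n_samples) array of 0/1 validation predictions.

import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, val_predictions, y_val):
    # A chromosome is a binary mask over the pool; fitness is the
    # majority-vote error rate of the selected subset.
    if mask.sum() == 0:
        return 1.0  # penalise empty ensembles
    majority = (val_predictions[mask.astype(bool)].mean(axis=0) > 0.5).astype(int)
    return float(np.mean(majority != y_val))

def select_with_ga(val_predictions, y_val, pop_size=20, generations=50):
    n = val_predictions.shape[0]
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(m, val_predictions, y_val) for m in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]  # truncation selection
        children = parents.copy()
        flips = rng.random(children.shape) < 1.0 / n        # bit-flip mutation
        children[flips] ^= 1
        pop = np.vstack([parents, children])
    scores = np.array([fitness(m, val_predictions, y_val) for m in pop])
    return pop[scores.argmin()]  # best binary mask found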
“…Complexity metrics evaluate how complex the classifiers in the ensemble, or the ensemble as a whole, are. The most popular complexity metrics are the number of activated classifiers (for classifier selection) (Park & Cho, 2003b; Ishibuchi & Yamamoto, 2003; Dos Santos et al., 2008b; Trawiński et al., 2013; Trawiński et al., 2014) and the number of attributes used by the models induced by the base learners (Chen & Yao, 2006; Aliakbarian & Fanian, 2013; Tan et al., 2014; Chen et al., 2014; Zagorecki, 2014; Rapakoulia et al., 2014; Sikdar et al., 2015; Winkler et al., 2015). Other complexity metrics include the number of nodes in flexible neural trees (Ojha et al., 2017); the number of hidden neurons in a neural network (Connolly et al., 2013); the structural risk minimization principle (Garg & Lam, 2015); the number of support vectors in an SVM model (Rapakoulia et al., 2014); and the length of fuzzy rules (Ishibuchi & Yamamoto, 2003).…”
Section: Effectiveness, Diversity, Complexity and Efficiency
confidence: 99%
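Two of the complexity metrics listed in this statement are simple to state in code. The sketch below (Python/NumPy) assumes, for illustration only, that a selection is a binary mask over the pool and that each classifier's feature subset is available as a boolean mask.

import numpy as np

def ensemble_size(selection_mask):
    # Number of activated classifiers: the most common complexity
    # metric for classifier selection.
    return int(np.asarray(selection_mask).sum())

def attributes_used(selection_mask, feature_masks):
    # Number of distinct attributes used across the selected members;
    # feature_masks[i] is the boolean feature subset of classifier i.
    selected = np.asarray(selection_mask, dtype=bool)
    if not selected.any():
        return 0
    return int(np.any(np.asarray(feature_masks)[selected], axis=0).sum())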
“…Starting with only two classifiers (neural networks), the number of ensemble members is increased by adding classifiers that reduce the overall ensemble’s error rate. In Dos Santos et al. (2008b), the authors investigate the impact of combining the error rate (effectiveness), ensemble size (efficiency), and 12 diversity measures on the quality of static selection by using pairs of objectives. The authors also study conflicts between objectives, such as error rate/diversity measures and ensemble size/diversity measures.…”
Section: The Selection Stage of Ensemble Learning
confidence: 99%
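Among the 12 diversity measures investigated in Dos Santos et al. (2008b), pairwise measures such as disagreement are the simplest to compute. The sketch below (Python/NumPy) shows the classic mean pairwise disagreement; it is one standard measure and is not claimed to match the paper's exact formulations.

import numpy as np
from itertools import combinations

def mean_pairwise_disagreement(predictions):
    # predictions: array of shape (n_members, n_samples) of class labels.
    # For each pair of members, take the fraction of samples on which they
    # predict differently, then average over all pairs.
    preds = np.asarray(predictions)
    if preds.shape[0] < 2:
        return 0.0  # diversity is undefined for fewer than two members
    pairs = combinations(range(preds.shape[0]), 2)
    return float(np.mean([(preds[i] != preds[j]).mean() for i, j in pairs]))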
“…Tremblay et al. [19] employed a multi-objective genetic algorithm guided by objective functions that combine the error rate with four different diversity measures. Dos Santos et al. [20] likewise optimized a combination of ensemble error rate and diversity measures, but additionally included the ensemble size as a criterion in their analysis of the Pareto front.…”
Section: A. Related Work
confidence: 99%
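At its core, the Pareto-front analysis mentioned here reduces to filtering non-dominated candidates. A minimal dominance filter follows (Python/NumPy), under the assumption that every objective is arranged so that smaller is better (e.g. error rate, ensemble size, or one minus a diversity score).

import numpy as np

def pareto_front(objectives):
    # objectives: array of shape (n_candidates, n_objectives), all minimised.
    # A candidate is kept if no other candidate is at least as good on every
    # objective and strictly better on at least one.
    obj = np.asarray(objectives, dtype=float)
    keep = []
    for i, row in enumerate(obj):
        dominated = np.any(np.all(obj <= row, axis=1) & np.any(obj < row, axis=1))
        if not dominated:
            keep.append(i)
    return keep

For example, pareto_front([[0.10, 3], [0.12, 2], [0.12, 5]]) returns [0, 1]: the third candidate is dominated by the second on both error rate and ensemble size.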