The 2006 IEEE International Joint Conference on Neural Network Proceedings
DOI: 10.1109/ijcnn.2006.247331

An Evaluation of Over-Fit Control Strategies for Multi-Objective Evolutionary Optimization

Abstract: The optimization of classification systems is often confronted by the solution over-fit problem. Solution over-fit occurs when the optimized classifier memorizes the training data sets instead of producing a general model. This paper compares two validation strategies used to control the over-fit phenomenon in classifier optimization problems. Both strategies are implemented within the multi-objective NSGA-II and MOMA algorithms to optimize a Projection Distance classifier and a Multiple Layer Perceptron neural network…

Cited by 12 publications (21 citation statements). References 16 publications.
“…The global validation strategy detailed in [10] selected better solutions from the archive S than simply taking the record solution r obtained at the end of the optimization process. This reinforces the conclusion in [10] that the optimization of classification systems using wrapped classifiers is prone to solution over-fit.…”
Section: Discussion (confidence: 99%)
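To make the selection step behind this statement concrete, the following is a minimal sketch, not the cited paper's implementation, of how global validation picks the final solution: every candidate stored in the auxiliary archive S is re-evaluated on a held-out validation set, and the best validated candidate is preferred over the record solution r found on the optimization set alone. The evaluate callable, the solution encoding, and the validation split (X_val, y_val) are all illustrative assumptions.

    def select_final_solution(archive_S, record_r, evaluate, X_val, y_val):
        """Return the candidate with the lowest validation error.

        archive_S : solutions collected during the search (auxiliary archive S)
        record_r  : best solution according to the optimization set only
        evaluate  : callable(solution, X, y) -> error rate, lower is better
        """
        candidates = list(archive_S) + [record_r]
        return min(candidates, key=lambda s: evaluate(s, X_val, y_val))

A caller would pass the archive accumulated during the optimization run together with a validation split held out from the optimization data, so that the returned solution is the one that generalizes best rather than the one that fits the optimization set best.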
“…The RRT algorithm, detailed in Algorithm 1, produces the record solution r after a number of iterations. The algorithm is similar to a hill-climbing approach, but avoids local optima by allowing the search to move towards non-optimal solutions within a fixed deviation D. Earlier experiments indicated that the RRT algorithm over-fitted solutions during the optimization process. The global validation strategy discussed in [10] is used to avoid this effect, and Algorithm 1 includes support for this strategy. Given the initial solution i, the algorithm copies it to the record solution r and stores its evaluation value in RECORD.…”
Section: Optimization Algorithm (confidence: 99%)
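The quoted description maps onto a short sketch of Record-to-Record Travel (RRT) with a global-validation hook, under assumptions: a minimization setting (e.g. error rate on the optimization set), a neighbour move operator, and an archive S collecting candidates for later validation; none of these names come from the cited paper's code.

    def rrt(initial, fitness, neighbour, deviation_D, iterations, archive_S):
        record_r = initial          # record solution r
        RECORD = fitness(initial)   # its evaluation value
        current = initial
        for _ in range(iterations):
            candidate = neighbour(current)
            value = fitness(candidate)
            # Accept any candidate within the fixed deviation D of the
            # record, which lets the search pass through non-improving
            # solutions instead of stalling in a local optimum.
            if value < RECORD + deviation_D:
                current = candidate
            # Update the record whenever a strictly better solution appears.
            if value < RECORD:
                record_r, RECORD = candidate, value
            # Global-validation hook (an assumption here): archive the
            # candidate so the final solution can later be selected on a
            # held-out validation set rather than the optimization set.
            archive_S.append(candidate)
        return record_r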
“…There are three common methods for controlling overfitting in optimization systems [61]: Partial Validation (PV), Backwarding Validation (BV), and Global Validation (GV). In this work, we use the GV approach, since previous works in the literature demonstrate that GV is a more robust alternative for controlling overfitting in optimization techniques [49,61]. In the GV scheme (see Algorithm 1), at each generation the fitness of every particle S_i^g ∈ S is evaluated using the validation set, DSEL* (line 18 of the algorithm).…”
Section: Overfitting Control Scheme (confidence: 99%)
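A hedged sketch of the GV scheme described in this statement, placed in a generic population-based loop: update_population, val_fitness, and the DSEL_star validation set are illustrative assumptions standing in for the citing paper's particle-swarm machinery, not its actual API.

    def optimize_with_gv(population, update_population, val_fitness,
                         DSEL_star, generations):
        best_validated = None
        best_val = float("inf")
        for _ in range(generations):
            # Regular optimization step, driven by the optimization set.
            population = update_population(population)
            # GV step: evaluate every individual on the validation set and
            # keep the best validated solution seen over all generations.
            for individual in population:
                v = val_fitness(individual, DSEL_star)
                if v < best_val:
                    best_validated, best_val = individual, v
        # Return the validated solution rather than the best-on-training one.
        return best_validated

The key design choice GV embodies is that the validation set never drives the search itself; it is only used to decide which of the visited solutions is ultimately kept.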
“…Considering other methods applied to the same isolated digit dataset, we can find results varying from 99.16% to 99.37% [6,21-23], using classifiers such as MLP and SVM, as well as ensembles of MLPs, in a BL setting. Although HMMs perform worse than other classifiers in this task, recent research with EoHMMs demonstrated that, with improved codebooks, the recognition rates of HMM-based classifiers can be increased from 98.00% [8] to 98.86% [24].…”
Section: Experimental Evaluation (confidence: 99%)