2016 1st Conference on Swarm Intelligence and Evolutionary Computation (CSIEC)
DOI: 10.1109/csiec.2016.7482130

Ensemble classifiers with improved overfitting

Abstract: Overfitting has always been considered a challenging problem in the design and training of ensemble classifiers. The use of complex multiple classifiers may improve an ensemble's ability to partition a feature space with intertwined data and may drive the training error down to its minimum value. However, this success does not carry over to the test data. Ensemble classifiers are more prone to overfitting than single classifiers because ensemble classifiers are formed from several base cl…


Cited by 19 publications (15 citation statements)
References 13 publications (17 reference statements)
“…The grid-search method is also used to determine parameters. Secondly, cross validation is used to prevent model overfitting [24]. The K-fold cross validation method is used in the study [25].…”
Section: Random Forest Methods (mentioning)
confidence: 99%
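The workflow this statement describes — a grid search over hyperparameters, scored by K-fold cross-validation so that the chosen configuration is judged on held-out data rather than on the training set — can be sketched in plain Python. This is a minimal illustration, not the cited study's code; the `fit`/`score` callbacks, parameter grid, and fold count are hypothetical placeholders.

```python
from itertools import product
from statistics import mean

def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_val_score(fit, score, X, y, k=5):
    """Mean validation score over k folds: train on k-1 folds, score the held-out one."""
    folds = kfold_indices(len(X), k)
    scores = []
    for i, val_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        model = fit([X[j] for j in train_idx], [y[j] for j in train_idx])
        scores.append(score(model, [X[j] for j in val_idx], [y[j] for j in val_idx]))
    return mean(scores)

def grid_search(param_grid, make_fit, score, X, y, k=5):
    """Exhaustive search: return (params, cv_score) for the best CV score."""
    best = None
    for combo in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        cv = cross_val_score(make_fit(**params), score, X, y, k)
        if best is None or cv > best[1]:
            best = (params, cv)
    return best
```

Because every candidate configuration is evaluated only on folds it was not trained on, a model that merely memorizes the training data scores poorly and is not selected.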
“…Overfitting is another problem caused when the model learns too much from the training dataset, which makes it unable to produce the desired performance and results on the test set. This is also solved if features are selected based on their relevance and contribution to improving the performance of the system [22], [27]. In this study, we used Ruzzo-Tompa for feature selection, which selects relevant features by following a series of steps that find the optimal subset of features and discard the irrelevant features.…”
Section: A Stacked Genetic Algorithm For Optimization (mentioning)
confidence: 99%
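The Ruzzo-Tompa procedure this statement refers to finds all maximal-scoring subsequences of a score sequence in a single linear scan; applied to per-feature relevance scores, the positive-sum segments it returns are the retained features and everything else is discarded. A minimal sketch in plain Python — the example scores in the usage note are illustrative, not taken from the cited study:

```python
def ruzzo_tompa(scores):
    """Return all maximal-scoring subsequences as (start, end, score) triples.

    Ruzzo-Tompa linear scan: each candidate segment stores L (cumulative sum
    just before its start) and R (cumulative sum at its end).
    """
    subseqs = []          # disjoint candidate segments: [start, end, L, R]
    total = 0             # running cumulative sum over all scores
    for i, x in enumerate(scores):
        if x <= 0:
            total += x    # non-positive scores never start a segment
            continue
        cur = [i, i + 1, total, total + x]
        total += x
        while True:
            # rightmost earlier segment j with L_j < L_cur
            j = next((k for k in range(len(subseqs) - 1, -1, -1)
                      if subseqs[k][2] < cur[2]), None)
            if j is None or subseqs[j][3] >= cur[3]:
                subseqs.append(cur)   # cur stands on its own (for now)
                break
            # otherwise merge segment j, everything after it, and cur
            cur = [subseqs[j][0], cur[1], subseqs[j][2], cur[3]]
            del subseqs[j:]
    return [(s, e, R - L) for s, e, L, R in subseqs]
```

For example, `ruzzo_tompa([1, -1, 2])` keeps two separate segments, while `ruzzo_tompa([1, -0.5, 2])` merges everything into one segment, because bridging the small dip increases the total score.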
“…Another problem is how to find the optimal configuration of the network so that the optimization problem can be resolved. When the model learns too much from the training data, it overfits because it picks even small details from the training data, and when applied to the testing data, the results are not adequate [6], [27]. On the other hand, when the model has not learned enough from the training data, it underfits, and as a result, both the training data and testing data show poor results.…”
Section: Introduction (mentioning)
confidence: 99%
“…Overfitting is a circumstance where the neural network performs adequately on training data but deficient on generalization [52]. In other words, the network tends to learn the noise and irrelevant properties of training data which results in lower performance when tested with unseen data [53]. Cross-validation, early stopping and regularization are amongst the methods used in preventing overfitting [54].…”
Section: Stability (mentioning)
confidence: 99%
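Of the three countermeasures this statement lists, early stopping is the simplest to show concretely: stop training once validation loss has not improved for a fixed number of epochs, so the network never gets the chance to fit the noise in the training data. A minimal sketch in plain Python; the `step`/`val_loss` callbacks and the patience value are hypothetical, standing in for a real training loop.

```python
def train_with_early_stopping(step, val_loss, max_epochs=100, patience=5):
    """Stop when validation loss has not improved for `patience` epochs.

    step(epoch)     -- runs one training epoch (hypothetical callback)
    val_loss(epoch) -- validation loss after that epoch (hypothetical callback)
    Returns (best_epoch, best_loss) so the caller can restore the best weights.
    """
    best_epoch, best_loss, waited = 0, float("inf"), 0
    for epoch in range(1, max_epochs + 1):
        step(epoch)
        loss = val_loss(epoch)
        if loss < best_loss:
            best_epoch, best_loss, waited = epoch, loss, 0
        else:
            waited += 1
            if waited >= patience:
                break  # validation loss has plateaued: stop before overfitting
    return best_epoch, best_loss
```

The key design point is that the stopping criterion watches held-out data, not training loss: training loss keeps falling as the network memorizes noise, while validation loss turns upward at exactly the point where generalization starts to degrade.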