2014
DOI: 10.1007/978-3-319-09339-0_6

Combining Multi Classifiers Based on a Genetic Algorithm – A Gaussian Mixture Model Framework

Abstract: Combining the outputs of different classifiers to achieve high accuracy in classification tasks is one of the most active research areas in ensemble methods. Although many state-of-the-art approaches have been introduced, no single method outperforms the others across numerous data sources. With the aim of introducing an effective classification model, we propose a Gaussian Mixture Model (GMM) based method that combines the outputs of base classifiers (called meta-data or Level1 data) resulting from the Stacking Algorithm…
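The abstract's core idea — modeling the class-conditional distribution of stacking meta-data with Gaussian mixtures and classifying by maximum likelihood — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact algorithm (which additionally employs a genetic algorithm); the function names and the synthetic Level1 data are hypothetical.

```python
import numpy as np

def fit_diag_gmm(X, k=2, n_iter=50, seed=0, eps=1e-6):
    """Fit a k-component diagonal-covariance GMM to X with plain EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, size=k, replace=False)]       # init means from data points
    var = np.tile(X.var(axis=0) + eps, (k, 1))         # init variances from data
    w = np.full(k, 1.0 / k)                            # uniform mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities from per-component log-densities
        diff = X[:, None, :] - mu[None]                                   # (n, k, d)
        log_c = -0.5 * (np.log(2 * np.pi * var)[None] + diff**2 / var[None]).sum(-1)
        log_r = np.log(w)[None] + log_c
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)                                 # (n, k)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0) + eps
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X**2) / nk[:, None] - mu**2 + eps
    return w, mu, var

def gmm_loglik(X, params):
    """Per-sample log-likelihood under a fitted diagonal GMM."""
    w, mu, var = params
    diff = X[:, None, :] - mu[None]
    log_c = -0.5 * (np.log(2 * np.pi * var)[None] + diff**2 / var[None]).sum(-1)
    a = np.log(w)[None] + log_c
    m = a.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=1, keepdims=True))).ravel()

# Hypothetical Level1 (meta-)data: noisy class-probability outputs of two
# base classifiers on a two-class problem (4 columns = 2 learners x 2 classes).
rng = np.random.default_rng(1)
meta0 = np.clip(rng.normal([0.8, 0.2, 0.7, 0.3], 0.1, size=(200, 4)), 0, 1)
meta1 = np.clip(rng.normal([0.2, 0.8, 0.3, 0.7], 0.1, size=(200, 4)), 0, 1)

params = [fit_diag_gmm(meta0), fit_diag_gmm(meta1)]    # one GMM per class
X_test = np.vstack([meta0[:20], meta1[:20]])
scores = np.column_stack([gmm_loglik(X_test, p) for p in params])
pred = scores.argmax(axis=1)                           # maximum-likelihood class
```

Modeling each class's meta-data with its own mixture lets the combiner capture multi-modal classifier-output patterns that a single Gaussian (or a linear combiner) would miss.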

Cited by 8 publications (9 citation statements)
References 20 publications (10 reference statements)

“…Decision Template. The classification accuracy of our proposed ensemble system can be further improved by applying classifier and feature selection on the ensemble system as in [27][28][29][30].…”
Section: Discussion
confidence: 99%
“…The outputs of these classifiers are then combined to produce the final decision. Several state-of-the-art ensemble methods in this category include AdaBoost [20], Bagging [21], Random Forest [22], and Random Subspace [23]. • Different classifiers (also called the Heterogeneity scenario [19]): a set of different learning algorithms is used on the same training dataset to generate different base classifiers; a combiner then makes a decision from the outputs (called Level1 data or meta-data) of these classifiers [24][25][26][27][28][29][30]. This approach focuses more on the algorithms that combine meta-data to achieve higher accuracy than any single base classifier.…”
confidence: 99%
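The heterogeneity scenario described in the statement above — several different learning algorithms trained on the same data, with their out-of-fold outputs forming the Level1 data — can be sketched as follows. This is a toy illustration: the two base learners and all names are assumptions, standing in for whatever algorithms an actual ensemble would use.

```python
import numpy as np

class NearestCentroid:
    """Toy base learner: softmax over negative distances to class centroids."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict_proba(self, X):
        d = ((X[:, None, :] - self.centroids_[None]) ** 2).sum(-1)
        e = np.exp(-(d - d.min(axis=1, keepdims=True)))   # stabilised softmax
        return e / e.sum(axis=1, keepdims=True)

class DiagGaussianNB:
    """Toy base learner: per-class diagonal-Gaussian likelihoods (naive Bayes)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes_])
        self.logprior_ = np.log([np.mean(y == c) for c in self.classes_])
        return self
    def predict_proba(self, X):
        diff = X[:, None, :] - self.mu_[None]
        ll = -0.5 * (np.log(2 * np.pi * self.var_)[None]
                     + diff**2 / self.var_[None]).sum(-1)
        ll = ll + self.logprior_[None]
        ll -= ll.max(axis=1, keepdims=True)
        p = np.exp(ll)
        return p / p.sum(axis=1, keepdims=True)

def stacking_meta_data(learners, X, y, n_folds=5, seed=0):
    """Level1 data: out-of-fold class probabilities from each base learner."""
    n = len(X)
    n_classes = len(np.unique(y))
    idx = np.random.default_rng(seed).permutation(n)
    meta = np.zeros((n, len(learners) * n_classes))
    for fold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, fold)                   # indices outside the fold
        for j, Learner in enumerate(learners):
            model = Learner().fit(X[train], y[train])
            meta[fold, j * n_classes:(j + 1) * n_classes] = model.predict_proba(X[fold])
    return meta

# Two different learning algorithms on the same synthetic training set
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (60, 3)), rng.normal(2, 1, (60, 3))])
y = np.repeat([0, 1], 60)
meta = stacking_meta_data([NearestCentroid, DiagGaussianNB], X, y)  # (120, 4)
```

Using out-of-fold predictions (rather than in-sample ones) keeps the meta-data honest: the combiner never sees probabilities a base classifier produced for points it was trained on.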
“…Moreover, the estimate of the joint distribution is obtained by aggregating multiple models associated with each feature. Since much research [52][53][54][55][56][57][58][59][60] has shown that aggregating multiple models can improve the classification accuracy, the 1-dependence method is expected to enhance the performance of the system.…”
Section: Accepted Manuscript
confidence: 99%
“…multiple models in an ensemble system can usually improve the classification accuracy [52][53][54][55][56][57][58][59][60].…”
Section: Accepted Manuscript
confidence: 99%
“…This is known as feature selection for the ensemble [5]. Similar approaches can also be found in the selection of the meta-classifier [6] and meta-data [7,8] for the ensemble system. Finally, we usually use pre-selected parameters for the learning algorithm and the combining algorithm when training the base classifiers and the meta-classifier.…”
Section: Introduction
confidence: 98%
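The classifier and feature selection mentioned in the statements above (and the genetic algorithm in the paper's title) reduce to searching over bitmasks of which components to keep. The sketch below is a generic genetic-algorithm skeleton under assumptions: the fitness function is supplied by the caller (in practice, the validation accuracy of the ensemble built from the selected parts), and here it is a toy score measuring agreement with a hidden target mask.

```python
import numpy as np

def genetic_select(fitness, n_bits, pop_size=30, n_gens=40, p_mut=0.05, seed=0):
    """Generic GA over 0/1 masks, e.g. which base classifiers/features to keep.
    `fitness` maps a mask to a score to be maximised."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(n_gens):
        scores = np.array([fitness(ind) for ind in pop])
        def tournament():
            # binary tournament selection: better of two random individuals
            i, j = rng.integers(0, pop_size, size=2)
            return pop[i] if scores[i] >= scores[j] else pop[j]
        children = []
        for _ in range(pop_size):
            a, b = tournament(), tournament()
            cut = rng.integers(1, n_bits)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_bits) < p_mut             # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.array(children)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

# Toy fitness: agreement with a hidden "ideal" selection mask; a real system
# would instead evaluate the ensemble assembled from the selected components.
target = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
best_mask, best_score = genetic_select(lambda m: int((m == target).sum()),
                                       n_bits=10)
```

Because each fitness evaluation may require retraining and validating an ensemble, small populations and caching of already-evaluated masks are common practical choices.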