2014
DOI: 10.1155/2014/376950

A Weighted Voting Classifier Based on Differential Evolution

Abstract: Ensemble learning employs multiple individual classifiers and combines their predictions, which can achieve better performance than a single classifier. Considering that different base classifiers make different contributions to the final classification result, this paper assigns greater weights to the classifiers with better performance and proposes a weighted voting approach based on differential evolution. After optimizing the weights of the base classifiers by differential evolution, the proposed meth…
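To make the combination rule concrete, here is a minimal Python sketch of the idea the abstract describes: independently trained base classifiers cast votes, each vote is scaled by a weight, and differential evolution searches for the weight vector that maximizes validation accuracy. The dataset, base classifiers, weight bounds, and the use of SciPy's differential_evolution are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: weighted-voting ensemble with weights tuned by differential evolution.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Train the base classifiers independently on the same training data.
base_classifiers = [DecisionTreeClassifier(random_state=0),
                    GaussianNB(),
                    KNeighborsClassifier()]
for clf in base_classifiers:
    clf.fit(X_train, y_train)

n_classes = len(np.unique(y))

def weighted_vote(weights, X):
    """Combine base-classifier votes, each vote scaled by its classifier's weight."""
    votes = np.zeros((X.shape[0], n_classes))
    for w, clf in zip(weights, base_classifiers):
        preds = clf.predict(X)
        votes[np.arange(X.shape[0]), preds] += w
    return votes.argmax(axis=1)

def fitness(weights):
    """DE minimizes the objective, so return the validation error of the ensemble."""
    acc = np.mean(weighted_vote(weights, X_val) == y_val)
    return 1.0 - acc

# Search the weight space with differential evolution; one weight per classifier.
result = differential_evolution(fitness,
                                bounds=[(0.0, 1.0)] * len(base_classifiers),
                                seed=0, maxiter=50, polish=False)
print("optimized weights:", result.x)
print("validation accuracy:", 1.0 - result.fun)
```

Because the accuracy objective is non-differentiable in the weights, a population-based search such as differential evolution is a natural fit here; any optimizer that only needs objective evaluations could be substituted.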

Cited by 68 publications (60 citation statements)
References 18 publications (25 reference statements)

Citation statements (ordered by relevance):
“…C is a diagonal matrix with diagonal elements $C_r$, $1 \leq r \leq l$, which are defined as below (from a second-order Taylor approximation): $C_r = \dfrac{\gamma_r\left(4u_r^2 + 8u_r^3 + 4u_r^4 + \beta u_r + 2\beta u_r^3 + \beta u_r^5\right)}{2\left(u_r^3 + u_r^2 + u_r + 1\right)^2}$ (19), where $u_r = e^{w_r \gamma_r}$. Here, $\beta$ is a positive constant, $\gamma_r = \gamma(|w_r| + \varepsilon)^{-1}$ is always positive since $\gamma > 0$, and $u_r$ is always positive since $w_r \gamma_r \geq 0$.…”
Section: Results
confidence: 99%
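The quoted formula is self-contained enough to evaluate numerically. The following small Python sketch computes the diagonal entries $C_r$ directly from the definitions in the excerpt; the values of w, γ, β, and ε below are arbitrary placeholders chosen only for illustration.

```python
# Sketch: evaluate the diagonal entries C_r of the matrix C quoted above.
import numpy as np

def diag_C(w, gamma=1.0, beta=1.0, eps=1e-8):
    gamma_r = gamma / (np.abs(w) + eps)   # always positive since gamma > 0
    u = np.exp(w * gamma_r)               # u_r = exp(w_r * gamma_r), always positive
    numerator = gamma_r * (4*u**2 + 8*u**3 + 4*u**4
                           + beta*u + 2*beta*u**3 + beta*u**5)
    denominator = 2.0 * (u**3 + u**2 + u + 1)**2
    return numerator / denominator        # element r is C_r

w = np.array([0.5, 1.2, 2.0])             # placeholder (non-negative) weights
C = np.diag(diag_C(w))                    # build the diagonal matrix C
print(C)
```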
“…Zhang et al [3] proposed Differential Evolution for finding suitable weights for ensemble base classifiers. Similar to most heuristic solution techniques, they did not explicitly define cost function, but use classification accuracy for fitness function.…”
Section: Related Work: Ensembles That Combine Pre-trained Classifiersmentioning
confidence: 99%
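For readers unfamiliar with the search mechanism this excerpt refers to, here is a minimal sketch of the classic DE/rand/1/bin step applied to a population of candidate weight vectors: each candidate is perturbed by the scaled difference of two other candidates and recombined by binomial crossover. The population size, mutation factor F, and crossover rate CR are illustrative assumptions, not values reported by the cited work.

```python
# Sketch: one DE/rand/1/bin trial-vector construction over candidate weight vectors.
import numpy as np

rng = np.random.default_rng(0)
n_classifiers = 5
population = rng.uniform(0.0, 1.0, size=(20, n_classifiers))  # candidate weight vectors
F, CR = 0.8, 0.9                                              # mutation factor, crossover rate

def de_trial(i):
    """Build one trial weight vector for population member i."""
    others = np.delete(np.arange(len(population)), i)
    a, b, c = population[rng.choice(others, 3, replace=False)]
    mutant = a + F * (b - c)                          # differential mutation
    cross = rng.random(n_classifiers) < CR            # binomial crossover mask
    cross[rng.integers(n_classifiers)] = True         # keep at least one mutant component
    trial = np.where(cross, mutant, population[i])
    return np.clip(trial, 0.0, 1.0)                   # stay inside the weight bounds

# Selection (not shown): the trial replaces member i only if its fitness, here
# the classification accuracy of the weighted ensemble, is at least as good.
print(de_trial(0))
```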
“…The main aim of ensemble learning is to weight several individual classifiers and combine their predictions, which outperforms the prediction of any individual classifier [31]. Majority voting and weighted majority voting are the most popular combination schemes and are widely used in ensemble classification [39]. The simple majority voting scheme selects the predicted class that receives the most votes among the alternatives [40].…”
Section: F Ensemble Classifiers
confidence: 99%
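The difference between the two schemes mentioned in this excerpt is easy to show on a toy case: simple majority voting counts one vote per classifier, while weighted majority voting scales each vote by the classifier's weight. The predictions and weights below are made up for illustration.

```python
# Sketch: simple majority voting vs. weighted majority voting for one test instance.
import numpy as np

def majority_vote(predictions, n_classes):
    """Simple majority voting: the class with the most votes wins."""
    return np.bincount(predictions, minlength=n_classes).argmax()

def weighted_majority_vote(predictions, weights, n_classes):
    """Weighted majority voting: each vote counts as much as its classifier's weight."""
    return np.bincount(predictions, weights=weights, minlength=n_classes).argmax()

preds = np.array([0, 1, 1])          # three base classifiers' predicted classes
weights = np.array([0.9, 0.2, 0.3])  # e.g., weights found by differential evolution

print(majority_vote(preds, n_classes=2))                    # -> 1 (two votes beat one)
print(weighted_majority_vote(preds, weights, n_classes=2))  # -> 0 (0.9 outweighs 0.5)
```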
“…We identified the favorable weights of all base classifiers using the approaches presented in Ref. 39. The main steps are given in Table 2.…”
Section: Classification System Construction
confidence: 99%