2019
DOI: 10.1007/978-3-030-36708-4_50

Evolving an Optimal Decision Template for Combining Classifiers

Abstract: In this paper, we aim to develop an effective combining algorithm for ensemble learning systems. The Decision Template method, one of the most popular combining algorithms for ensemble systems, does not perform well on certain datasets, such as those with imbalanced data. Moreover, the point estimate obtained by averaging the outputs of the base classifiers in the Decision Template method is sometimes not a good representation, especially for skewed datasets. Here we propose to search for an o…
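For context on the combining rule the abstract refers to, the sketch below shows the classical Decision Template combiner: each class template is the average decision profile of that class's training samples, and a new sample is assigned to the class whose template is closest under squared Euclidean distance. The array shapes and function names are illustrative assumptions, not the paper's implementation (which searches for an optimal template rather than averaging).

import numpy as np

def build_decision_templates(profiles, labels, n_classes):
    # profiles: (n_samples, K, M) soft outputs of the K base classifiers on the training set
    # labels:   (n_samples,) true class indices in 0..n_classes-1
    # The template of class c is the mean decision profile over the samples of class c.
    templates = np.zeros((n_classes,) + profiles.shape[1:])
    for c in range(n_classes):
        templates[c] = profiles[labels == c].mean(axis=0)
    return templates

def predict_with_templates(profile, templates):
    # Assign the class whose template is closest (squared Euclidean) to the sample's profile.
    distances = ((templates - profile) ** 2).sum(axis=(1, 2))
    return int(np.argmin(distances))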

Cited by 6 publications (4 citation statements) | References 15 publications

“…in which P_k(y_m | I(i, j)) is the probability that the pixel I(i, j) belongs to the class label y_m, given by the classifier generated by using K_k, for each k = 1, ..., K and m = 1, ..., M [12], [13]. The prediction for all images in the training set D is given by a…”
Section: Proposed Methods
confidence: 99%
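The statement above describes collecting the per-pixel class probabilities P_k(y_m | I(i, j)) from the K base classifiers. A minimal way to assemble that decision profile for one pixel is sketched below; the predict_proba interface of the base classifiers and the feature-vector representation of a pixel are assumptions made for illustration.

import numpy as np

def pixel_decision_profile(classifiers, pixel_features):
    # classifiers:    K fitted base models, each assumed to expose predict_proba
    # pixel_features: feature vector describing pixel I(i, j)
    # Returns a (K, M) array whose k-th row holds P_k(y_m | I(i, j)) for m = 1, ..., M.
    x = np.asarray(pixel_features).reshape(1, -1)
    return np.vstack([clf.predict_proba(x)[0] for clf in classifiers])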
“…The next step is to train the combining algorithm on P. There are two combining models developed for ensemble systems, namely the representation-based model and the weighted combining-based model [13]…”
Section: Proposed Methods
confidence: 99%
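To make the two families concrete, the sketch below contrasts a representation-based combiner (matching a sample's decision profile against stored class templates) with a weighted combining-based one (a weighted sum of the base classifiers' outputs). Both functions and their signatures are illustrative assumptions rather than the cited papers' code.

import numpy as np

def representation_based_combine(profile, templates):
    # Representation-based model: choose the class whose stored template
    # (e.g. a decision template) is closest to the (K, M) decision profile.
    distances = ((templates - profile) ** 2).sum(axis=(1, 2))
    return int(np.argmin(distances))

def weighted_combine(profile, weights):
    # Weighted combining-based model: score each class by a weighted sum of the
    # K classifiers' soft outputs (weights has shape (K,)), then take the argmax.
    scores = weights @ profile
    return int(np.argmax(scores))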
“…in which ⟦·⟧ returns 1 if the condition is true, otherwise returns 0. This loss function is the classification error rate, one of the most popular performance metrics in the literature [21]-[23]. The loss on the training set associated with w is given by:…”
Section: Proposed Methods: A General Description
confidence: 99%
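As a concrete reading of the quoted loss, the sketch below computes the empirical 0/1 loss (classification error rate) of a weighted combiner for a given weight vector w; the weighted-sum combining rule and the array shapes are assumptions for illustration.

import numpy as np

def zero_one_loss(w, profiles, labels):
    # w:        (K,) weight per base classifier, the vector being evaluated
    # profiles: (n_samples, K, M) soft outputs of the K base classifiers
    # labels:   (n_samples,) true class indices
    # The indicator [[prediction != label]] is 1 on a mistake and 0 otherwise,
    # so its mean over the training set is the classification error rate.
    scores = np.einsum("k,nkm->nm", w, profiles)
    predictions = scores.argmax(axis=1)
    return float(np.mean(predictions != labels))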
“…SELECTION: We utilize the roulette wheel selection procedure to balance fitness-based criteria and randomness [21]. At each generation, a number of parents are selected based on their fitness values to generate offspring.…”
Section: Optimisation Approach
confidence: 99%
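The roulette wheel selection step mentioned above can be sketched as follows: each candidate is drawn with probability proportional to its fitness, which is what balances fitness pressure with randomness. The encoding of individuals and the fitness function are problem-specific and assumed here.

import numpy as np

def roulette_wheel_select(population, fitnesses, n_parents, rng=None):
    # Draw n_parents individuals (with replacement) with probability
    # proportional to their non-negative fitness values.
    rng = np.random.default_rng() if rng is None else rng
    fitnesses = np.asarray(fitnesses, dtype=float)
    probabilities = fitnesses / fitnesses.sum()  # slices of the "wheel"
    indices = rng.choice(len(population), size=n_parents, p=probabilities)
    return [population[i] for i in indices]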