2020
DOI: 10.34028/iajit/17/4/10
Enhanced Bagging (eBagging): A Novel Approach for Ensemble Learning

Abstract: Bagging is one of the well-known ensemble learning methods, which combines several classifiers trained on different subsamples of the dataset. However, a drawback of bagging is its random selection, where the classification performance depends on chance to choose a suitable subset of training objects. This paper proposes a novel modified version of bagging, named enhanced Bagging (eBagging), which uses a new mechanism (error-based bootstrapping) when constructing training sets in order to cope with this problem.
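For context, the snippet below is a minimal sketch of the standard bagging baseline in scikit-learn, i.e., the scheme whose purely random bootstrap selection the abstract identifies as a drawback. It is not the authors' eBagging implementation; the dataset and parameters are illustrative assumptions.

# Minimal sketch of standard bagging (NOT the authors' eBagging);
# dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Each base learner (a decision tree by default) is trained on a bootstrap
# sample drawn uniformly at random with replacement -- the "random selection"
# the abstract points to as a weakness.
bagging = BaggingClassifier(n_estimators=50, bootstrap=True, random_state=0)
print("bagging CV accuracy:", cross_val_score(bagging, X, y, cv=5).mean())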

Cited by 31 publications (22 citation statements)
References: 0 publications

Citation statements (ordered by relevance):
“…In ensemble learning prediction, instead of using one algorithm to build the forecasting model, multiple learning algorithms are needed to train its base models ( Wang and Srinivasan, 2017 ). The commonly used ensemble techniques are bagging ( Tuysuzoglu and Birant, 2020 ), boosting ( Kadkhodaei et al, 2020 ), voting ( Tsai, 2019 ), and stacking ( Mahendran et al, 2020 ). Among them, bagging and boosting are two of the most widely-used ensemble learning methods because of their theoretical superiority and strong experimental performance ( Oza, 2005 ), with Random Forest (RF) and Extreme Gradient Boosting (XGBoost) as the representative algorithms for each, respectively.…”
Section: Introduction
Confidence: 99%
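As a concrete illustration of the two representative algorithms named in this statement, the sketch below compares a Random Forest (bagging family) with a gradient boosting model (boosting family) on synthetic data. scikit-learn's GradientBoostingClassifier stands in for XGBoost so the snippet needs no extra dependency; the dataset and settings are assumptions for illustration only.

# Illustrative comparison of the bagging and boosting representatives named above;
# GradientBoostingClassifier is used as a stand-in for XGBoost.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic data so the sketch is self-contained.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "Random Forest (bagging family)": RandomForestClassifier(n_estimators=100, random_state=0),
    "Gradient Boosting (boosting family)": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())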
“…Bagging is the most famous representative of ensemble learning. The core idea of bagging is to obtain an aggregated predictor by using a combination rule ( Tuysuzoglu and Birant, 2020 ). In bagging, given a data set containing m samples, one sample is randomly picked into the sampling set for processing and then put back into the original dataset, so that the sample still has the possibility to be selected in the next sampling round.…”
Section: Introduction
Confidence: 99%
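A hand-rolled illustration of the sampling-with-replacement step this statement describes (the dataset size and variable names are illustrative, not from the paper):

# Bootstrap sampling: draw m indices uniformly at random WITH replacement,
# so any sample may appear several times or not at all in a given round.
import numpy as np

rng = np.random.default_rng(0)
m = 10                                # size of the original dataset
indices = rng.integers(0, m, size=m)  # one bootstrap sample of indices

print("bootstrap indices:    ", indices)
print("distinct samples kept:", np.unique(indices).size, "of", m)
# On average about 63.2% of the original samples appear in each bootstrap set;
# the duplicates and omissions are exactly the randomness the quote describes.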
“…The disadvantage with the traditional bootstrap method is that training subsets produced by random selection with replacement are not especially concentrated on misclassified instances. Tuysuzoglu and Birant 2020 [34] propose a novel modified version of bagging, named enhanced bagging (eBagging), which uses a new bootstrapping method, referred to as prediction error-based bootstrapping (eBootstrapping).…”
Section: Introduction
Confidence: 99%
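The sketch below illustrates only the general idea of biasing a bootstrap distribution toward misclassified instances. It is an assumed, simplified stand-in, not the authors' eBootstrapping procedure; the probe model and the weighting scheme are hypothetical choices.

# Hypothetical error-biased bootstrap sketch (NOT the authors' eBootstrapping).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
rng = np.random.default_rng(0)

# A preliminary (probe) classifier flags currently misclassified instances.
probe = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
misclassified = probe.predict(X) != y

# Hypothetical weighting: misclassified instances get a larger sampling weight,
# then the bootstrap sample is drawn from this error-biased distribution.
weights = np.where(misclassified, 3.0, 1.0)
weights /= weights.sum()
idx = rng.choice(len(X), size=len(X), replace=True, p=weights)
X_boot, y_boot = X[idx], y[idx]

print("misclassified fraction, original :", misclassified.mean())
print("misclassified fraction, bootstrap:", misclassified[idx].mean())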
“…Boosting is an iterative approach combining various weak learners, which results in low training error [33], [35]. It is performed by sequentially updating selected instances to the ensemble subspace by giving more weight to difficult examples, i.e., the most informative instances, which are not correctly classified in the previous steps (Tuysuzoglu and Birant 2020 [34]). Weighted majority voting is applied as the combination rule for the ensemble outputs.…” (Footnote in the citing source: www.ufjf.br/pgcc/dissertacoes/)
Section: Introduction
Confidence: 99%
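A minimal sketch of the boosting scheme this statement describes, using AdaBoost as the canonical example of reweighting misclassified instances and combining weak learners by weighted majority vote; the dataset and parameters are illustrative assumptions.

# AdaBoost sketch: weak learners are trained sequentially, misclassified
# instances gain weight, and predictions are combined by weighted majority vote.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# AdaBoost's default weak learner is a depth-1 decision tree ("stump").
boosted = AdaBoostClassifier(n_estimators=100, random_state=0)
print("AdaBoost CV accuracy:", cross_val_score(boosted, X, y, cv=5).mean())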