2017 European Modelling Symposium (EMS)
DOI: 10.1109/ems.2017.13

Predictive Ensemble Modelling: Experimental Comparison of Boosting Implementation Methods

Abstract: This paper presents an empirical comparison of boosting implemented by reweighting and by resampling. The goal of this paper is to determine which of the two methods performs better. In the study, we used four algorithms, namely Decision Stump, Neural Network, Random Forest and Support Vector Machine, as base classifiers, and AdaBoost as the technique to develop the various ensemble models. We applied 10-fold cross-validation in measuring and evaluating the performance metrics of the models. The resul…
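The "reweighting" implementation the abstract refers to can be sketched as follows: each boosting round fits a weak learner on the *weighted* training set (every example is seen, with its current weight), rather than drawing a new bootstrap sample. This is a minimal illustrative sketch with an invented toy dataset and a Decision Stump base learner, not the paper's actual experimental setup:

```python
import math

# Toy 1-D dataset (invented for illustration): labels in {-1, +1}.
X = [0.1, 0.2, 0.3, 0.45, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
y = [1, 1, 1, -1, -1, -1, 1, 1, 1, 1]

def stump_predict(threshold, polarity, x):
    # Decision stump: predict +polarity on one side of the threshold.
    return polarity if x >= threshold else -polarity

def best_stump(X, y, w):
    # Exhaustively pick the stump minimising the *weighted* error.
    # This is the reweighting variant: no resampling, weights do the work.
    best = None
    for t in X:
        for polarity in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(t, polarity, xi) != yi)
            if best is None or err < best[0]:
                best = (err, t, polarity)
    return best

def adaboost(X, y, rounds=20):
    n = len(X)
    w = [1.0 / n] * n          # uniform initial weights
    ensemble = []              # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        err, t, pol = best_stump(X, y, w)
        err = max(err, 1e-10)  # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Reweight: misclassified examples gain weight, correct ones lose it.
        w = [wi * math.exp(-alpha * yi * stump_predict(t, pol, xi))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]  # renormalise to a distribution
    return ensemble

def predict(ensemble, x):
    # Weighted majority vote of all stumps.
    score = sum(a * stump_predict(t, p, x) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

model = adaboost(X, y)
correct = sum(predict(model, xi) == yi for xi, yi in zip(X, y))
print(f"{correct} of {len(X)} training points correct")
```

In the resampling variant compared in the paper, the `best_stump` call would instead be fit on a fresh sample drawn from the training set with probabilities proportional to `w`.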

Cited by 2 publications (1 citation statement)
References 18 publications
“…In Bagging, the combination of weak classifiers is used to improve the classification accuracy [13]. The idea of bagging is to fit several independent models and "average" their predictions in order to obtain a model with a lower variance [5]. Bagging can be applied to any type of machine learning algorithm, but it is particularly effective with decision trees, which are known to have high variance.…”
Section: Bagging
confidence: 99%
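The citing work's description of bagging — fit several independent models on bootstrap samples, then average (here, majority-vote) their predictions to reduce variance — can be sketched in a few lines. The dataset, sample count, and stump learner below are invented for illustration:

```python
import random

random.seed(0)

# Toy dataset (invented): 1-D points labelled by a threshold at 0.5.
data = [(i / 20, 1 if i / 20 >= 0.5 else 0) for i in range(20)]

def fit_stump(sample):
    # High-variance base learner: a threshold classifier chosen by
    # exhaustive search over the sample's own x-values.
    best = None
    for t, _ in sample:
        err = sum((x >= t) != bool(label) for x, label in sample)
        if best is None or err < best[0]:
            best = (err, t)
    return best[1]

def bagging_fit(data, n_models=25):
    models = []
    for _ in range(n_models):
        # Bootstrap: draw with replacement, same size as the original set,
        # so each base learner sees a slightly different dataset.
        sample = [random.choice(data) for _ in data]
        models.append(fit_stump(sample))
    return models

def bagging_predict(models, x):
    # "Average" the predictions via majority vote.
    votes = sum(1 if x >= t else 0 for t in models)
    return 1 if votes * 2 >= len(models) else 0

models = bagging_fit(data)
acc = sum(bagging_predict(models, x) == label for x, label in data) / len(data)
print(f"training accuracy: {acc:.2f}")
```

Each individual stump varies with its bootstrap sample; the vote smooths that variance out, which is exactly why the excerpt notes bagging pairs well with high-variance learners like decision trees.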