2013
DOI: 10.1017/s0269888913000313

Bagging and boosting variants for handling classifications problems: a survey

Abstract: Bagging and boosting are two of the most well-known ensemble learning methods, owing to their theoretical performance guarantees and strong experimental results. Because bagging and boosting form an effective and open framework, several researchers have proposed variants of them, some of which achieve lower classification error than the original versions. This paper summarizes these variants and categorizes them into groups. We hope that the references cited cover the major theoretical issues, a…
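The survey itself is prose-only; as a minimal, hedged illustration of the two ensemble families it covers, the sketch below trains a bagged tree ensemble and an AdaBoost ensemble on synthetic data with scikit-learn. The dataset and hyperparameters are illustrative assumptions, not drawn from the paper.

```python
# Minimal sketch (not from the survey) contrasting the two ensemble
# families it covers, using scikit-learn's stock implementations.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Illustrative synthetic data; any binary classification set would do.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Bagging: train each tree on a bootstrap resample, aggregate by voting.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            random_state=0)

# Boosting: train weak learners sequentially, reweighting the examples
# the current ensemble misclassifies (AdaBoost uses stumps by default).
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

Roughly, bagging reduces variance by averaging independently trained models, while boosting reduces bias by focusing successive learners on the hardest examples; the variants the survey catalogs modify one or both of these mechanisms.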

Cited by 50 publications (22 citation statements) | References 150 publications
“…Boosting significantly improves the accuracy of a base-level binary classifier (weak learner) and can learn complex nonlinear decision boundaries. Boosting has been widely and successfully applied to many fields [56]–[59]. Inspired by [60], we implement a LogitBoost algorithm which uses adaptive Newton steps to fit an adaptive symmetric logistic model with maximum likelihood.…”
Section: Classification
Confidence: 99%
“…Boosting significantly improves the accuracy of a base-level binary classifier (weak learner) and can learn complex non-linear decision boundaries. It has been widely and successfully applied in many fields [26], [27]. Inspired by [28], we implement a LogitBoost algorithm, which provides probability distributions of multi-class problems with decision stumps as the weak learner.…”
Section: Classification Methods
Confidence: 99%
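Both excerpts above describe LogitBoost, i.e. Friedman, Hastie, and Tibshirani's Newton-step fit of an additive symmetric logistic model. A minimal two-class sketch follows, assuming regression stumps as the weak learner and a standard clipping constant for the working response; neither detail is taken from the citing papers.

```python
# Two-class LogitBoost sketch; the stump weak learner and the clip
# bounds are illustrative assumptions, not the cited implementations.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def logitboost_fit(X, y, n_rounds=50):
    """y must be in {0, 1}. Returns the list of fitted stumps."""
    F = np.zeros(len(y))      # additive model on the half-log-odds scale
    p = np.full(len(y), 0.5)  # current probability estimates P(y=1|x)
    stumps = []
    for _ in range(n_rounds):
        w = np.clip(p * (1.0 - p), 1e-5, None)      # Newton weights
        z = np.clip((y - p) / w, -4.0, 4.0)         # working response
        stump = DecisionTreeRegressor(max_depth=1)  # assumed weak learner
        stump.fit(X, z, sample_weight=w)            # weighted least squares
        stumps.append(stump)
        F += 0.5 * stump.predict(X)                 # half Newton step
        p = 1.0 / (1.0 + np.exp(-2.0 * F))          # p = e^F / (e^F + e^-F)
    return stumps

def logitboost_predict_proba(stumps, X):
    """Probability of class 1 under the fitted additive model."""
    F = 0.5 * sum(s.predict(X) for s in stumps)
    return 1.0 / (1.0 + np.exp(-2.0 * F))
```

Each round fits a weighted least-squares stump to the working response z = (y − p)/(p(1 − p)), the per-example Newton step for the binomial log-likelihood; the multi-class variant in the second excerpt applies the same update once per class.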
“…Hence, although the 84 ANN models had different architectures and initial weights (but were trained on the same training data set), the trained models predicted nearly the same NINO3.4 index values. Bootstrap-aggregating (bagging) methods [77] can be used to obtain a larger and more realistic ensemble spread. Another approach to better estimate the predictive uncertainty for neural network models is the so-called Bayesian neural networks (BNN).…”
Section: Prediction Uncertainty
Confidence: 99%
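The last excerpt uses bagging to widen an under-dispersive model ensemble. A hedged sketch of that idea follows, training small networks on bootstrap resamples and reading the prediction spread off the member disagreement; the toy regression data and MLP settings are illustrative assumptions, not the cited study's setup.

```python
# Bagged-network uncertainty sketch; data and settings are stand-ins,
# not the cited NINO3.4 experiment.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                       # stand-in predictors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=300)

members = []
for seed in range(20):                              # 20 bagged networks
    idx = rng.integers(0, len(y), size=len(y))      # bootstrap resample
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                       random_state=seed)
    members.append(net.fit(X[idx], y[idx]))

preds = np.stack([m.predict(X) for m in members])   # (members, samples)
mean, spread = preds.mean(axis=0), preds.std(axis=0)
print(f"average ensemble spread: {spread.mean():.3f}")
```

The member standard deviation plays the role of the ensemble spread discussed in the excerpt: resampling decorrelates the members, which the 84 networks trained on identical data lacked.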