2nd International Conference on Data, Engineering and Applications (IDEA) 2020
DOI: 10.1109/idea49133.2020.9170675
Ensemble Learning Techniques and its Efficiency in Machine Learning: A Survey

Cited by 68 publications (26 citation statements); references 24 publications.
“…The boosting method, which converts weak learners into a strong learner over successive iterations, has a strong theoretical foundation and well-studied algorithmic properties [20,21]. Whether the method proceeds to the next round of training is determined by the classification accuracy.…”
Section: Integrated Learning Methods
confidence: 99%
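The accuracy-driven re-weighting the excerpt describes can be sketched as a minimal AdaBoost on one-dimensional threshold stumps. This is an illustrative toy, not the surveyed implementation: the dataset, the candidate thresholds, and the helper names (`stump_predict`, `best_stump`, `adaboost`) are all assumptions made for the example.

```python
import math

# Toy 1-D dataset: points and their labels (+1 / -1).
# No single threshold separates it, so boosting must combine stumps.
X = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
y = [1, 1, -1, -1, 1, 1]

def stump_predict(threshold, polarity, x):
    """Weak learner: predict +polarity above the threshold, -polarity below."""
    return polarity if x > threshold else -polarity

def best_stump(weights):
    """Pick the (threshold, polarity) pair with the lowest weighted error."""
    best = None
    for t in [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]:
        for pol in (1, -1):
            err = sum(w for w, xi, yi in zip(weights, X, y)
                      if stump_predict(t, pol, xi) != yi)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best

def adaboost(n_rounds=5):
    weights = [1.0 / len(X)] * len(X)
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(n_rounds):
        err, t, pol = best_stump(weights)
        err = max(err, 1e-10)                      # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)    # stump weight from its accuracy
        ensemble.append((alpha, t, pol))
        # Re-weight: misclassified points gain weight, so the next round of
        # training focuses on them -- this is the accuracy-determined
        # "next round" the excerpt refers to.
        weights = [w * math.exp(-alpha * yi * stump_predict(t, pol, xi))
                   for w, xi, yi in zip(weights, X, y)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps: the boosted strong learner."""
    score = sum(a * stump_predict(t, pol, x) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1

model = adaboost()
accuracy = sum(predict(model, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

On this toy set the best single stump gets 4 of 6 points right, while the boosted combination classifies all 6 correctly, illustrating how iterating over weak learners yields a strong one.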
“…After that, each partition may be used to train a distinct classifier, and the classifiers can then be merged using a suitable combination algorithm. If there is not enough data, bootstrapping may be used to train different classifiers on distinct bootstrap samples of the data, each of which is a random sample drawn with replacement and treated as if it were drawn independently from the underlying distribution [48].…”
Section: Machine Learning Models
confidence: 99%
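The bootstrap resampling described in the excerpt — drawing each classifier's training set with replacement from the same data — can be sketched in a few lines of standard-library Python. The function name, dataset, and seed are illustrative assumptions, not part of the cited work.

```python
import random

def bootstrap_samples(data, n_models, seed=0):
    """Draw one bootstrap sample per classifier: same size as the
    original data, sampled with replacement, so each sample repeats
    some points and omits others."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.choices(data, k=len(data)) for _ in range(n_models)]

data = list(range(10))
samples = bootstrap_samples(data, n_models=3)
```

Each of the three samples would then train its own classifier, whose outputs are merged by the chosen combination algorithm (e.g. voting), exactly as in bagging.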
“…It builds classifiers and combines their outputs to reduce variance. By combining classifiers, ensemble learning improves accuracy on the task compared to a single classifier [61]. Improving predictive-uncertainty evaluation is a difficult task, and an ensemble of models can help with this challenge [62].…”
Section: Ensemble Learning Approaches
confidence: 99%
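The claim that combining classifier outputs beats a single classifier can be made concrete with a small majority-vote sketch: three hypothetical classifiers each err on a different example, yet the vote recovers every label. The predictions and labels below are invented purely for illustration.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier prediction lists into one label per example
    by taking the most common vote for each position."""
    combined = []
    for votes in zip(*predictions):  # one vote per classifier for this example
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Three hypothetical classifiers, each correct on 4 of 5 examples,
# but wrong on *different* examples.
truth = ["a", "b", "a", "a", "b"]
preds = [
    ["a", "b", "a", "a", "a"],  # errs on example 4
    ["a", "b", "b", "a", "b"],  # errs on example 2
    ["b", "b", "a", "a", "b"],  # errs on example 0
]
combined = majority_vote(preds)
```

Because the individual errors are uncorrelated, every example still receives two correct votes out of three, so the combined prediction matches the truth on all five examples — the variance-reduction effect the excerpt describes.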
“…Improving predictive-uncertainty evaluation is a difficult task, and an ensemble of models can help with this challenge [62]. There are several ensemble approaches, each suited to specific tasks: Dynamic Selection, Sampling Methods, Cost-Sensitive Scheme, Patch-Ensemble Classification, Bagging, Boosting, AdaBoost, Random Forest, Random Subspace, Gradient Boosting Machine, Rotation Forest, Deep Neural Decision Forests, Bounding Box Voting, Voting Methods, Mixture of Experts, and Basic Ensemble [61,63–68].…”
Section: Ensemble Learning Approaches
confidence: 99%