2021
DOI: 10.1016/j.ins.2021.08.085

Improving the performance of bagging ensembles for data streams through mini-batching

Cited by 11 publications (6 citation statements)
References 20 publications

“…Since ensemble models with low correlations are preferred in these predictions, the sampling with replacement method allows more difference in the training dataset and, in turn, results in greater differences between the predictions of the base learners. It is worth mentioning that the bagging process, depending on its number of iterations or combination with time series, could be computationally demanding to fit, as explained in [97]. Figure 5 shows a pseudo-code for a bagging NN ensemble algorithm; note that this is a simple example, and the actual implementation of bagging in neural networks may vary depending on each specific case and library.…”
Section: Ensemble Generation Methods (mentioning)
confidence: 99%
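The statement above describes bagging for neural networks: resampling the training set with replacement so that the base learners see different data and disagree more, then combining their predictions. As a minimal sketch (not the pseudo-code from the cited Figure 5), the following builds such an ensemble with scikit-learn's MLPClassifier; the class of base learner, the hyperparameters, and the function names are illustrative assumptions.

# Minimal sketch of a bagging ensemble of neural networks, assuming
# scikit-learn's MLPClassifier as the base learner and NumPy arrays as input.
# Names and parameters are illustrative, not taken from the cited papers.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_bagging_nn(X, y, n_estimators=10, random_state=0):
    """Train n_estimators MLPs, each on a bootstrap sample drawn with replacement."""
    rng = np.random.default_rng(random_state)
    members = []
    n = len(X)
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)           # sampling with replacement
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
        clf.fit(X[idx], y[idx])                    # each member sees a different resample
        members.append(clf)
    return members

def predict_bagging_nn(members, X):
    """Average the members' class probabilities and take the argmax.

    Assumes every class appears in each resample so the probability
    columns of all members line up.
    """
    probs = np.mean([m.predict_proba(X) for m in members], axis=0)
    return probs.argmax(axis=1)

Averaging the members' class probabilities is what damps the variance of any single network trained on one resample, which is why low correlation between members is desirable.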
“…Ensemble modeling can be based on any of the three hypotheses, namely bagging, boosting, and stacking. A bagging method lowers the model's variance (Cassales et al, 2021). A boosting method decreases the model's bias (Trizoglou et al, 2021), and stacking enhances the model's predictive power with fine accuracy (Radhakrishnan et al, 2021).…”
Section: Literature Review (mentioning)
confidence: 99%
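The variance-reduction claim for bagging quoted above can be checked empirically. The snippet below is a small, self-contained demonstration on synthetic data; the dataset and parameters are arbitrary choices, not taken from the cited works. A bagged ensemble of decision trees usually shows a higher mean and a smaller spread of cross-validation scores than a single tree.

# Illustration of the variance-reduction claim: compare one decision tree
# with a bagged ensemble of trees (BaggingClassifier's default base learner)
# on the same synthetic data. Demonstration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

single_tree = DecisionTreeClassifier(random_state=42)
bagged_trees = BaggingClassifier(n_estimators=50, random_state=42)

tree_scores = cross_val_score(single_tree, X, y, cv=10)
bag_scores = cross_val_score(bagged_trees, X, y, cv=10)

# Bagging typically yields a lower spread across folds, reflecting the
# reduced variance of the averaged predictor.
print(f"single tree : {tree_scores.mean():.3f} +/- {tree_scores.std():.3f}")
print(f"bagged trees: {bag_scores.mean():.3f} +/- {bag_scores.std():.3f}")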
“…Algorithm 1 process minibatch (...) // As proposed in [Cassales et al 2021]
1: Input: mini-batch B
2: for each trainer T_i in trainers T do in parallel ▷ The classification loop
3: …”
Section: Improvement Mini-batching With Loop Fusion For Energy Saving (mentioning)
confidence: 99%
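The truncated Algorithm 1 quoted above iterates over the ensemble's trainers in parallel, first classifying the mini-batch and then training on it. The Python sketch below conveys that test-then-train pattern with a thread pool; it is an assumption-laden illustration, not the implementation from [Cassales et al 2021] or its loop-fusion variant, and the trainer interface (predict/partial_fit) and the Poisson resampling are assumed.

# Rough sketch of processing one mini-batch across an ensemble of
# incremental learners in parallel: classify first, then train.
# Illustration only; not the cited implementation.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def process_minibatch(trainers, X_batch, y_batch, rng):
    """Test-then-train on one mini-batch, one worker per ensemble member.

    Assumes each trainer exposes predict() and partial_fit() on NumPy
    arrays (e.g. an already-initialised incremental scikit-learn model)
    and that class labels are small non-negative integers.
    """
    # Draw online-bagging-style Poisson weights up front, keeping the
    # shared RNG out of the worker threads.
    weights = [rng.poisson(1.0, size=len(X_batch)) for _ in trainers]

    def run_member(model, w):
        preds = model.predict(X_batch)                       # classification loop
        mask = w > 0
        if mask.any():
            model.partial_fit(X_batch[mask], y_batch[mask])  # training loop
        return preds

    with ThreadPoolExecutor(max_workers=len(trainers)) as pool:
        all_preds = list(pool.map(run_member, trainers, weights))

    # Combine members by majority vote over the mini-batch.
    votes = np.stack(all_preds)
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)

Handing each member a whole mini-batch rather than single examples amortizes per-example overheads across the ensemble, which is the kind of performance benefit the cited mini-batching work targets.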