2022
DOI: 10.1016/j.ins.2022.01.010
Bayesian optimization based dynamic ensemble for time series forecasting

Cited by 110 publications (23 citation statements) · References 37 publications
“…Hyperparameter tuning or optimization is a robust method of identifying and finding the best feasible values of hyperparameters for a machine learning model to attain the desired resultant modeling outcome. Popular hyperparameter tuning algorithms in the literature include random search, grid search, and Bayesian optimization search [36, 37].…”
Section: Theoretical Background
confidence: 99%
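The two non-Bayesian tuning strategies named above can be sketched with scikit-learn; the model (Ridge) and parameter ranges here are illustrative assumptions, not from the cited papers.

```python
# Sketch: grid search vs. random search for hyperparameter tuning.
from scipy.stats import uniform
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

# Grid search: exhaustively evaluates every combination on a fixed grid.
grid = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
grid.fit(X, y)

# Random search: samples candidate values from a distribution instead.
rand = RandomizedSearchCV(Ridge(), {"alpha": uniform(0.01, 10.0)},
                          n_iter=10, cv=5, random_state=0)
rand.fit(X, y)

print(grid.best_params_, rand.best_params_)
```

Random search often matches grid search at a fraction of the evaluations when only a few hyperparameters actually matter; Bayesian optimization goes further by choosing each candidate adaptively (sketched after the third excerpt below is out of scope here).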
“…Various ways of combining the weak learners can be used to create a model that has greater capacity, such as bagging, boosting, and stacking [52]. Further optimized models, such as the Bayesian optimization-based dynamic ensemble proposed by Du et al. [53], can be used, and are even applied with nonlinear data [54].…”
Section: Related Work
confidence: 99%
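The three combination schemes this excerpt names can be illustrated in a few lines with scikit-learn; the base learners and settings are assumptions for the sketch, not the dynamic ensemble of Du et al.

```python
# Sketch: bagging, boosting, and stacking as weak-learner combiners.
from sklearn.datasets import make_regression
from sklearn.ensemble import (BaggingRegressor, GradientBoostingRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=300, n_features=8, noise=0.2, random_state=0)

# Bagging: averages trees trained on bootstrap resamples of the data.
bagging = BaggingRegressor(DecisionTreeRegressor(max_depth=3),
                           n_estimators=25, random_state=0).fit(X, y)

# Boosting: each new tree fits the residuals of the ensemble so far.
boosting = GradientBoostingRegressor(n_estimators=50, random_state=0).fit(X, y)

# Stacking: a meta-learner (here Ridge) combines base-model predictions.
stacking = StackingRegressor(
    estimators=[("bag", bagging), ("boost", boosting)],
    final_estimator=Ridge()).fit(X, y)

print(round(stacking.score(X, y), 3))
```

A dynamic ensemble goes beyond this static combination by re-weighting the base models over time, which is where the cited paper applies Bayesian optimization.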
“…The acquisition function measures the utility of candidate points for the next evaluation by trading off the exploration of uncertain areas and exploiting promising regions. BO has been extensively used in many scenarios, including hyperparameter tuning [9, 14, 31, 42, 68], experimental design [22] and controller tuning [11, 20, 21, 46]. Contextual BO considers the environmental conditions by augmenting the GP kernel with extra context variables and uses CGP-UCB to select promising action [35].…”
Section: Related Work
confidence: 99%
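The exploration/exploitation trade-off described above can be made concrete with one Bayesian-optimization step: fit a Gaussian-process surrogate to the points observed so far, then pick the candidate that maximizes an acquisition function. This minimal sketch uses expected improvement (EI) on an assumed toy 1-D objective; it is an illustration of the general technique, not the CGP-UCB variant cited here.

```python
# Sketch: one BO step with a GP surrogate and expected improvement (EI).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):                     # illustrative 1-D function to minimize
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 2, size=(4, 1))          # initial design points
y_obs = objective(X_obs).ravel()

gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X_obs, y_obs)

def expected_improvement(X_cand):
    # High EI where the mean is promising (exploitation) or the
    # uncertainty is large (exploration).
    mu, sigma = gp.predict(X_cand, return_std=True)
    best = y_obs.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Evaluate EI on a candidate grid and pick the maximizer as the next query.
X_cand = np.linspace(0, 2, 200).reshape(-1, 1)
x_next = X_cand[np.argmax(expected_improvement(X_cand))].item()
print(x_next)
```

UCB-style acquisitions (as in CGP-UCB) replace EI with mean minus a multiple of the predictive standard deviation, making the exploration weight an explicit parameter.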