2021
DOI: 10.1016/j.compag.2021.106039
Evaluation of stacking and blending ensemble learning methods for estimating daily reference evapotranspiration

Cited by 97 publications (60 citation statements)
References: 58 publications
“…As this case study belongs to the regression category, statistical performance metrics such as mean absolute error (MAE), root mean square error (RMSE), Nash–Sutcliffe efficiency coefficient (NSE), and coefficient of determination (R²) between actual and predicted ETo were used to analyse the performance of the proposed DNN model and the other baseline machine learning models of this study. RMSE, a goodness-of-fit metric, is the standard deviation of the discrepancies between the predicted and the actual values (Yaseen et al. 2018; Wu et al. 2021). Analogous to RMSE, MAE is a goodness-of-fit measure that does not account for the direction of the errors (Yaseen et al. 2018; Wu et al. 2021).…”
Section: Methods
confidence: 99%
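The four metrics named in this excerpt are straightforward to compute. Below is a minimal sketch assuming observed and predicted ETo series held in NumPy arrays; the function and array names are illustrative, not taken from the cited papers, and R² is computed here as the squared Pearson correlation, one common convention in the ETo literature.

```python
import numpy as np

def evaluate(y_obs: np.ndarray, y_pred: np.ndarray) -> dict:
    """Compute MAE, RMSE, NSE, and R² between observed and predicted ETo."""
    err = y_pred - y_obs
    mae = np.mean(np.abs(err))                # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))         # root mean square error
    # Nash-Sutcliffe efficiency: 1 minus error variance over observed variance
    nse = 1.0 - np.sum(err ** 2) / np.sum((y_obs - y_obs.mean()) ** 2)
    # Squared Pearson correlation; note some studies define R² differently
    r = np.corrcoef(y_obs, y_pred)[0, 1]
    return {"MAE": mae, "RMSE": rmse, "NSE": nse, "R2": r ** 2}

# Usage: metrics = evaluate(eto_observed, eto_predicted)
```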
“…Tree-based ETo estimation models, such as RF (Feng et al. 2017a; Wang et al. 2019; Karimi et al. 2020), gradient boosting decision tree (Ponraj and Vigneswaran 2020), XGBoost (Fan et al. 2018; Wu and Fan 2019), and light gradient boosting machine (Fan et al. 2019), were observed to provide more stable results, faster predictions, handling of large datasets, and the capability to prevent over-fitting, in comparison with other soft computing techniques. Recently, Wu et al. (2021) unveiled stacking and blending ensemble ETo models and demonstrated their superiority over basic machine learning and empirical models in prediction precision, stability, portability, and computing cost under complete and minimal input scenarios. The strengths of ensemble and boosting techniques have given rise to these models and highlighted their state-of-the-art impact in various studies, and their applications in ETo modelling are now abundant in the literature.…”
Section: Introduction
confidence: 99%
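To make the tree-based approach in this excerpt concrete, here is a hypothetical sketch of a gradient-boosted ETo regressor using scikit-learn's GradientBoostingRegressor; the synthetic data, the five stand-in meteorological features, and all hyperparameters are placeholders, not those of the cited studies.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in meteorological inputs (e.g. Tmax, Tmin, Rs, RH, u2), synthetic only
X = rng.normal(size=(1000, 5))
y = X @ np.array([0.5, 0.2, 0.8, -0.3, 0.1]) + rng.normal(scale=0.1, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(
    n_estimators=300, learning_rate=0.05, max_depth=3, random_state=0
)
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```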
“…Many weak-learner instances of an algorithm are pooled together (via boosting, bagging, etc.) to create a strong ensemble learner, with some success [55,56]. Thus, researchers need to pay more attention to ensemble learning.…”
Section: Performance of the Models
confidence: 99%
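The pooling of weak learners described in this excerpt can be illustrated with bagging. A minimal sketch, assuming scikit-learn, combines deliberately shallow decision trees into one ensemble; the estimator choices are illustrative only.

```python
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

# A deliberately weak base learner: a depth-2 regression tree
weak = DecisionTreeRegressor(max_depth=2)

# Bagging trains many copies on bootstrap resamples and averages their outputs
# (keyword is `base_estimator` in scikit-learn versions before 1.2)
ensemble = BaggingRegressor(estimator=weak, n_estimators=100, random_state=0)

# Usage: ensemble.fit(X_train, y_train); ensemble.predict(X_test)
```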
“…In this study, we consider two ensemble methods: blending and stacking. The idea of blending [81] is to combine different ML algorithms and use a majority vote,
Section: Ensemble Learning
confidence: 99%
“…or the average of the predicted probabilities, in the case of classification, to predict the final outcome. On the other hand, the stacking method [81], [82] treats the outputs of the base estimators as inputs for training a second-level model on top of a selected meta-estimator. Ensemble learning usually demonstrates superior performance to single models, because the strategy of model aggregation can find more distinguishable patterns that may not be seen by any single model.…”
Section: Ensemble Learning
confidence: 99%
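The second-level stacking described in these excerpts maps directly onto off-the-shelf tooling. Below is a minimal sketch assuming scikit-learn's StackingRegressor with illustrative base and meta estimators; the `cv` argument makes the meta-estimator train on out-of-fold base predictions, which keeps the second-level model from simply memorizing the base learners' training fit.

```python
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

stack = StackingRegressor(
    # First-level (base) estimators; choices here are illustrative
    estimators=[
        ("rf", RandomForestRegressor(random_state=0)),
        ("svr", SVR()),
    ],
    final_estimator=Ridge(),  # meta-estimator trained on base-model outputs
    cv=5,                     # out-of-fold predictions avoid target leakage
)

# Usage: stack.fit(X_train, y_train); stack.predict(X_test)
```

For regression tasks such as ETo estimation, the "majority vote" form of blending quoted above has no direct analogue; averaging the base models' predictions plays the equivalent role.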