2020
DOI: 10.1007/s13762-019-02619-6

Accuracy enhancement for monthly evaporation predicting model utilizing evolutionary machine learning methods

Cited by 25 publications (11 citation statements)
References: 33 publications
“…Because of the small data sets used, researchers often perform feature selection/reduction to avoid overfitting. Most often, the selected features in the literature are combinations of features derived from previous time steps in the data; for example, a parameter at month n may be predicted based on one or more parameter values taken from months previous to n [25-27,46,63].…”
Section: Literature Review and Scope of the Research
confidence: 99%
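The lag-based feature construction described in the statement above can be sketched in a few lines. This is not code from the cited papers; the function name make_lag_features, the three-lag window, and the synthetic evaporation values are illustrative assumptions.

```python
import pandas as pd

def make_lag_features(series: pd.Series, n_lags: int = 3) -> pd.DataFrame:
    """Build a supervised-learning table: target at month n, features from months n-1..n-n_lags."""
    frame = pd.DataFrame({"target": series})
    for lag in range(1, n_lags + 1):
        frame[f"lag_{lag}"] = series.shift(lag)  # value observed `lag` months earlier
    return frame.dropna()  # drop the first n_lags months, which lack a full history

# Synthetic monthly evaporation values, for illustration only
evap = pd.Series([112.4, 98.7, 130.2, 141.9, 155.3, 160.1, 149.8, 133.0],
                 name="evaporation_mm")
print(make_lag_features(evap))
```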
“…Some papers use MLR based on previous lags [13,16,17,25], while the authors in [45] used same-month averages. Some papers do not use simple baselines, but rather compare several variations or architectures of more advanced ML methods such as SVR or MLP [26,27,46,63]. In summary, simple baselines are not consistently used in the literature.…”
Section: Literature Review and Scope of the Research
confidence: 99%
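The two simple baselines mentioned in the statement above, MLR on previous lags and a same-month average, can be illustrated with a minimal sketch on synthetic monthly data; nothing here is drawn from the cited papers.

```python
import numpy as np

def mlr_on_lags(y: np.ndarray, n_lags: int = 3) -> np.ndarray:
    """Least-squares fit of y[t] on y[t-1], ..., y[t-n_lags]; returns in-sample fitted values."""
    X = np.column_stack([y[n_lags - lag: len(y) - lag] for lag in range(1, n_lags + 1)])
    X = np.column_stack([np.ones(len(X)), X])          # intercept column
    coef, *_ = np.linalg.lstsq(X, y[n_lags:], rcond=None)
    return X @ coef

def same_month_average(y: np.ndarray, period: int = 12) -> np.ndarray:
    """Predict every month with the mean of all observations from the same calendar month."""
    month = np.arange(len(y)) % period
    return np.array([y[month == m].mean() for m in month])

# Synthetic seasonal series standing in for four years of monthly evaporation
rng = np.random.default_rng(0)
y = 100 + 30 * np.sin(2 * np.pi * np.arange(48) / 12) + rng.normal(0, 5, 48)
print("MLR-on-lags RMSE:", np.sqrt(np.mean((mlr_on_lags(y) - y[3:]) ** 2)))
print("Same-month RMSE: ", np.sqrt(np.mean((same_month_average(y) - y) ** 2)))
```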
“…It is necessary to consider two technical aspects in order to integrate the optimization algorithms with MLP, namely, the method for encoding the agents/solutions and the procedure for determining the objective function. Although the standalone MLP models have high predictive ability, their training algorithms may converge slowly or become trapped in local optima [21-34]. Therefore, it is essential to improve the accuracy of the MLP models.…”
Section: Optimization Algorithms for Training MLPs
confidence: 99%
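The two integration aspects named in the statement above, encoding candidate solutions and defining the objective function, can be sketched as follows for a one-hidden-layer MLP. The layer size, function names, and RMSE objective are assumptions for illustration, not the setup of the reviewed papers.

```python
import numpy as np

def decode(vector: np.ndarray, n_in: int, n_hidden: int):
    """Unpack a flat candidate solution into the weights of a one-hidden-layer MLP."""
    i = 0
    W1 = vector[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = vector[i:i + n_hidden];                                i += n_hidden
    W2 = vector[i:i + n_hidden];                                i += n_hidden
    b2 = vector[i]
    return W1, b1, W2, b2

def mlp_predict(vector: np.ndarray, X: np.ndarray, n_hidden: int = 8) -> np.ndarray:
    W1, b1, W2, b2 = decode(vector, X.shape[1], n_hidden)
    hidden = np.tanh(X @ W1 + b1)   # hidden-layer activation
    return hidden @ W2 + b2         # single linear output neuron

def objective(vector: np.ndarray, X: np.ndarray, y: np.ndarray, n_hidden: int = 8) -> float:
    """Objective minimized by the metaheuristic: training RMSE of the decoded network."""
    return float(np.sqrt(np.mean((mlp_predict(vector, X, n_hidden) - y) ** 2)))

# Solution dimensionality for n_in inputs and n_hidden hidden neurons
n_in, n_hidden = 3, 8
dim = n_in * n_hidden + n_hidden + n_hidden + 1
```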
“…Optimization algorithms are considered suitable alternatives to traditional training algorithms because their advanced operators avoid trapping in local optima. These optimization algorithms are widely used for training soft computing models [19-26]. The genetic algorithm (GA) and particle swarm optimization (PSO) are powerful optimization algorithms.…”
Section: Introduction
confidence: 99%
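As a rough illustration of the kind of optimizer referred to above, a compact global-best PSO sketch is shown below; the inertia and acceleration constants are common textbook values rather than those of the cited studies, and the toy sphere objective merely stands in for a network-training objective such as the one sketched earlier.

```python
import numpy as np

def pso_minimize(obj, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO with standard inertia (w) and acceleration (c1, c2) constants."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))   # candidate solution vectors
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([obj(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([obj(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Toy check on the sphere function; in the MLP setting `obj` would wrap the
# training-error objective sketched above.
best, best_val = pso_minimize(lambda v: float(np.sum(v ** 2)), dim=5)
print(best_val)
```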
“…In comparison to conventional NNs and linear regression models, the applied quantile regression forest provided better results. Mohammadi et al. [31] predicted monthly EP using integrative ANFIS, MLP and RBNN models for two stations in Iran. The results showed that the integrative ANFIS model performed better than the MLP and RBNN models.…”
Section: Introduction
confidence: 99%