2021
DOI: 10.4236/ojmh.2021.111001
Effects of Model Structural Complexity and Data Pre-Processing on Artificial Neural Network (ANN) Forecast Performance for Hydrological Process Modelling

Abstract: The choice of a particular Artificial Neural Network (ANN) structure is a seemingly difficult task; notably, there is no systematic way to establish a suitable architecture. In view of this, the study examined the effects of ANN structural complexity and data pre-processing regime on forecast performance. To address this aim, two ANN structural configurations were employed: 1) a single-hidden-layer and 2) a double-hidden-layer feed-forward back-propagation network. Results obtained rev…

Cited by 1 publication (1 citation statement)
References 20 publications
“…The aim was to provide the architecture with more training data so that it could more easily recognize data patterns and trends. Studies that used an architecture with two layers include hydrological data prediction [32], inflation data prediction [33], and single-shaft gas turbine prediction [34]. Asgari et al. [34] also noted that the optimal model in the training process was obtained using trainlm as the training function, with tansig and logsig as the transfer functions for the hidden layers and output.…”
Section: Introduction
Confidence: 99%
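The citing statement above names MATLAB Neural Network Toolbox conventions: trainlm (Levenberg-Marquardt training), tansig, and logsig. As a minimal sketch of what such a double-hidden-layer feed-forward pass computes, the following assumes tansig is mathematically equivalent to tanh and logsig to the logistic sigmoid; the layer sizes and weights are purely illustrative, not taken from the paper:

```python
import numpy as np

def tansig(x):
    # MATLAB's tansig is mathematically the hyperbolic tangent
    return np.tanh(x)

def logsig(x):
    # MATLAB's logsig is the logistic sigmoid
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, biases):
    """Forward pass: tansig on each hidden layer, logsig on the output layer."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = tansig(W @ a + b)
    return logsig(weights[-1] @ a + biases[-1])

# Illustrative double-hidden-layer network: 3 inputs -> 5 -> 4 -> 1 output
rng = np.random.default_rng(0)
sizes = [3, 5, 4, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]
y = forward(rng.standard_normal(3), weights, biases)
print(y.shape)  # one output value, bounded in (0, 1) by logsig
```

Training these weights with Levenberg-Marquardt (trainlm) is a separate step not shown here; this sketch only illustrates the forward computation implied by the cited architecture.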