2016 IEEE International Conference on Automatic Control and Intelligent Systems (I2CACIS) 2016
DOI: 10.1109/i2cacis.2016.7885321
Artificial neural network flood prediction for sungai isap residence

Cited by 12 publications (10 citation statements) · References 9 publications
“…Works implementing ANNs are characterized by their simple implementation for predicting values, since an effective model can be generated for different problems when the time series is organized with suitable variables. This is observed in the studies carried out in [13,14,15,16] and [17], where the authors demonstrate that using ANNs to predict floods is adequate, as the Root Mean Square Error (RMSE) ranges between 0.0007540 and 0.93. These studies focused on the characteristics that might make an artificial neural network more effective; for instance, the analysis undertaken by Johannet et al [18] concluded that results improved when the network had hidden layers.…”
Section: Introduction
confidence: 84%
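The RMSE range quoted above is the standard error metric in these flood-prediction studies. A minimal sketch of how it is computed (the water-level values below are illustrative only, not data from the cited works):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error between observed and ANN-predicted values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Illustrative observed vs. predicted values (hypothetical, not from the papers).
observed  = [1.20, 1.45, 1.80, 2.10]
predicted = [1.18, 1.50, 1.75, 2.05]
print(rmse(observed, predicted))
```

A lower RMSE indicates predictions closer to the observations, which is why values near 0.0007540 signal a much tighter fit than values near 0.93.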
“…In equation (13), ( ) corresponds to the reference MSE, i.e. the MSE obtained with the simplest model. The simplest possible model would always predict the average of all samples.…”
Section: Performance Metric
confidence: 99%
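The reference MSE described above — the error of a model that always predicts the sample mean — can be sketched as follows. The function names are illustrative, not from the cited paper:

```python
import numpy as np

def reference_mse(y_true):
    """MSE of the simplest possible model: always predicting the mean of all samples.
    This equals the variance of y_true."""
    y_true = np.asarray(y_true, float)
    return float(np.mean((y_true - y_true.mean()) ** 2))

def normalized_score(y_true, y_pred):
    """One minus the model MSE divided by the reference MSE (hypothetical helper:
    1.0 means perfect prediction, 0.0 means no better than the mean predictor)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mse = float(np.mean((y_true - y_pred) ** 2))
    return 1.0 - mse / reference_mse(y_true)
```

Normalizing by the reference MSE makes the metric scale-free: any model worth using must beat the trivial mean predictor.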
“…Three different optimization algorithms, LM, GD, and BR, each using back-propagation, were applied to optimize the ANN result. The prediction result of BR was satisfactory [Keong et al 2016]. Eight different ML models were implemented and their results compared.…”
Section: Related Work
confidence: 99%
“…Determination of the best ANN topology is important because it affects the weights and biases. Usually it is performed by trial and error [19][20] or one-variable-at-a-time (OVAT) [21][22], but this procedure is very time-consuming and monotonous. According to [23], with three different levels for each of five ANN variables, about 243 (= 3^5) different ANN configurations would be required.…”
Section: Introduction
confidence: 99%
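The combinatorial explosion described above can be made concrete with a small sketch that enumerates a full factorial grid. The five design variables and their three candidate levels below are hypothetical placeholders, not the ones used in [23]:

```python
import itertools

# Hypothetical search space: three candidate levels for each of five
# ANN design variables (names and values are illustrative only).
search_space = {
    "hidden_neurons": [5, 10, 20],
    "learning_rate":  [0.001, 0.01, 0.1],
    "momentum":       [0.5, 0.7, 0.9],
    "activation":     ["logsig", "tansig", "purelin"],
    "epochs":         [100, 500, 1000],
}

# Full factorial enumeration: every combination of one level per variable.
configs = [dict(zip(search_space, values))
           for values in itertools.product(*search_space.values())]

print(len(configs))  # 3**5 = 243 configurations to train and evaluate
```

Training 243 networks one by one is exactly the tedium the quoted passage points at, which motivates more systematic design-of-experiments approaches over exhaustive trial and error.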