2023
DOI: 10.1029/2022wr032789
Effects of Automatic Hyperparameter Tuning on the Performance of Multi‐Variate Deep Learning‐Based Rainfall Nowcasting

Abstract: Climate change has increased the importance of rainfall forecasting, because it has changed the duration and frequency of floods and increased their magnitude (Ashrafi, Khoie, et al., 2022). As a result of inundation, landslides, and debris flows in urban areas, heavy rainfall often leads to fatalities and significant financial damage (Seo et al., 2014). Furthermore, overflow resulting from heavy rainfall is a primary contributor to the transport of water pollutants (Lintern et al., 2018). Various preventive m…

Cited by 10 publications (3 citation statements) · References 57 publications (76 reference statements)
“…For example, the hyperparameters of the ConvLSTM network are the number of hidden units, the filter size, the mini-batch size, the learning rate, and the dropout probability. In general, setting parameters by rule of thumb or by adopting values from previous research yields good results, but we tried to find the best parameters for optimal performance using automatic parameter tuning for DNNs [47]. Automatic hyperparameter tuning algorithms include grid search, random search, and Bayesian optimization.…”
Section: Methods
confidence: 99%
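The random-search strategy named in the statement above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the search space, its ranges, and the toy objective are assumptions, and a real run would train and validate a ConvLSTM for each sampled configuration.

```python
import random

# Hypothetical search space over the ConvLSTM hyperparameters named in the
# text; the candidate values are illustrative, not the paper's.
SEARCH_SPACE = {
    "hidden_units": [32, 64, 128],
    "filter_size": [3, 5, 7],
    "batch_size": [16, 32, 64],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "dropout": [0.1, 0.3, 0.5],
}

def random_search(evaluate, n_trials=20, seed=0):
    """Sample n_trials random configurations and return the best one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(cfg)  # lower is better, e.g. validation loss
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Stand-in objective for demonstration only: favours a mid-sized model with
# a moderate learning rate (in practice this would be a full training run).
def toy_objective(cfg):
    return abs(cfg["hidden_units"] - 64) / 64 + abs(cfg["learning_rate"] - 1e-3)

best, score = random_search(toy_objective)
```

Grid search would instead enumerate the full Cartesian product of the candidate lists, and Bayesian optimization would fit a surrogate model to past trials to pick the next configuration; random search is often a strong, cheap baseline between the two.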
“…Table 1 presents the hyperparameters used in the study. The Adam (adaptive moment estimation) optimizer, a gradient-descent-based algorithm that combines the advantageous characteristics of AdaGrad and RMSProp, has proven effective in rainfall nowcasting, converging gradually to the minimum of the loss function (Amini et al., 2023). It is also robust to outliers (Kim and Han, 2020).…”
Section: Convolutional Long Short-Term Memory (ConvLSTM)
confidence: 99%
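The Adam update described above — an exponentially decaying gradient average (momentum, as in RMSProp's running statistics) combined with a per-parameter adaptive step size (as in AdaGrad) — can be written out in a few lines. This is a generic sketch of the standard Adam rule, not the paper's training code; the learning rate and toy objective are illustrative.

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter (t is the 1-based step index)."""
    m = beta1 * m + (1 - beta1) * grad        # first moment: decaying mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment: decaying mean of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias correction for the zero-initialised moments
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# Minimise f(x) = x^2 from x = 5; the gradient is 2x.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2.0 * x, m, v, t, lr=0.1)
```

The division by `sqrt(v_hat)` normalises the step by the recent gradient magnitude, which is what gives Adam its gradual, stable convergence and its tolerance of occasional extreme gradient values.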
“…The Rectified Linear Unit (ReLU) activation function has been observed to mitigate the vanishing-gradient problem. Activation functions such as ReLU and its variants, including Leaky ReLU, are frequently employed in deep learning (Moishin et al., 2021; Kumar, 2021; Amini et al., 2023). ReLU has a constant derivative for positive inputs, which allows the training process to continue even when input values are extreme (Kim & Han, 2020).…”
Section: Convolutional Long Short-Term Memory (ConvLSTM)
confidence: 99%
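The property cited above — a constant derivative of 1 for positive inputs, however extreme — is easy to see from the definitions. A minimal scalar sketch of ReLU and Leaky ReLU with their derivatives (the slope `alpha` is the conventional default, not a value from the paper):

```python
def relu(x):
    """max(0, x): identity for positive inputs, zero otherwise."""
    return x if x > 0.0 else 0.0

def relu_grad(x):
    # Constant derivative (1) for any positive input, no matter how large,
    # so the gradient does not vanish for extreme activations.
    return 1.0 if x > 0.0 else 0.0

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: a small slope alpha for negative inputs."""
    return x if x > 0.0 else alpha * x

def leaky_relu_grad(x, alpha=0.01):
    # Negative inputs still receive a (small) gradient, which mitigates
    # "dying" units whose ReLU gradient would be exactly zero.
    return 1.0 if x > 0.0 else alpha
```

By contrast, sigmoid and tanh saturate for large inputs, with derivatives approaching zero, which is the mechanism behind the vanishing-gradient issue ReLU avoids on the positive side.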