2022
DOI: 10.3390/w14142221
Water Level Prediction Model Applying a Long Short-Term Memory (LSTM)–Gated Recurrent Unit (GRU) Method for Flood Prediction

Abstract: The damage caused by floods is increasing worldwide, and if floods can be predicted, the economic and human losses they cause can be reduced. A key parameter in flooding is the water level, and this paper proposes a water level prediction model using long short-term memory (LSTM) and a gated recurrent unit (GRU). Upstream and downstream water levels and meteorological data, including temperature, humidity, and precipitation, were used as input variables. The best results were obtained when the LSTM…
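The abstract describes a stacked LSTM-GRU architecture fed with water level and meteorological inputs. A minimal sketch of such a model follows; the window length, layer sizes, and the exact feature set are assumptions for illustration, not taken from the paper.

```python
# Minimal LSTM-GRU water level predictor sketch. The 24-step window,
# layer sizes, and 5 features (upstream/downstream level, temperature,
# humidity, precipitation) are illustrative assumptions.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, GRU, Dense

window, n_features = 24, 5
model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(window, n_features)),
    GRU(32),      # GRU consumes the LSTM's full sequence output
    Dense(1),     # next-step water level
])
model.compile(optimizer="adam", loss="mse")

# Dummy data with the expected shapes: (samples, window, features).
X = np.random.rand(100, window, n_features).astype("float32")
y = np.random.rand(100, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```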

Cited by 41 publications (24 citation statements) · References 42 publications
“…The BiLSTM network structure is illustrated in Figure 5. In Figure 5, the LSTM layer incorporates three key gates: the forget, input, and output gates [26]. The forget gate is represented by f_t; the input gate is represented by i_t; and the output gate is represented by o_t.…”
Section: U-Net-BiLSTM Network Architecture
confidence: 99%
“…The input gate controls how much information about the current state is input into the memory cell, and the output gate regulates the extent of output based on the current cell state c_t. The specific implementation formulas are presented as follows: In Figure 5, the LSTM layer incorporates three key gates: the forget, input, and output gates [26]. The forget gate is represented by f_t; the input gate is represented by i_t; and the output gate is represented by o_t.…”
Section: U-Net-BiLSTM Network Architecture
confidence: 99%
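The formulas referenced in the quote above did not survive extraction. The standard LSTM gate equations, in the generic formulation rather than necessarily the cited paper's exact notation, are:

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{forget gate} \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{input gate} \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{output gate} \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{candidate state} \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{cell state update} \\
h_t &= o_t \odot \tanh(c_t) && \text{hidden state}
\end{aligned}
```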
“…In an RNN, a node contains only one neuron, while LSTMs and GRUs introduce an internal mechanism, the “gate”, that modulates the flow of previous information. [40,41] The fundamental concept of the LSTM is the cell state and its gates. [42] The cell state acts as a pathway that carries information along the sequence chain, serving as the memory of the network.…”
Section: From Artificial Neural Network to Physical Reservoir Computing
confidence: 99%
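The cell state and gate mechanics described in this quote can be made concrete with a single NumPy LSTM time step implementing the standard equations above; the stacked weight layout is an illustrative assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4h, d), U: (4h, h), b: (4h,),
    with rows ordered [forget, input, output, candidate]."""
    h = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b
    f_t = sigmoid(z[0:h])           # forget gate
    i_t = sigmoid(z[h:2*h])         # input gate
    o_t = sigmoid(z[2*h:3*h])       # output gate
    g_t = np.tanh(z[3*h:4*h])       # candidate cell state
    c_t = f_t * c_prev + i_t * g_t  # cell state: the "pathway" memory
    h_t = o_t * np.tanh(c_t)        # hidden state output
    return h_t, c_t

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
h_dim, d = 4, 3
h, c = np.zeros(h_dim), np.zeros(h_dim)
h, c = lstm_step(rng.random(d), h, c,
                 rng.random((4 * h_dim, d)),
                 rng.random((4 * h_dim, h_dim)),
                 np.zeros(4 * h_dim))
```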
“…The former assigns different weights to the hidden-layer outputs at different times and then uses a weighted summation to obtain an RNN context vector. The latter can be thought of as assigning different attention weights to the various dimensions of the output vector [38,39]. Thus, this paper proposes an attention mechanism based on BiGRU to address the reduced prediction accuracy caused by the extended time span and the insufficient use of spatial information in the water-level-prediction task.…”
Section: Spatial-Reduction Attention
confidence: 99%
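The "weighted summation over hidden-layer outputs" in this quote is the usual attention pattern over recurrent states. A minimal sketch follows, assuming a generic additive scoring function; the function name, weight shapes, and scoring form are illustrative, not the cited paper's exact mechanism.

```python
import numpy as np

def attention_context(H, w, v):
    """Additive attention over hidden states H of shape (T, h):
    score each time step, softmax the scores, and return the
    weighted sum of states as the context vector."""
    scores = np.tanh(H @ w) @ v              # one score per time step, (T,)
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()                 # attention weights, sum to 1
    return weights @ H                       # context vector, shape (h,)

# Usage with random BiGRU-style hidden states.
T, h, k = 10, 32, 16
rng = np.random.default_rng(0)
ctx = attention_context(rng.random((T, h)), rng.random((h, k)), rng.random(k))
```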
“…This experiment took the average of the water levels at the six points surrounding the empty item: given x_1 through x_7 with x_4 as the empty item, x_4 was filled with (x_1 + x_2 + x_3 + x_5 + x_6 + x_7)/6. Finally, data outliers, such as extremely large and small water-level observations that clearly deviate from the average level of the series, were eliminated and filled in using the same missing-data processing method [39]. Furthermore, to hasten the convergence of the proposed model and improve its accuracy, maximum-minimum normalization was used so that all values are compressed within the interval [0, 1] [30].…”
Section: Data Processing
confidence: 99%
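The two preprocessing steps quoted here, neighbor-mean gap filling and maximum-minimum normalization, are simple to sketch. The function names and the treatment of series edges are assumptions; only the six-point average (three valid neighbors on each side) and the [0, 1] scaling come from the quote.

```python
import numpy as np

def fill_with_neighbor_mean(x, k=3):
    """Fill each NaN with the mean of up to k valid values on each
    side, mirroring the six-point average described above (k=3)."""
    x = x.astype(float).copy()
    for idx in np.flatnonzero(np.isnan(x)):
        neighbors = np.concatenate([x[max(idx - k, 0):idx],
                                    x[idx + 1:idx + 1 + k]])
        neighbors = neighbors[~np.isnan(neighbors)]
        if neighbors.size:
            x[idx] = neighbors.mean()
    return x

def min_max_normalize(x):
    """Compress values into [0, 1], as in the quoted preprocessing."""
    return (x - x.min()) / (x.max() - x.min())

levels = np.array([1.2, 1.3, 1.1, np.nan, 1.4, 1.5, 1.6])
print(min_max_normalize(fill_with_neighbor_mean(levels)))
```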