2020
DOI: 10.3390/forecast2020011
Performance Comparison between Deep Learning and Optical Flow-Based Techniques for Nowcast Precipitation from Radar Images

Abstract: In this article, a nowcasting technique for meteorological radar images based on a generative neural network is presented. This technique's performance is compared with state-of-the-art optical flow procedures. Both methods have been validated using a public-domain data set of radar images, covering an area of about 10⁴ km² over Japan and a period of five years, with a sampling frequency of five minutes. The performance of the neural network, trained with three of the five years of data, forecasts with a time …
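As context for the comparison, the optical-flow baseline family works by estimating a motion field between the two most recent radar frames and then advecting the latest frame along that field. The sketch below illustrates the idea with OpenCV's Farneback flow; the parameter values, six-step horizon, and backward warping scheme are illustrative assumptions, not the exact procedure benchmarked in the paper.

```python
# Minimal sketch of an optical-flow nowcast (one of the two technique
# families compared in the paper). Parameters are illustrative, not the
# authors' exact configuration.
import cv2
import numpy as np

def optical_flow_nowcast(frame_prev, frame_curr, steps=6):
    """frame_prev, frame_curr: uint8 2D arrays (e.g., scaled reflectivity).
    Returns `steps` extrapolated frames under a constant-motion assumption."""
    flow = cv2.calcOpticalFlowFarneback(
        frame_prev, frame_curr, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = frame_curr.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    frame = frame_curr.astype(np.float32)
    forecasts = []
    for _ in range(steps):
        # Backward semi-Lagrangian advection: sample each pixel from where
        # the (frozen) motion field says it came from.
        map_x = (grid_x - flow[..., 0]).astype(np.float32)
        map_y = (grid_y - flow[..., 1]).astype(np.float32)
        frame = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
        forecasts.append(frame)
    return forecasts
```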

Cited by 23 publications (18 citation statements) · References 22 publications
Citation types: 0 supporting, 18 mentioning, 0 contrasting · Citing years: 2020–2024

Citation statements (ordered by relevance):
“…Among various network architectures applied to the weather nowcasting problem, we opted for the PredNet model. This paper is the evolution of a previous study [22] in which we analyzed the performance of the PredNet architecture, highlighting its excellent performance in merely deterministic terms, but also its limitations and the need for a revisiting in probabilistic terms. As specified, the architecture is borrowed from computer vision, like the optical flow methodology, which represents the state-of-the-art in the field.…”
Section: PredNet (mentioning)
confidence: 99%
“…The basic idea of this work is to verify the feasibility and effectiveness of an approach that, by overcoming these limitations, manages to fully exploit the unquestionable predictive potential demonstrated by neural networks, and by PredNet [21,22] in particular. The chosen operational model is based on the interpretation of the neural network prediction as an optimal estimate of the mean of an ensemble of nowcasts, whose spread is in turn representative of the associated uncertainty.…”
Section: Introduction (mentioning)
confidence: 99%
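A minimal sketch of the ensemble interpretation described in the quote above: the network output plays the role of the ensemble mean, and the member spread serves as the uncertainty estimate. The array shapes, exceedance product, and function name are assumptions for illustration, not the cited paper's implementation.

```python
import numpy as np

def ensemble_stats(members: np.ndarray, threshold: float):
    """members: (n_members, H, W) stack of nowcast fields."""
    mean = members.mean(axis=0)                     # plays the role of the deterministic prediction
    spread = members.std(axis=0)                    # per-pixel uncertainty proxy
    p_exceed = (members >= threshold).mean(axis=0)  # probabilistic product, e.g. P(rain rate > threshold)
    return mean, spread, p_exceed
```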
“…is the number of the next iteration, and W_r is the weight of the r ∈ {OP, B3D, fc} component. From Equations (8) and (9), the weight is updated by taking the partial derivative of the loss. Next, we introduce the partial derivative processes of several modules, including the fc, OP, and B3D modules, which are expressed as follows:…”
Section: SOPNet (mentioning)
confidence: 99%
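For readers without access to the cited equations, the update the quote describes has the shape of a standard gradient-descent step on each component weight; the learning rate η and loss 𝓛 below are generic placeholders, and the paper's Equations (8) and (9) are not reproduced here:

```latex
W_r^{(i+1)} = W_r^{(i)} - \eta \, \frac{\partial \mathcal{L}}{\partial W_r^{(i)}},
\qquad r \in \{\mathrm{OP},\ \mathrm{B3D},\ \mathrm{fc}\}
```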
“…Moreover, satellite images are commonly used, and various CNNs have been applied to satellite images to identify and forecast precipitation intensity. For example, variants of popular CNNs, such as ResNet [6], U-Net [7], and GAN [8], have been used to analyze satellite and infrared images. Although standard CNNs can employ a 2D convolutional operator to extract and generate features of a series of images, standard CNNs cannot identify time-series relationships between images.…”
Section: Introduction (mentioning)
confidence: 99%
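The limitation noted in the quote, that a standard 2D CNN treats a frame sequence without an explicit notion of time, can be made concrete with a small PyTorch sketch; the tensor shapes, layer sizes, and GRU-based alternative are illustrative assumptions, not any cited paper's model.

```python
import torch
import torch.nn as nn

seq = torch.randn(1, 8, 1, 32, 32)    # (batch, time, channels, H, W): 8 radar frames

# (a) Standard 2D CNN: frames stacked as input channels, so temporal
# order is only implicit in channel position.
cnn = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3, padding=1)
feat = cnn(seq.squeeze(2))             # -> (1, 16, 32, 32)

# (b) Frame-by-frame processing with a recurrent state, which models the
# time-series relationship between images explicitly (ConvLSTM-style idea).
encoder = nn.Conv2d(1, 16, kernel_size=3, padding=1)
rnn = nn.GRUCell(16 * 32 * 32, 128)
h = torch.zeros(1, 128)
for t in range(seq.size(1)):
    x = encoder(seq[:, t]).flatten(1)  # spatial features of frame t
    h = rnn(x, h)                      # temporal state carries frame order
```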
“…The CNN is responsible for extracting spatial information. Deep learning extrapolation approaches usually perform better than optical flow methods [13], because they are not bound by the unrealistic constant-motion assumption and can effectively leverage valuable historical observations.…”
Section: Introduction (mentioning)
confidence: 99%