2021, Preprint
DOI: 10.48550/arxiv.2111.07470
Skillful Twelve Hour Precipitation Forecasts using Large Context Neural Networks

Cited by 26 publications (36 citation statements); references 0 publications.
“…Hybrid Learning (HyL), or gray-box modelling as it was called in its early days in the 1990s (Psichogios & Ungar, 1992; Rico-Martinez et al., 1994; Thompson & Kramer, 1994; Rivera-Sampayo & Vélez-Reyes, 2001; Braun & Chaturvedi, 2002), has been an appropriate method for learning models that are both expressive and interpretable, while also allowing them to be learned from less data. Interest in HyL (Mehta et al., 2020; Lei & Mirams, 2021; Reichstein et al., 2019; Saha et al., 2020; Guen & Thome, 2020; Levine & Stuart, 2021; Espeholt et al., 2021) has been greatly renewed by the emergence of recent neural network architectures that simplify the combination of physical equations with ML models. For example, Neural ODEs (Chen et al., 2018) and convolutional neural networks (CNNs; LeCun et al., 1995) are privileged architectures for working with dynamical systems described by ODEs or PDEs.…”
Section: Hybrid Modelling
Citation type: mentioning
Confidence: 99%
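As an aside on the gray-box idea described in this statement, the sketch below shows one minimal way such a hybrid model can be written in PyTorch: a simple physics prior (here an assumed, learnable damping term) is added to a small neural residual, and the resulting ODE is rolled out with explicit Euler as a stand-in for a Neural-ODE solver. The toy dynamics and all names are illustrative, not taken from any of the cited works.

```python
import torch
import torch.nn as nn

class HybridODE(nn.Module):
    """Gray-box dynamics: a known physics term plus a learned neural residual."""

    def __init__(self, hidden=32):
        super().__init__()
        # Illustrative physics prior: linear damping with a learnable rate.
        self.damping = nn.Parameter(torch.tensor(0.1))
        # Neural correction for whatever the physics term misses.
        self.residual = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 2)
        )

    def forward(self, y):
        # dy/dt = f_physics(y) + f_neural(y)
        return -self.damping * y + self.residual(y)

def euler_rollout(model, y0, dt=0.05, steps=100):
    """Integrate the hybrid ODE with explicit Euler (a stand-in for a Neural-ODE solver)."""
    ys = [y0]
    for _ in range(steps):
        ys.append(ys[-1] + dt * model(ys[-1]))
    return torch.stack(ys)

# The rollout would then be fitted to observed trajectories with an ordinary MSE loss.
trajectory = euler_rollout(HybridODE(), torch.tensor([1.0, 0.0]))
```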
“…This is especially relevant in our application, where we use global gridded data and try to directly incorporate and model existing global teleconnections, with the potential to uncover new ones. A wider field of view was found to be critical in Espeholt et al. 13, who used dilated convolutions, but deformable convolutions are in fact a generalization of dilated convolutions 19.…”
Section: Deformable Convolutional Neural Network
Citation type: mentioning
Confidence: 99%
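The relationship between dilated and deformable convolutions noted in this statement can be illustrated with a short PyTorch sketch; the channel counts, offset predictor, and input shape below are assumptions for illustration, not the architecture of either cited paper. A dilated kernel enlarges the receptive field on a fixed grid, whereas torchvision's DeformConv2d samples at learned per-location offsets, so the fixed dilated grid is just the special case of constant offsets.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

# A 3x3 convolution with dilation 4 widens the receptive field on a fixed sampling grid.
dilated = nn.Conv2d(16, 16, kernel_size=3, padding=4, dilation=4)

# A deformable convolution instead learns where to sample: a small conv predicts
# a (dy, dx) offset for each of the 3x3 kernel taps at every output location.
offset_pred = nn.Conv2d(16, 2 * 3 * 3, kernel_size=3, padding=1)
deform = DeformConv2d(16, 16, kernel_size=3, padding=1)

x = torch.randn(1, 16, 64, 64)        # e.g. one gridded global field (illustrative shape)
y_dilated = dilated(x)                # 1 x 16 x 64 x 64, fixed widened grid
y_deform = deform(x, offset_pred(x))  # same shape, but sampling locations are learned
```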
“…We use BO for each lead time independently, meaning we potentially obtain a unique set of hyperparameters for each lead time. We could also have trained a single aggregate model for all lead times with an appropriate lead-time encoding, similar to other studies 13. However, this tends to yield similar performance 13 and is mostly relevant when training time is a bottleneck.…”
Citation type: mentioning
Confidence: 99%
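Assuming BO here stands for Bayesian optimization, the sketch below illustrates the per-lead-time search this statement describes, using scikit-optimize's gp_minimize; the lead times, search space, and validation_loss stub are hypothetical placeholders rather than the study's actual configuration.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

LEAD_TIMES = [6, 12, 18, 24]  # forecast lead times in hours (illustrative)

# Hypothetical hyperparameter search space.
search_space = [
    Real(1e-4, 1e-2, prior="log-uniform", name="learning_rate"),
    Integer(2, 8, name="num_layers"),
]

def validation_loss(params, lead_time):
    """Hypothetical stand-in: train a model for this lead time and return its validation error."""
    learning_rate, num_layers = params
    # ...train and evaluate here; the constant return keeps the sketch self-contained.
    return 0.0

# One independent BO run per lead time, so each horizon gets its own hyperparameters.
best_per_lead_time = {}
for lt in LEAD_TIMES:
    result = gp_minimize(lambda p: validation_loss(p, lt), search_space, n_calls=25)
    best_per_lead_time[lt] = dict(zip(["learning_rate", "num_layers"], result.x))
```

Running one search per horizon trades extra training runs for lead-time-specific hyperparameters, which is the same trade-off against a single lead-time-encoded model that the statement points out.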