2013
DOI: 10.1155/2013/485913
Using Ensemble of Neural Networks to Learn Stochastic Convection Parameterizations for Climate and Numerical Weather Prediction Models from Data Simulated by a Cloud Resolving Model

Abstract: A novel approach based on the neural network (NN) ensemble technique is formulated and used for development of a NN stochastic convection parameterization for climate and numerical weather prediction (NWP) models. This fast parameterization is built based on learning from data simulated by a cloud-resolving model (CRM) initialized with and forced by the observed meteorological data available for 4-month boreal winter from November 1992 to February 1993. CRM-simulated data were averaged and processed to implici…
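The abstract describes emulating a convection parameterization with an ensemble of neural networks, where the ensemble mean provides the deterministic response and the member spread supplies a stochastic component. A minimal toy sketch of that idea (not the authors' actual code; network sizes, learning rate, and the target function are illustrative assumptions) could look like:

```python
# Toy sketch, NOT the paper's implementation: an ensemble of small tanh MLPs
# fit to the same data from different random initializations. The ensemble
# mean acts as the deterministic emulation; the member spread is the raw
# material for a stochastic perturbation. All hyperparameters are illustrative.
import numpy as np

def train_mlp(X, y, n_hid=8, lr=0.05, epochs=2000, seed=0):
    """Fit a tiny one-hidden-layer tanh MLP with plain full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(0, 0.5, (n_hid, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # hidden activations
        pred = h @ W2 + b2                  # linear output layer
        err = pred - y                      # residual
        gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h**2)      # backprop through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def mlp_predict(params, X):
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2

# Stand-in "parameterization" target: learn y = sin(3x) from noisy samples.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X) + 0.05 * rng.normal(size=(200, 1))

ensemble = [train_mlp(X, y, seed=s) for s in range(5)]
preds = np.stack([mlp_predict(p, X) for p in ensemble])  # shape (5, 200, 1)
mean = preds.mean(axis=0)     # deterministic ensemble emulation
spread = preds.std(axis=0)    # per-input disagreement, usable as stochastic noise scale
```

In the paper's setting the inputs and outputs would be CRM-derived atmospheric profiles rather than a scalar toy function, but the ensemble-mean/ensemble-spread structure is the same.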


Cited by 125 publications (127 citation statements)
References 20 publications
“…On the other hand, n_hid = 5 makes more than twice as much error in the 64-step prediction than n_hid = 128 does. These results confirm that just fitting the apparent source does not ensure long-term accuracy, which requires using more neurons than Krasnopolsky et al. [2013] suggest.…”
Section: Sensitivity to Hyper-parameters (supporting, confidence: 68%)
See 4 more Smart Citations
“…For fixed T, increasing the number of hidden neurons hardly improves the R² scores for the apparent heat sources. Even networks with n_hid = 5 can explain 60% and 70% of the variance of the apparent sinks of moisture and temperature, respectively, as also found by Krasnopolsky et al. [2013]. On the other hand, n_hid = 5 makes more than twice as much error in the 64-step prediction than n_hid = 128 does.…”
Section: Sensitivity to Hyper-parameters (supporting, confidence: 57%)
See 3 more Smart Citations