2018
DOI: 10.1016/j.ijforecast.2018.01.005
Combining predictive distributions for the statistical post-processing of ensemble forecasts

Abstract: Statistical post-processing techniques are now widely used to correct systematic biases and errors in calibration of ensemble forecasts obtained from multiple runs of numerical weather prediction models. A standard approach is the ensemble model output statistics (EMOS) method, a distributional regression approach where the forecast distribution is given by a single parametric law with parameters depending on the ensemble members. Choosing an appropriate parametric family for the weather variable of interest i…
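The EMOS approach summarized in the abstract can be sketched as follows: a Gaussian predictive distribution N(a + b·(ensemble mean), c + d·(ensemble variance)) whose coefficients are estimated by minimizing the mean CRPS over training cases. This is a minimal illustrative sketch on synthetic data; the parameter names a, b, c, d and the synthetic biased ensemble are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def fit_emos(ens, obs):
    """Fit Gaussian EMOS N(a + b*ens_mean, c + d*ens_var) by mean-CRPS minimization."""
    m, v = ens.mean(axis=1), ens.var(axis=1)

    def loss(p):
        a, b, c, d = p
        sigma = np.sqrt(np.maximum(c + d * v, 1e-6))  # keep variance positive
        return crps_normal(a + b * m, sigma, obs).mean()

    res = minimize(loss, x0=[0.0, 1.0, 1.0, 0.1], method="Nelder-Mead")
    return res.x

# Synthetic training data: a warm-biased, underdispersive 10-member ensemble
rng = np.random.default_rng(0)
truth = rng.normal(15.0, 3.0, size=500)
ens = truth[:, None] + 1.5 + rng.normal(0.0, 0.8, size=(500, 10))  # +1.5 bias

a, b, c, d = fit_emos(ens, truth)  # a should absorb most of the +1.5 bias
```

In this toy setting, the fitted intercept corrects the systematic bias and the fitted variance coefficients widen the underdispersive ensemble spread.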

Cited by 56 publications (56 citation statements).
References 52 publications (90 reference statements).
“…Having to describe the distribution of the target variable in parametric techniques is a nontrivial task. For temperature, a Gaussian distribution is a good approximation, but for other variables, such as wind speed or precipitation, finding a distribution that fits the data is a substantial challenge (e.g., Taillardat et al., 2016; Baran and Lerch, 2018). Ideally, a machine learning algorithm would learn to predict the full probability distribution rather than distribution parameters only.…”
Section: Discussion (mentioning)
confidence: 99%
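The difficulty of choosing a parametric family can be illustrated with a small sketch: on synthetic right-skewed, non-negative data resembling wind speed, a Gamma fit clearly beats a Gaussian fit in log-likelihood. The data-generating process and the candidate distributions here are illustrative assumptions, not taken from the cited studies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
wind = rng.gamma(shape=2.0, scale=2.5, size=2000)  # skewed, non-negative "wind speed"

# Fit both families by maximum likelihood and compare total log-likelihood
mu, sd = stats.norm.fit(wind)
shape, loc, scale = stats.gamma.fit(wind, floc=0.0)  # fix location at zero
ll_norm = stats.norm.logpdf(wind, mu, sd).sum()
ll_gamma = stats.gamma.logpdf(wind, shape, loc, scale).sum()
```

For such data the Gaussian places probability mass on negative values and misses the skewness, which is exactly why variables like wind speed or precipitation call for other families (or for methods that learn the distribution directly).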
“…As suggested by Gneiting and Ranjan (), to assess the statistical significance of the differences between the verification scores, we make use of the Diebold–Mariano (DM) test of equal predictive performance, as it allows us to account for the temporal dependencies in the forecast errors (Diebold and Mariano, ). Baran and Lerch () give more details about the DM test. Further, confidence intervals for mean score values and mean score differences are obtained with the help of 2,000 block bootstrap samples using the stationary bootstrap scheme with mean block length according to Politis and Romano ().…”
Section: Ensemble Model Output Statistics (mentioning)
confidence: 99%
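The Diebold–Mariano test mentioned above compares two forecasting methods through their per-time score differences, with a variance estimate that accounts for serial correlation. Below is a minimal sketch of the classical DM statistic with an autocovariance estimate truncated at lag h−1; the function name and the synthetic score series are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def diebold_mariano(s1, s2, h=1):
    """DM test of equal predictive performance from per-time score series s1, s2.

    Uses a long-run variance estimate truncated at lag h-1, in the spirit of
    Diebold and Mariano (1995). Returns the DM statistic and a two-sided
    p-value from the standard normal reference distribution.
    """
    d = np.asarray(s1, float) - np.asarray(s2, float)
    n = d.size
    dbar = d.mean()
    # long-run variance: gamma_0 + 2 * sum_{k=1}^{h-1} gamma_k
    var = np.sum((d - dbar) ** 2) / n
    for k in range(1, h):
        var += 2 * np.sum((d[k:] - dbar) * (d[:-k] - dbar)) / n
    dm = dbar / np.sqrt(var / n)
    p = 2 * stats.norm.sf(abs(dm))
    return dm, p

# Synthetic daily CRPS series: method 1 is consistently worse by ~0.3
rng = np.random.default_rng(2)
base = rng.normal(1.0, 0.2, size=365)
s1 = base + 0.3 + rng.normal(0.0, 0.1, size=365)
s2 = base
dm, p = diebold_mariano(s1, s2)  # large positive dm, tiny p
```

A positive statistic indicates that the first method has the larger (worse) mean score; in practice the lag truncation h is tied to the forecast horizon.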
“…To recalibrate, authors recommend transforming the aggregated forecast distribution. The Spread-adjusted Linear Pool (SLP) (Berrocal et al., 2007; Glahn et al., 2009; Kleiber et al., 2011) transforms each individual distribution before combining; the Beta Linear Pool (BLP) applies a beta transform to the final combined distribution (Ranjan and Gneiting, 2010; Gneiting et al., 2013); and a more flexible infinite-mixture version of the BLP (Bassetti et al., 2018), a mixture of normal densities (Baran and Lerch, 2018), and an empirical cumulative distribution function approach (Garratt et al., 2019) also aim to recalibrate forecasts made from a combination of predictive densities.…”
Section: Recent Work In Combination Forecasting (mentioning)
confidence: 99%
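The combination schemes surveyed above can be sketched for the special case of Gaussian components: the traditional linear pool is a weighted mixture of the component CDFs, the SLP rescales each component's spread by a factor c before combining, and the BLP applies a beta transform to the pooled CDF. This is a simplified sketch under the assumption of Gaussian components; in practice the weights, the spread factor, and the beta parameters would be estimated from training data.

```python
import numpy as np
from scipy.stats import beta, norm

def linear_pool_cdf(y, mus, sigmas, weights):
    """Traditional linear pool: weighted mixture of Gaussian component CDFs."""
    return float(np.sum(weights * norm.cdf(y, loc=mus, scale=sigmas)))

def slp_cdf(y, mus, sigmas, weights, c):
    """Spread-adjusted linear pool (Gaussian case): each component's spread
    is scaled by c > 0 before the components are combined."""
    return float(np.sum(weights * norm.cdf(y, loc=mus, scale=c * sigmas)))

def blp_cdf(y, mus, sigmas, weights, alpha, beta_par):
    """Beta linear pool: beta transform applied to the pooled CDF."""
    return float(beta.cdf(linear_pool_cdf(y, mus, sigmas, weights),
                          alpha, beta_par))

# Two equally weighted Gaussian components as a toy example
mus = np.array([-1.0, 1.0])
sigmas = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
```

With alpha = beta = 1 the beta transform is the identity and the BLP reduces to the traditional linear pool, which is one way to see that the BLP strictly generalizes it.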