2022
DOI: 10.1175/mwr-d-21-0150.1
Machine Learning Methods for Postprocessing Ensemble Forecasts of Wind Gusts: A Systematic Comparison

Abstract: Postprocessing ensemble weather predictions to correct systematic errors has become a standard practice in research and operations. However, only a few recent studies have focused on ensemble postprocessing of wind gust forecasts, despite its importance for severe weather warnings. Here, we provide a comprehensive review and systematic comparison of eight statistical and machine learning methods for probabilistic wind gust forecasting via ensemble postprocessing, which can be divided into three groups: State of the…

Cited by 40 publications (58 citation statements)
References 51 publications (57 reference statements)
“…An important tuning parameter is the dimension of the station embeddings. While Rasp & Lerch (2018) use two-dimensional embeddings, subsequent research demonstrated the usefulness of choosing larger values (e.g., Bremnes, 2020;Schulz & Lerch, 2022). We chose an embedding dimension of 15 for all models.…”
Section: Discussion
confidence: 99%
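The station embedding described above is, at its core, a learnable lookup table mapping each station ID to a vector that is trained jointly with the network weights. The following is a minimal illustrative sketch of the lookup step only; the station count and initialization are assumptions for illustration, not taken from the cited papers (only the embedding dimension of 15 is).

```python
import numpy as np

# Illustrative sketch of a station embedding table (not the authors' code).
# Each of n_stations gets a vector of length embed_dim; in a real model these
# vectors are updated by gradient descent alongside the network weights.
rng = np.random.default_rng(0)

n_stations = 175   # assumed number of stations, for illustration only
embed_dim = 15     # embedding dimension chosen in the cited study

embeddings = rng.normal(scale=0.05, size=(n_stations, embed_dim))

def embed(station_ids):
    """Map integer station IDs to their embedding vectors."""
    return embeddings[np.asarray(station_ids)]

batch = embed([3, 17, 3])
print(batch.shape)  # (3, 15)
```

In a framework such as PyTorch or Keras, the same table would be a standard embedding layer whose output is concatenated with the other input features.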
“…Supplementary Figure 3 shows the permutation-based feature importance of the 12 most important predictors of the DRN+ConvAE and the DRN model. To compute feature importances, we follow Rasp & Lerch (2018) and Schulz & Lerch (2022), and measure the decrease in terms of the CRPS in the test set when randomly permuting a single input feature, using the mean CRPS of the respective model based on unpermuted input features as reference.…”
Section: A3 Feature Importance
confidence: 99%
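The permutation-importance procedure quoted above can be sketched as follows. This is a toy stand-in, not the authors' code: the model, data, and the closed-form Gaussian CRPS are illustrative assumptions, with importance measured as the increase in mean test-set CRPS when one input column is shuffled.

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma) at observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def permutation_importance(predict, X, y, rng):
    """CRPS increase per feature when that feature's column is shuffled."""
    mu, sigma = predict(X)
    baseline = crps_gaussian(mu, sigma, y).mean()
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        mu_p, sigma_p = predict(Xp)
        scores.append(crps_gaussian(mu_p, sigma_p, y).mean() - baseline)
    return np.array(scores)

# Toy example: feature 0 carries the signal, feature 1 is unused noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = X[:, 0] + 0.1 * rng.normal(size=500)
predict = lambda X: (X[:, 0], np.full(X.shape[0], 0.5))
imp = permutation_importance(predict, X, y, rng)
print(imp.round(3))  # importance of feature 0 is large, feature 1 near zero
```

Permuting the informative column inflates the CRPS markedly, while permuting a column the model ignores leaves it unchanged, which is exactly the ranking signal the quoted studies exploit.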
“…If a forecast is well-calibrated, the empirical coverage should resemble the nominal coverage; the smaller the length of the PI, the sharper the forecast. The nominal level of the PIs is a tuning parameter for evaluation; here, we choose the specific level of 19/21 ≈ 90.48% from the application in Schulz and Lerch (2022), which forms the basis of our case study in Section 5. Finally, we measure accuracy based on the mean forecast error of the median derived from the predictive distribution, FE(F, y) = median(F) − y, y ∈ R, which is positive in case of overforecasting and negative for underforecasting.…”
Section: Assessing Predictive Performance
confidence: 99%
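The evaluation quantities in the quote above are straightforward to compute. A brief sketch under assumed toy data follows; the 19/21 ≈ 90.48% level arises because, for a calibrated 20-member ensemble, the observation falls inside the ensemble range with probability (m − 1)/(m + 1).

```python
import numpy as np

# Illustrative sketch (not the paper's code): coverage and sharpness of the
# central prediction interval spanned by a 20-member ensemble, plus the
# forecast error of the predictive median, FE = median(F) - y.

def pi_coverage_and_length(ens, y):
    """Empirical coverage and mean length of the ensemble-range PI."""
    lo, hi = ens.min(axis=1), ens.max(axis=1)
    coverage = np.mean((y >= lo) & (y <= hi))
    length = np.mean(hi - lo)
    return coverage, length

def median_forecast_error(ens, y):
    """Positive values indicate overforecasting, negative underforecasting."""
    return np.median(ens, axis=1) - y

rng = np.random.default_rng(2)
ens = rng.normal(size=(2000, 20))  # toy 20-member ensemble forecasts
y = rng.normal(size=2000)          # observations from the same distribution
cov, length = pi_coverage_and_length(ens, y)
fe = median_forecast_error(ens, y)
print(round(cov, 2))  # close to the nominal 19/21 ≈ 0.90 for calibrated data
```

Because ensemble and observations are drawn from the same distribution here, the empirical coverage lands near the nominal level; a miscalibrated forecast would show a clear gap.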
“…Using theoretical arguments, simulation experiments and a case study on probabilistic wind gust forecasting, we systematically investigate and compare aggregation methods for probabilistic forecasts based on deep ensembles, with different ways to characterize the corresponding forecast distributions. This study is motivated by and based on our work in Schulz and Lerch (2022), where we use ensembles of NNs to statistically postprocess probabilistic forecasts for the speed of wind gusts and propose a common framework of NN-based probabilistic forecasting methods with different types of forecast distributions. In the following, we apply a two-step procedure by first generating an ensemble of probabilistic forecasts and then aggregating them into a single final forecast, which matches the typical workflow of forecast combination from a forecasting perspective.…”
Section: Introduction
confidence: 99%
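The two-step procedure described above, first generating an ensemble of probabilistic forecasts and then aggregating them, can be illustrated with two common aggregation schemes. The member parameters below are made up for illustration; this is not the paper's implementation, and the paper compares aggregation methods in more depth than this sketch shows.

```python
import numpy as np
from scipy.stats import norm

# Illustrative sketch: three "member networks" each issue a Gaussian forecast;
# the forecasts are then combined by (a) a linear pool (equally weighted
# mixture of CDFs) or (b) quantile averaging (Vincentization).
members = [(2.0, 1.0), (2.5, 1.2), (1.8, 0.9)]  # assumed (mu, sigma) per member

def linear_pool_cdf(x):
    """Equally weighted mixture of the member CDFs at x."""
    return np.mean([norm.cdf(x, mu, s) for mu, s in members], axis=0)

def vincentized_quantile(p):
    """Average of the member quantile functions at level p."""
    return np.mean([norm.ppf(p, mu, s) for mu, s in members])

median = vincentized_quantile(0.5)
print(round(median, 2))  # 2.1, the average of the member means
```

The two schemes generally yield different forecast distributions: the linear pool tends to be wider (it mixes the members), while quantile averaging preserves the members' shape more closely, which is one of the trade-offs such a comparison has to weigh.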
“…Nevertheless, ensembles have to be post-processed (Hemri et al., 2014; Steininger et al., 2020) to correct model biases and predict unmodelled variables. Often, summarized ensemble statistics are targeted by post-processing (Schulz & Lerch, 2021), predicting either the parameters (Gneiting et al., 2005; Raftery et al., 2005; Rasp & Lerch, 2018) or the cumulative distribution function (Baran & Lerch, 2018; Bremnes, 2020; Scheuerer et al., 2020; Taillardat et al., 2016) of the target distribution. As a consequence, the member-wise multivariate and spatially coherent representation of the ensemble forecast is lost.…”
Section: Introduction
confidence: 99%