2017
DOI: 10.1175/mwr-d-16-0487.1

Sample Stratification in Verification of Ensemble Forecasts of Continuous Scalar Variables: Potential Benefits and Pitfalls

Abstract: In the verification field, stratification is the process of dividing the sample of forecast–observation pairs into quasi-homogeneous subsets, in order to learn more about how forecasts behave under specific conditions. A general framework for stratification is presented for the case of ensemble forecasts of continuous scalar variables. A distinction is made between forecast-based, observation-based, and external-based stratification, depending on the criterion on which the sample is stratified. The formalism is app…
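
As a rough illustration of the framework the abstract describes, the sketch below builds the three kinds of stratification criteria for a sample of ensemble forecast–observation pairs. It is a minimal synthetic example, not code from the paper; all names, thresholds, and the tercile-based binning are illustrative assumptions.

```python
# Minimal sketch of forecast-, observation-, and external-based stratification.
# Synthetic, illustrative data -- not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 20                                   # sample size, ensemble size
obs = rng.normal(size=n)                          # observations
ens = obs[:, None] + rng.normal(size=(n, m))      # ensemble forecasts
season = rng.choice(["DJF", "JJA"], size=n)       # external covariate (e.g., season)

ens_mean = ens.mean(axis=1)
strata = {
    # criterion computed from the forecasts themselves
    "forecast-based": np.digitize(ens_mean, np.quantile(ens_mean, [1/3, 2/3])),
    # criterion computed from the observations
    "observation-based": np.digitize(obs, np.quantile(obs, [1/3, 2/3])),
    # criterion external to both forecasts and observations
    "external-based": (season == "JJA").astype(int),
}
for name, s in strata.items():
    print(f"{name}: stratum sizes = {np.bincount(s)}")
```

Verification metrics are then computed separately within each stratum, which is what makes the choice of criterion consequential, as the citation statements below discuss.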

Cited by 30 publications (29 citation statements) · References: 41 publications
“…Forecast stratification (Broecker, 2008), the process of dividing the whole dataset into different subsets and computing verification metrics for each subset, has been introduced as a way to better diagnose where the deficiencies of a forecast system lie. Bellier et al. (2017) compared different strategies for the stratification criterion, based on either the observations or the forecasts, and justified the use of a forecast-based stratification criterion for verification rank histograms. Indeed, they showed that conditioning the rank histogram on the observations is likely to lead to erroneous conclusions about the real behaviour of ensemble forecasts.…”
Citation type: mentioning · Confidence: 99%
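
The pitfall this statement describes can be reproduced with a small simulation. The sketch below is entirely synthetic and illustrative (not taken from any of the cited papers): it generates a statistically consistent ensemble and computes the verification rank histogram within the top tercile of two stratifying variables. The observation-based stratum shows a strongly sloped histogram even though the ensemble is consistent, while the forecast-based stratum stays close to flat.

```python
# Demo: observation-based stratification skews the rank histogram of a
# consistent ensemble; forecast-based stratification largely does not.
# Synthetic, illustrative data -- not from the cited papers.
import numpy as np

rng = np.random.default_rng(42)
n, m = 50_000, 20
signal = rng.normal(0.0, 3.0, size=n)            # predictable part of the truth
obs = signal + rng.normal(size=n)                # observation = signal + noise
ens = signal[:, None] + rng.normal(size=(n, m))  # members drawn like the obs

# verification rank of each observation within its ensemble (0..m)
ranks = (ens < obs[:, None]).sum(axis=1)

def rank_hist_top_tercile(criterion):
    """Relative rank frequencies within the top tercile of `criterion`."""
    sel = criterion > np.quantile(criterion, 2 / 3)
    return np.bincount(ranks[sel], minlength=m + 1) / sel.sum()

print("forecast-based:   ", np.round(rank_hist_top_tercile(ens.mean(axis=1)), 3))
print("observation-based:", np.round(rank_hist_top_tercile(obs), 3))
# The observation-based histogram piles up at high ranks, which would
# (wrongly) suggest the ensemble is biased low for large observed values.
```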
“…Note that the stratification should be based on the forecasts (e.g., the raw forecast mean, as used here) rather than on the observations, to ensure reliability within each stratum (Bellier et al. 2017; Lerch et al. 2017). More details of the verification metrics are described in appendix A and the related references (Wilks 2011).…”
Section: B. Experimental Design and Verification Methods · Citation type: mentioning · Confidence: 99%
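
As a sketch of what per-stratum verification along these lines might look like, the snippet below scores synthetic forecasts with the standard ensemble CRPS estimator, E|X − y| − ½ E|X − X′|, and averages it within terciles of the raw ensemble mean. The data, sizes, and tercile binning are illustrative assumptions, not the exact setup of the quoted study.

```python
# Sketch: average CRPS within forecast-based strata (terciles of the
# raw ensemble mean). Synthetic, illustrative data.
import numpy as np

rng = np.random.default_rng(1)
n, m = 3000, 10
signal = rng.normal(0.0, 2.0, size=n)
obs = signal + rng.normal(size=n)
ens = signal[:, None] + rng.normal(size=(n, m))

def crps_ensemble(y, x):
    """Per-case ensemble CRPS estimator: E|X - y| - 0.5 * E|X - X'|."""
    t1 = np.abs(x - y[:, None]).mean(axis=1)
    t2 = np.abs(x[:, :, None] - x[:, None, :]).mean(axis=(1, 2))
    return t1 - 0.5 * t2

crps = crps_ensemble(obs, ens)
ens_mean = ens.mean(axis=1)
strata = np.digitize(ens_mean, np.quantile(ens_mean, [1/3, 2/3]))
for k in range(3):
    sel = strata == k
    print(f"stratum {k} (n={sel.sum()}): mean CRPS = {crps[sel].mean():.3f}")
```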
“…First, the PEARP reforecasts consist of 10 members, including 1 control member, and were issued in 2018. The reforecasts (Boisserie et al., 2016) are based on a homogeneous model configuration identical to the operational release of 5 December 2017 (same resolution and physical parameterizations), but they include only physical perturbations and no perturbation of the initial state, unlike the operational PEARP forecasts. The initial states are built with the ERA-Interim reanalysis (Dee et al., 2011) for the atmospheric variables and with the 24 h stand-alone coupled forecasts of the SURFEX/ARPEGE model for the Earth parameters.…”
Section: Reforecasts Used for Training · Citation type: mentioning · Confidence: 99%