2010
DOI: 10.1002/qj.656
A new equitable score suitable for verifying precipitation in numerical weather prediction

Abstract: A new equitable score is developed for monitoring precipitation forecasts and for guiding forecast system development. To accommodate the difficult distribution of precipitation, the score measures the error in 'probability space' through use of the climatological cumulative distribution function. For sufficiently skilful forecasting systems, the new score is less sensitive to sampling uncertainty than other established scores. It is therefore called here the 'Stable Equitable Error in Probability Space' (SEEPS)…
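The abstract's central device, measuring precipitation error in 'probability space' by passing values through the climatological cumulative distribution function, can be illustrated with a short sketch. The code below shows only that general transform, not the paper's SEEPS definition (SEEPS itself is a categorical, equitable score built on climatologically defined classes); the names empirical_cdf, error_in_probability_space and climatology_mm, and the sample climatology values, are hypothetical and chosen purely for illustration.

```python
# Minimal sketch (not the paper's exact SEEPS formulation): map both the
# forecast and the observation through the climatological cumulative
# distribution function (CDF) and compare the resulting probabilities.
import numpy as np

def empirical_cdf(climatology):
    """Return a function mapping a precipitation amount to its climatological
    cumulative probability, estimated from a sample of past observations."""
    sample = np.sort(np.asarray(climatology, dtype=float))
    n = sample.size
    def cdf(x):
        # Fraction of the climatological sample not exceeding x.
        return np.searchsorted(sample, x, side="right") / n
    return cdf

def error_in_probability_space(forecast, observation, cdf):
    """Absolute difference between forecast and observation after both are
    mapped into probability space with the climatological CDF (a LEPS-style
    measure; SEEPS builds an equitable, categorical score on this idea)."""
    return abs(cdf(forecast) - cdf(observation))

# Hypothetical example: 24 h precipitation climatology (mm) at one station.
climatology_mm = [0.0, 0.0, 0.0, 0.1, 0.3, 0.5, 1.2, 2.0, 4.5, 8.0, 15.0, 30.0]
cdf = empirical_cdf(climatology_mm)

# The same error in millimetres counts for more where such totals are rare.
print(error_in_probability_space(forecast=6.0, observation=1.0, cdf=cdf))
```

Roughly speaking, the transform weights an error by how unusual the values are relative to the local climate, which is what lets the abstract's "difficult distribution of precipitation" be handled in a single framework.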

Cited by 68 publications (64 citation statements). References 24 publications (38 reference statements).

Citation statements from citing publications:
“…Thus the importance of using benchmarks that are known and understood is essential in assessing how 'good' forecasts are (Seibert, 2001; Garrick et al., 1978; Martinec and Rango, 1989; Murphy and Winkler, 1987; Schaefli and Gupta, 2007). There is a wealth of literature on comparing models or forecasts, developing techniques to evaluate skill and on the use of benchmarks in hydro-meteorological forecasting (Dawson et al., 2007; Ewen, 2011; Gordon et al., 2000; Nicolle et al., 2013; Pappenberger and Beven, 2004; Pappenberger et al., 2011a; Rodwell et al., 2010; Rykiel, 1996). Although there is surprisingly little consensus on which benchmarks are most suited for which application, benchmark suitability has been found to depend on the model structure used in the forecasting system, the season, catchment characteristics, river regime and flow conditions.…”
Section: Which Benchmark? (classified as mentioning; confidence: 99%)
“…An example of monitoring progress in NWP, citing resolution as one aspect of improvements in skill scores, is shown in Fig. 10 of Rodwell et al. (2010). A more general, high-level review of the benefits of resolution in NWP models is provided by Wedi (2014).…”
Classified as mentioning; confidence: 99%
“…See for example Table 6 in Zhang et al. (2012a) for a non-exhaustive list of such parameters. Ongoing research continuously adds to such procedures (e.g., Rodwell et al., 2010; Ferro and Stevenson, 2011). Similar procedures may be used with CCMM to evaluate the improvement provided by data assimilation in a forecasting mode (e.g., see case studies in Sects.…”
Section: Verification of the Data Assimilation Process (classified as mentioning; confidence: 99%)