2017
DOI: 10.1002/qj.3115
Measuring forecast performance in the presence of observation error

Abstract: A new framework is introduced for measuring the performance of probability forecasts when the true value of the predictand is observed with error. In these circumstances, proper scoring rules favour good forecasts of observations rather than of truth and yield scores that vary with the quality of the observations. Proper scoring rules thus can favour forecasters who issue worse forecasts of the truth and can mask real changes in forecast performance if observation quality varies over time. Existing approaches …
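The effect described in the abstract can be illustrated numerically. The following is a minimal sketch, not taken from the paper: a proper scoring rule (here the log score) is evaluated against observations contaminated with additive Gaussian error, and it rewards a forecaster who targets the observation distribution over one who targets the distribution of the truth. The distributions, error level and random seed are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's method): a proper score
# evaluated against error-contaminated observations favours forecasts of the
# observations rather than forecasts of the truth.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000
sigma_obs = 0.8                                    # assumed observation-error std. dev.

truth = rng.standard_normal(n)                     # truth ~ N(0, 1)
obs = truth + sigma_obs * rng.standard_normal(n)   # observation = truth + error

# Forecaster A issues the correct distribution of the TRUTH: N(0, 1).
# Forecaster B issues the correct distribution of the OBSERVATION: N(0, 1 + sigma_obs^2).
# Both are verified against the observations with the (negatively oriented) log score.
score_A = -norm.logpdf(obs, loc=0.0, scale=1.0).mean()
score_B = -norm.logpdf(obs, loc=0.0, scale=np.sqrt(1.0 + sigma_obs**2)).mean()

print(f"mean log score vs observations, forecast of truth       : {score_A:.3f}")
print(f"mean log score vs observations, forecast of observations: {score_B:.3f}")
# Forecaster B obtains the lower (better) score even though A is the better
# forecast of the truth: verifying against noisy observations rewards the wrong target.
```

Under these assumptions the forecaster who targets the observations obtains the better mean score, which is precisely the distortion the proposed framework is designed to correct.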

Cited by 28 publications (43 citation statements) | References 35 publications (67 reference statements)
“…A potential explanation arises from differences in the effect of having neglected uncertainty of the verification data. Numerous studies have pointed out the need to account for observation uncertainty in the verification of ensembles (e.g., Saetra et al., 2004; Yamaguchi et al., 2016; Ferro, 2017). The error of representativeness of 2 m temperature observations at stations is expected to be significantly larger than the analysis uncertainty of 850 hPa temperature.…”
Section: Results
confidence: 99%
“…; Bellprat et al.; Ferro). However, for this study, while absolute values of skill were dependent on the reference reanalysis, the forecast improvements gained by applying stochastic methods for model uncertainty estimation were more or less independent of that, providing robust indicators of skill increase.…”
Section: Discussion
confidence: 99%
“…The idea is to use climate models as a third-party source of information to infer the statistics of observational errors. The rationale behind this argument is simple: standard skill scores used in forecast verification (e.g., correlation, root mean square error, Brier score) are sensitive to errors in both the forecast and the verification data (87,88). If one particular observational verification product is corrupted with larger errors than other products, this observational product should systematically stand out compared to others, when inspecting the forecast skill scores of model predictions.…”
Section: Case 4 Evaluation Of Observational Products
confidence: 99%
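The rationale in the statement above can be sketched with a small numerical example (assumptions only, not code from the cited study): the same forecast is verified against two hypothetical observational products, and the product with larger errors systematically stands out in a standard skill score such as the RMSE. The product names, error magnitudes and seed are invented for illustration.

```python
# Minimal sketch (assumed setup): skill scores are sensitive to errors in the
# verification data, so a noisier observational product degrades the apparent
# skill of an unchanged forecast and can thereby be flagged.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
truth = rng.standard_normal(n)

forecast = truth + 0.5 * rng.standard_normal(n)    # one fixed forecast (error sd 0.5)
product_a = truth + 0.2 * rng.standard_normal(n)   # hypothetical low-error product
product_b = truth + 0.7 * rng.standard_normal(n)   # hypothetical high-error product

def rmse(f, o):
    return np.sqrt(np.mean((f - o) ** 2))

print(f"RMSE vs truth    : {rmse(forecast, truth):.3f}")      # ~0.50
print(f"RMSE vs product A: {rmse(forecast, product_a):.3f}")  # ~sqrt(0.5^2 + 0.2^2) ≈ 0.54
print(f"RMSE vs product B: {rmse(forecast, product_b):.3f}")  # ~sqrt(0.5^2 + 0.7^2) ≈ 0.86
# The forecast is identical in all three lines; only the verification data changed,
# so the jump in RMSE against product B points to errors in that product.
```

The design choice here is simply that independent errors add in quadrature, so the apparent loss of skill against product B reflects the verification data rather than the forecast.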