2007
DOI: 10.5194/hess-11-1267-2007

Verification tools for probabilistic forecasts of continuous hydrological variables

Abstract: In the present paper we describe some methods for verifying and evaluating probabilistic forecasts of hydrological variables. We propose an extension to continuous-valued variables of a verification method originated in the meteorological literature for the analysis of binary variables, and based on the use of a suitable cost-loss function to evaluate the quality of the forecasts. We find that this procedure is useful and reliable when it is complemented with other verification tools, borrowed from t…

Cited by 308 publications (227 citation statements)
References 33 publications
“…Most current diagnostic efforts rely on evaluation of flow time series (Maurer et al., 2007; Moss, 1979a, b), which represent only a small component of the information within a model. New metrics are needed that can evaluate the ability of models to represent non-stationary water systems (Laio and Tamea, 2007; Montanari and Koutsoyiannis, 2012; Sikorska et al., 2013). Finally, observations will be essential to identify the presence of events and trends that were not predicted.…”
Section: Challenge 3: Uncertainty, Predictability and Observations of …
confidence: 99%
“…Reliability was evaluated with the probability integral transform (PIT; Gneiting et al., 2007; Laio and Tamea, 2007) diagram. The PIT diagram represents the cumulative distribution of the positions of the observation within the distribution of forecast values.…”
Section: Evaluation of Forecast Attributes
confidence: 99%
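
As a rough illustration of how such a PIT diagram can be constructed, the following Python sketch computes the position of each observation within its forecast ensemble and plots the empirical cumulative distribution of those positions against the 1:1 line. The array names and the synthetic data are assumptions made for the example and are not taken from the cited papers.

    # Minimal PIT-diagram sketch for ensemble forecasts (assumed layout:
    # `ensembles` is (n_times, n_members), `observations` is (n_times,)).
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    ensembles = rng.gamma(2.0, 10.0, size=(500, 50))   # synthetic ensemble forecasts
    observations = rng.gamma(2.0, 10.0, size=500)      # synthetic observed flows

    # PIT value = empirical non-exceedance probability of the observation
    # within the forecast ensemble.
    pit = (ensembles < observations[:, None]).mean(axis=1)

    # PIT diagram: empirical CDF of the PIT values; a reliable forecast
    # lies close to the 1:1 line (uniform PIT values).
    pit_sorted = np.sort(pit)
    ecdf = np.arange(1, pit.size + 1) / pit.size
    plt.plot(pit_sorted, ecdf, label="forecast")
    plt.plot([0, 1], [0, 1], "k--", label="perfect reliability")
    plt.xlabel("PIT value")
    plt.ylabel("Cumulative frequency")
    plt.legend()
    plt.show()
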
“…The PIT represents the non-exceedance probability of observed streamflow obtained from the CDF of the ensemble forecast. If the forecast ensemble spread is appropriate and free of bias, then observations will be contained within the forecast ensemble spread, with reliable forecasts having PIT values that follow a uniform distribution between 0 and 1 (Laio and Tamea, 2007).…”
Section: Verification
confidence: 99%
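
A common way to summarise how far PIT values depart from uniformity is a Kolmogorov-Smirnov test against the uniform distribution; the sketch below is only an illustrative check (it reuses the `pit` array from the sketch above and assumes SciPy is available), not a procedure prescribed by the cited papers.

    # Illustrative uniformity check on the PIT values from the previous sketch.
    from scipy import stats

    ks_stat, p_value = stats.kstest(pit, "uniform")  # H0: PIT ~ Uniform(0, 1)
    print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
    # A small p-value suggests the ensemble is biased, or too narrow/too wide.
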
“…We use PIT plots for verification of the reliability and robustness of the forecast probability distributions, to assess whether there are biases in the forecasts, or whether the forecast probability distributions are too wide or too narrow (Laio and Tamea, 2007). Robustness is also assessed by plotting forecast quantile ranges and observed flows against the forecast median (Figure 3d and Figure 4d) and chronologically (Figure 3e and Figure 4e, for Jhelum at Mangla and Indus at Tarbela respectively). These show that BJP forecasts reasonably account for the range of observed variability for both locations.…”
Section: Performance Diagnostics
confidence: 99%
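
For the robustness diagnostic described above, a plot of forecast quantile ranges and observed flows against the forecast median could be sketched as follows; the 5-95% quantile levels and variable names are illustrative assumptions and do not reproduce the BJP setup of the cited study.

    # Illustrative plot of forecast quantile ranges and observations against
    # the forecast median (reuses `ensembles` and `observations` from the
    # first sketch).
    import numpy as np
    import matplotlib.pyplot as plt

    median = np.median(ensembles, axis=1)
    q05, q95 = np.percentile(ensembles, [5, 95], axis=1)

    order = np.argsort(median)                      # sort by forecast median
    plt.fill_between(median[order], q05[order], q95[order],
                     alpha=0.3, label="5-95% forecast range")
    plt.plot(median[order], median[order], "k-", label="forecast median")
    plt.scatter(median[order], observations[order], s=8, label="observed flow")
    plt.xlabel("Forecast median")
    plt.ylabel("Flow")
    plt.legend()
    plt.show()
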