2009
DOI: 10.1029/2008wr006825

Critical evaluation of parameter consistency and predictive uncertainty in hydrological modeling: A case study using Bayesian total error analysis

Abstract: The lack of a robust framework for quantifying the parametric and predictive uncertainty of conceptual rainfall-runoff (CRR) models remains a key challenge in hydrology. The Bayesian total error analysis (BATEA) methodology provides a comprehensive framework to hypothesize, infer, and evaluate probability models describing input, output, and model structural error. This paper assesses the ability of BATEA and standard calibration approaches (standard least squares (SLS) and weighted least squares (WLS)) to…
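
For readers unfamiliar with the two standard calibration schemes contrasted with BATEA in the abstract, the sketch below illustrates how SLS and WLS objectives differ only in how residuals are weighted. It is a minimal illustration, not the paper's implementation; the variable names (q_obs, q_sim, sigma) and the flow-proportional error standard deviation are assumptions made here for the example.

```python
import numpy as np

def sls_objective(q_obs, q_sim):
    """Standard least squares: every residual gets equal weight."""
    residuals = q_obs - q_sim
    return np.sum(residuals ** 2)

def wls_objective(q_obs, q_sim, sigma):
    """Weighted least squares: residuals scaled by an assumed
    (typically heteroscedastic, e.g. flow-dependent) error std dev."""
    residuals = (q_obs - q_sim) / sigma
    return np.sum(residuals ** 2)

# Illustrative data; error std dev assumed proportional to simulated flow
# (one common heteroscedastic choice, not necessarily the paper's).
q_obs = np.array([1.2, 3.4, 0.8, 5.1])
q_sim = np.array([1.0, 3.9, 0.7, 4.6])
sigma = 0.1 + 0.2 * q_sim
print(sls_objective(q_obs, q_sim), wls_objective(q_obs, q_sim, sigma))
```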

Cited by 318 publications (358 citation statements). References 33 publications.

“…A perfect match of 100 % can then not be achieved if the simulated uncertainty is overestimated. More complex measures, such as a PQQ-plot (Thyer et al., 2009) or a rank histogram, analyse the quantiles of the observed value in the simulated distribution. The generalised rank histogram (McMillan et al., 2010) is an extension of the rank histogram that compares two uncertain distributions so that uncertainty in the observed data can be accounted for.…”
Section: Posterior Analysis of Simulated and Observed Discharges
Mentioning confidence: 99%
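
As a rough sketch of how a predictive QQ (PQQ) plot of the kind referenced above can be built, the snippet below computes, for each time step, the quantile of the observed discharge within an ensemble of simulated values and compares the sorted quantiles to a uniform distribution. The ensemble-based setup and the function names are assumptions for illustration, not the cited authors' code.

```python
import numpy as np
import matplotlib.pyplot as plt

def pqq_values(sim_ensemble, q_obs):
    """Quantile of each observation within the simulated predictive
    distribution, using the empirical CDF of the ensemble members.
    sim_ensemble: array of shape (n_timesteps, n_members)."""
    return np.array([
        np.mean(members <= obs)  # empirical CDF evaluated at the observation
        for members, obs in zip(sim_ensemble, q_obs)
    ])

def pqq_plot(sim_ensemble, q_obs):
    """Sorted observation quantiles against theoretical uniform quantiles."""
    p = np.sort(pqq_values(sim_ensemble, q_obs))
    theoretical = (np.arange(1, len(p) + 1) - 0.5) / len(p)
    plt.plot(theoretical, p, ".", label="observed p-values")
    plt.plot([0, 1], [0, 1], "k--", label="uniform (ideal)")
    plt.xlabel("theoretical quantile of U(0, 1)")
    plt.ylabel("quantile of observation in predictive distribution")
    plt.legend()
    plt.show()
```

Points hugging the 1:1 line suggest reliable predictive bounds, while systematic S-shaped departures typically indicate over- or under-estimated predictive uncertainty.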
“…Uncertainty in discharge data, which has been shown to be sometimes substantial (Di Baldassarre and Montanari, 2009; Pelletier, 1988; Krueger et al., 2010; Petersen-Øverleir et al., 2009) and to influence the calibration of hydrological models (McMillan et al., 2010; Aronica et al., 2006), is usually not accounted for in model evaluation with traditional performance measures. Novel approaches in environmental modelling that include evaluation-data uncertainty in model calibration include Bayesian calibration to an estimated probability-density function of discharge (McMillan et al., 2010), Bayesian calibration with a simplified error model (Huard and Mailhot, 2008; Thyer et al., 2009), fuzzy-rule-based performance measures (Freer et al., 2004) and limits-of-acceptability calibration in GLUE for rainfall-runoff modelling (Liu et al., 2009), flood mapping (Pappenberger et al., 2007), environmental tracer modelling (Page et al., 2007) and flood-frequency estimation (Blazkova and Beven, 2009). Here we explore the limits-of-acceptability GLUE approach applied to flow-duration curves, which could be a way of dealing with some of the effects of nonstationary epistemic errors on the identification of feasible model parameters in real applications (Beven, 2006, 2010; Beven and Westerberg, 2011; Beven et al., 2008).…”
Section: Introduction
Mentioning confidence: 99%
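
The limits-of-acceptability GLUE idea discussed in that excerpt can be summarised in a few lines: a parameter set is retained as behavioural only if its simulated values fall inside observation-derived acceptability limits at every evaluation point (here, points on a flow-duration curve). The sketch below is a simplified illustration under that assumption; run_model, the limit arrays, and the strict all-points-inside criterion are placeholders rather than the cited studies' exact formulations.

```python
import numpy as np

def within_limits(sim_fdc, lower, upper):
    """Behavioural test: every simulated flow-duration-curve point must
    fall inside its observation-derived acceptability limits."""
    return np.all((sim_fdc >= lower) & (sim_fdc <= upper))

def glue_limits_of_acceptability(param_sets, run_model, lower, upper):
    """Retain only parameter sets whose simulated flow-duration curve
    lies within the limits at every evaluation point."""
    behavioural = []
    for theta in param_sets:
        sim_fdc = run_model(theta)  # user-supplied model wrapper
        if within_limits(sim_fdc, lower, upper):
            behavioural.append(theta)
    return behavioural
```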
“…The underlying motivation is to compare two model structures in terms of their deficiencies in representing the underlying processes ("truth"). In contrast to Bayesian approaches to model selection [such as Kavetski et al., 2006; Thyer et al., 2009; Schoups and Vrugt, 2010], where various sources of error can be modelled explicitly, no assumption is made about the cumulative distribution of the residuals, where the residuals are due to unknown measurement errors and model structural deficiencies.…”
Section: Introduction
Mentioning confidence: 99%
“…These methods are powerful and yield useful insights for improving model structures. Assumptions are generally validated with Q-Q plots, mapping observed quantiles to prediction quantiles for a variable of interest [Thyer et al., 2009; Schoups and Vrugt, 2010]. A Q-Q plot verifies whether the prediction quantiles follow the observed quantiles, thereby assessing the applicability of the model assumptions.…”
Section: Introduction
Mentioning confidence: 99%
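
Beyond visual inspection, the uniformity of the Q-Q (or PQQ) p-values can also be checked numerically; one common option (an illustrative choice here, not necessarily what the cited authors use) is a one-sample Kolmogorov-Smirnov test against U(0, 1), as sketched below.

```python
import numpy as np
from scipy import stats

def qq_uniformity_check(p_values):
    """KS test of predictive p-values against the uniform distribution;
    a small p-value flags a departure from the assumed error model."""
    result = stats.kstest(np.asarray(p_values), "uniform")
    return result.statistic, result.pvalue
```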