2019
DOI: 10.5194/hess-23-4011-2019

Benchmarking the predictive capability of hydrological models for river flow and flood peak predictions across over 1000 catchments in Great Britain

Abstract. Benchmarking model performance across large samples of catchments is useful to guide model selection and future model development. Given uncertainties in the observational data we use to drive and evaluate hydrological models, and uncertainties in the structure and parameterisation of models we use to produce hydrological simulations and predictions, it is essential that model evaluation is undertaken within an uncertainty analysis framework. Here, we benchmark the capability of several lumped hydrol…

Cited by 75 publications (99 citation statements) · References 79 publications
“…This model is the MARRMoT version of the Xinanjiang model (Zhao, 1992), modified with a unique feature not seen in any other model in our sample, namely a double parabolic curve that is used to represent the fraction of the catchment that contributes to free drainage (Jayawardena & Zhou, 2000). Nonlinear treatment of saturated area representation has been linked to more flexible model performance within groundwater-dominated catchments before (Lane et al., 2019) and we can speculate that this specific double parabolic formulation gives the model a unique capability that allows it to perform well in a wide variety of catchments. Interestingly, it is difficult to generalize these findings because for every model (including m28) certain catchments can be found where that model is one of the best structures (in terms of efficiency scores during evaluation) and equally catchments can be found where that model is one of the worst options (Figure 9; Perrin et al., 2001, present a similar finding).…”
Section: Synthesis
confidence: 99%
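For context on the contributing-area treatment discussed above, the classic Xinanjiang formulation uses a single parabolic curve for the saturated, runoff-contributing fraction of the catchment; a minimal sketch is given below (notation follows common usage and is not taken from the quoted study; the double parabolic variant of Jayawardena & Zhou, 2000, modifies this curve and is not reproduced here):

\[
  \frac{f}{F} \;=\; 1 - \left(1 - \frac{W'}{W'_{mm}}\right)^{B}, \qquad 0 \le W' \le W'_{mm},
\]

where W' is the point tension-water storage capacity, W'_{mm} its maximum over the catchment, and B > 0 controls how nonlinearly the saturated area grows as the catchment wets up.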
“…A reliable model ideally reproduces different aspects of flooding, including local characteristics such as event magnitude and timing. It has been shown, however, that capturing magnitude and timing is challenging when standard calibration metrics are used individually for parameter estimation (Lane et al., 2019; Brunner and Sikorska, 2018; Mizukami et al., 2019). For example, one widely used metric that is considered integrative compared to others (e.g., bias, correlation) is the Nash-Sutcliffe efficiency (E_NS; Nash and Sutcliffe, 1970), but it is formulated so that its optimal value actually underestimates flow variability (Gupta et al., 2009).…”
confidence: 99%
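For reference on the point that an optimal E_NS understates variability, the efficiency and its decomposition by Gupta et al. (2009) can be written (standard definitions; notation ours) as

\[
  E_{NS} \;=\; 1 - \frac{\sum_{t}\left(Q_{s,t} - Q_{o,t}\right)^{2}}{\sum_{t}\left(Q_{o,t} - \overline{Q_{o}}\right)^{2}}
        \;=\; 2\,\alpha\,r \;-\; \alpha^{2} \;-\; \beta_{n}^{2},
\]

where r is the linear correlation between simulated (Q_s) and observed (Q_o) flows, \alpha = \sigma_s/\sigma_o the ratio of their standard deviations, and \beta_n = (\mu_s - \mu_o)/\sigma_o the normalised bias. For fixed r and \beta_n, E_NS is maximised at \alpha = r \le 1, so the best-scoring simulation underestimates the observed flow variability.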
“…Although still based on empirical models, such estimates may provide a useful benchmark to evaluate the performance of GHMs. Nevertheless, we need to keep in mind that all observational data are uncertain (Beven et al., 2019) and it is therefore essential that model evaluation is undertaken within an uncertainty analysis framework (Lane et al., 2019).…”
Section: Sources of Bias
confidence: 99%
“…First of all, we feel that the community will benefit from developing a common language and clear framework for the treatment of uncertainties in climate impact assessments. For model development, there is a need to explore parameter and structural uncertainty in a more consistent manner, akin to the perturbed parameter ensembles used in climate modelling and the approach used by Lane et al. (2019) for river flow and flood prediction in Great Britain, as opposed to the current "ensembles of opportunity." At the same time, MIPs could be exploited more fully to better understand the mechanisms and processes that lead to different responses in the models and could explain part of the uncertainty in climate impact projections.…”
Section: Recommendations
confidence: 99%