Streamflow data are essential for calibrating continuous rainfall-runoff (RR) models. The quantity and quality of streamflow data can significantly influence parameter calibration and thus model robustness. Most existing sensitivity analyses of the role of streamflow data have used continuous periods to calibrate model parameters, with a minimum of one year, though ideally much longer periods are generally advised. However, in practical model applications, the streamflow series available for calibration may be rather short or non-continuous. This study assesses the sensitivity of continuous RR models to the quantity of information used during calibration when that information is randomly sampled from the observed hydrograph, i.e. using non-continuous calibration periods. Such sampling provides less auto-correlated streamflow information for calibration than continuous records. Two daily RR models, with four and six free parameters, were tested on a sample of 12 basins in the USA to obtain more general conclusions. The results show that, in general, 350 calibration days sampled from a longer data set that includes both dry and wet conditions are sufficient to obtain robust estimates of model parameters. The more parsimonious model requires fewer calibration data to reach stable and robust parameter values. Stable parameter values prove more difficult to reach in the driest catchments.

Key words: rainfall-runoff modelling; calibration; sampling; sensitivity analysis; streamflow data
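The sampling strategy described above can be illustrated with a minimal sketch: draw a fixed number of non-contiguous days from a long daily record and evaluate the calibration objective only on those days. The record, the stand-in model output, and the 10-year length are illustrative assumptions; only the 350-day sample size comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 10-year daily streamflow record (mm/day); in the study this
# would be observed discharge for one of the 12 US basins.
n_days = 3650
observed = rng.gamma(shape=2.0, scale=1.5, size=n_days)

# Randomly sample 350 non-contiguous calibration days from the full record,
# giving less auto-correlated information than a continuous sub-period.
calib_idx = rng.choice(n_days, size=350, replace=False)
calib_idx.sort()

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common RR calibration objective."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# The model is run over the whole period, but the objective function is
# evaluated only on the sampled days (stand-in output shown here):
simulated = observed + rng.normal(0.0, 0.3, size=n_days)
score = nse(simulated[calib_idx], observed[calib_idx])
```

In a real calibration loop, `score` would be maximized over the model's free parameters while the evaluation indices stay fixed.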
[1] In this paper, we analyze how our evaluation of the capacity of a rainfall-runoff model to represent low or high flows depends on the objective function used during the calibration process. We present a method to combine models to produce a more satisfactory streamflow simulation, on the basis of two different parameterizations of the same model. Where we previously had to choose between a simulation that was more efficient for either high flows or low flows (but inevitably less efficient in the other range), we show that a balanced simulation can be obtained by using a seasonal index to weight the two simulations, providing good efficiency in both low and high flows.
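The combination scheme amounts to a convex, time-varying blend of the two parameterizations. A minimal sketch follows; the sinusoidal weighting index and the stand-in simulations are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical daily simulations from the same model calibrated with two
# objective functions: one favouring high flows, one favouring low flows.
days = np.arange(365)
q_high = 5.0 + 3.0 * np.cos(2 * np.pi * days / 365)   # high-flow-oriented run
q_low = 4.0 + 2.5 * np.cos(2 * np.pi * days / 365)    # low-flow-oriented run

# Seasonal weighting index w(t) in [0, 1]: near 1 in the wet season (trust
# the high-flow parameterization), near 0 in the dry season. The cosine
# form is an assumed illustration of a seasonal index.
w = 0.5 * (1.0 + np.cos(2 * np.pi * days / 365))

# Combined simulation: a convex combination of the two runs, so each day's
# value lies between the two parent simulations.
q_combined = w * q_high + (1.0 - w) * q_low
```

Because the weights sum to one at every time step, the blended series never leaves the envelope defined by the two parent simulations.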
Skillful and timely streamflow forecasts are critically important to water managers and emergency protection services. To provide these forecasts, hydrologists must predict the behavior of complex coupled human-natural systems using incomplete and uncertain information and imperfect models. Moreover, operational predictions often integrate anecdotal information and unmodeled factors. Forecasting agencies face four key challenges: 1) making the most of available data, 2) making accurate predictions using models, 3) turning hydrometeorological forecasts into effective warnings, and 4) administering an operational service. Each challenge presents a variety of research opportunities, including the development of automated quality-control algorithms for the myriad of data used in operational streamflow forecasts, data assimilation and ensemble forecasting techniques that allow for forecaster input, methods for using human-generated weather forecasts quantitatively, and quantification of human interference in the hydrologic cycle. Furthermore, much can be done to improve the communication of probabilistic forecasts and to design a forecasting paradigm that effectively combines increasingly sophisticated forecasting technology with subjective forecaster expertise. These areas are described in detail to share a real-world perspective and focus for ongoing research endeavors.
Testing hydrological models under changing conditions is essential to evaluate their ability to cope with changing catchments and their suitability for impact studies. With this perspective in mind, a workshop dedicated to this issue was held at the 2013 General Assembly of the International Association of Hydrological Sciences (IAHS) in Göteborg, Sweden, in July 2013, during which the results of a common testing experiment were presented. Prior to the workshop, the participants had been invited to test their own models on a common set of basins showing varying conditions, specifically set up for the workshop. All these basins experienced changes, either in physical characteristics (e.g. changes in land cover) or in climate conditions (e.g. gradual temperature increase). This article presents the motivations and organization of this experiment, that is, the testing (calibration and evaluation) protocol and the common framework of statistical procedures and graphical tools used to assess model performance. The basin datasets are also briefly introduced (a detailed description is provided in the associated Supplementary material).
There is a common agreement in the scientific community that communicating uncertain hydrometeorological forecasts to water managers, civil protection authorities and other stakeholders is far from being a resolved issue. This paper focuses on the communication of uncertain hydrological forecasts to decision-makers such as operational hydrologists and water managers in charge of flood warning and scenario-based reservoir operation. Results from case studies conducted together with flood forecasting experts in Europe and operational forecasters from the hydroelectric sector in France are presented. They illustrate some key issues on dealing with probabilistic hydro-meteorological forecasts and communicating uncertainty in operational flood forecasting.
Abstract. As all hydrological models are intrinsically limited hypotheses on the behaviour of catchments, models, which attempt to represent real-world behaviour, will always remain imperfect. To make progress on the long road towards improved models, we need demanding tests, i.e. true crash tests. Efficient testing requires large and varied data sets to develop and assess hydrological models, to ensure their generality, to diagnose their failures, and ultimately, to help improve them.