Abstract. Three functionally different types of artificial neural network (ANN) models are calibrated using a relatively short record of groundwater levels and related hydrometeorological data to simulate water table fluctuations in the Gondo aquifer, Burkina Faso. An input delay neural network (IDNN) with a static memory structure and a globally recurrent neural network (RNN) with inherent dynamical memory are proposed for modeling monthly water table fluctuations. The simulation performance of the IDNN and RNN models is compared with results obtained from two variants of radial basis function (RBF) networks, namely a generalized RBF model (GRBF) and a probabilistic neural network (PNN). Overall, the simulation results suggest that the RNN is the most efficient of the ANN models tested for a calibration period as short as 7 years. The results of the IDNN and the PNN are almost equivalent despite their fundamentally different learning procedures. The GRBF performs very poorly compared with the other models. Furthermore, the study shows that the RNN may offer a robust framework for improving water supply planning in semiarid areas where aquifer information is not available. This study has significant implications for groundwater management in areas with inadequate groundwater monitoring networks.
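The IDNN's static memory amounts to a tapped delay line: the last few months of forcing are stacked into one fixed-length input vector for an otherwise standard feedforward network. The sketch below illustrates that idea only; the synthetic rainfall and water-level series, the window length, and the use of scikit-learn's MLPRegressor as the feedforward core are all assumptions for illustration, not the configuration used in the study.

```python
# Sketch of an input-delay (tapped delay line) network: static memory is
# obtained by feeding the last `lags` months of forcing as one input vector.
# Synthetic data and hyperparameters are illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_months = 84                                  # ~7 years of monthly records
rain = rng.gamma(2.0, 30.0, n_months)          # synthetic rainfall (mm)
level = np.cumsum(0.01 * rain - 0.3) + 250.0   # synthetic water table (m)

lags = 6                                       # length of the delay line
X = np.array([rain[t - lags:t] for t in range(lags, n_months)])
y = level[lags:n_months]

split = int(0.8 * len(X))                      # calibration/validation split
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])
print("validation R^2:", model.score(X[split:], y[split:]))
```

An RNN differs from this in that the memory is carried in recurrent hidden-state feedback rather than in the fixed input window, which is what makes its memory "dynamical" in the abstract's sense.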
This paper traces two decades of neural network rainfall-runoff and streamflow modelling, collectively termed ‘river forecasting’. The field is now firmly established and the research community involved has much to offer hydrological science. First, however, it will be necessary to converge on more objective and consistent protocols for: selecting and treating inputs prior to model development; extracting physically meaningful insights from each proposed solution; and improving transparency in the benchmarking and reporting of experimental case studies. It is also clear that neural network river forecasting solutions will have limited appeal for operational purposes until confidence intervals can be attached to forecasts. Modular design, ensemble experiments, and hybridization with conventional hydrological models are yielding new tools for decision-making. The full potential for modelling complex hydrological systems, and for characterizing uncertainty, has yet to be realized. Further gains could also emerge from the provision of an agreed set of benchmark data sets and associated development of superior diagnostics for more rigorous intermodel evaluation. To achieve these goals will require a paradigm shift, such that the mass of individual isolated activities, focused on incremental technical refinement, is replaced by a more coordinated, problem-solving international research body.
When evaluating the reliability of an ensemble prediction system, it is common to compare the root-mean-square error of the ensemble mean to the average ensemble spread. While this is indeed good practice, two different and inconsistent methodologies have been used over the last few years in the meteorology and hydrology literature to compute the average ensemble spread: in some cases the square root of the average ensemble variance is used, and in other cases the average of the ensemble standard deviations is computed instead. The second option is incorrect. To avoid the perpetuation of practices that are not supported by probability theory, the correct equation for computing the average ensemble spread is derived and the impact of using the wrong equation is illustrated.
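For concreteness, the two options can be written side by side. With $\sigma_t$ denoting the ensemble standard deviation at verification time $t$ over $T$ times (notation ours, not necessarily the paper's), the square root of the average variance dominates the average of the standard deviations by Jensen's inequality, so the latter systematically underestimates the spread:

```latex
% Average ensemble spread over T verification times with per-time
% ensemble standard deviation \sigma_t: the spread consistent with the
% RMSE of the ensemble mean (left) versus the biased alternative (right).
\[
\underbrace{\sqrt{\frac{1}{T}\sum_{t=1}^{T}\sigma_t^{2}}}_{\text{correct}}
\;\ge\;
\underbrace{\frac{1}{T}\sum_{t=1}^{T}\sigma_t}_{\text{incorrect}},
\]
% with equality only when the spread is constant in time.
```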
The issues involved in downscaling the outputs of a global climate model (GCM) to a scale appropriate for hydrological impact studies are investigated using a temporal neural network approach. The time-lagged feed-forward neural network (TLFN) is proposed for downscaling daily total precipitation and daily maximum and minimum temperature series for the Serpent River watershed in northern Quebec (Canada). The downscaling models are developed and validated using large-scale predictor variables derived from the National Centers for Environmental Prediction–National Center for Atmospheric Research (NCEP–NCAR) reanalysis dataset. Atmospheric predictors such as specific humidity, wind velocity, and geopotential height are identified as the most relevant inputs to the downscaling models. The performance of the TLFN downscaling model is also compared with that of a statistical downscaling model (SDSM). The downscaling results suggest that the TLFN is an efficient method for downscaling both daily precipitation and temperature series. The best downscaling models were then applied to the outputs of the Canadian Global Climate Model (CGCM1), forced with the Intergovernmental Panel on Climate Change (IPCC) IS92a scenario. Changes in average precipitation between the current and future scenarios predicted by the TLFN are generally smaller than those predicted by the SDSM. Furthermore, applying the downscaled data in a hydrologic impact analysis of the Serpent River yielded an overall increasing trend in mean annual flow as well as an earlier spring peak flow. The results also underscore the importance of identifying appropriate downscaling tools for impact studies, by showing how a future climate scenario downscaled with different methods can produce significantly different hydrologic impact simulations for the same watershed.
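The method-dependence of the change signal can be checked directly once downscaled series are in hand: compute the scenario means per method and compare the deltas. The sketch below uses synthetic placeholder arrays standing in for downscaled daily precipitation; the magnitudes and the two-method setup are assumptions for illustration only.

```python
# Compare climate-change signals from two downscaling methods.
# Arrays are synthetic placeholders standing in for downscaled daily
# precipitation under current and future (e.g. IS92a-forced) scenarios.
import numpy as np

rng = np.random.default_rng(1)
tlfn_current = rng.gamma(0.8, 4.0, 365 * 30)   # mm/day, method A, current
tlfn_future  = rng.gamma(0.8, 4.3, 365 * 30)   # mm/day, method A, future
sdsm_current = rng.gamma(0.8, 4.0, 365 * 30)   # mm/day, method B, current
sdsm_future  = rng.gamma(0.8, 4.8, 365 * 30)   # mm/day, method B, future

for name, cur, fut in [("TLFN", tlfn_current, tlfn_future),
                       ("SDSM", sdsm_current, sdsm_future)]:
    delta = fut.mean() - cur.mean()
    print(f"{name}: change in mean precipitation = {delta:+.2f} mm/day "
          f"({100 * delta / cur.mean():+.1f}%)")
```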
Abstract. This paper investigates the temporal transposability of hydrological models under contrasted climate conditions and evaluates the added value of using an ensemble of model structures for flow simulation. This is achieved by applying the Differential Split Sample Test procedure to twenty lumped conceptual models on one catchment in the Province of Québec (Canada) and another in the State of Bavaria (Germany). First, a calibration/validation procedure was applied on four non-continuous historical periods with contrasted climate conditions. Model efficiency was then quantified individually (for each model) and collectively (for the model ensemble). The individual analysis evaluated model performance and robustness. The ensemble investigation, based on the average of the simulated discharges, considered both the full twenty-member ensemble and all possible model subsets. Results showed that relying on a single model can be hazardous when the model is applied under contrasted conditions. Overall, some models proved a good compromise between performance and robustness, though generally not as good a one as the twenty-model ensemble. Some model subsets offered further performance gains over the twenty-model ensemble, but at the expense of spatial transposability (i.e., the need for site-specific analysis).
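The ensemble investigation reduces to averaging the members' simulated discharges and scoring that average like any single model. A minimal sketch follows, assuming a matrix of simulated discharges and using Nash-Sutcliffe efficiency as the performance score; the data and the choice of score are illustrative assumptions, not the paper's exact protocol.

```python
# Score each model and the ensemble mean of simulated discharges.
# `sims` (models x time) and `obs` are synthetic placeholders.
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the mean."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(2)
obs = 50.0 + 20.0 * np.sin(np.linspace(0.0, 8.0 * np.pi, 1460))  # daily flow
sims = obs + rng.normal(0.0, 12.0, (20, obs.size))   # 20 imperfect members

scores = [nse(s, obs) for s in sims]                 # individual models
ensemble = sims.mean(axis=0)                         # multi-model average
print(f"best single model NSE: {max(scores):.3f}")
print(f"ensemble mean NSE:     {nse(ensemble, obs):.3f}")
```

With members whose errors are not perfectly correlated, the averaged discharge typically scores higher than any individual member, which is the effect the abstract reports for the twenty-model ensemble.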