Despite progress in representing different processes, hydrological models remain uncertain. Their uncertainty stems from input and calibration data, model structure, and parameters. To characterize these sources, their causes and interactions, together with different uncertainty analysis (UA) methods, are reviewed. The commonly used UA methods fall into six broad classes: (i) Monte Carlo analysis, (ii) Bayesian statistics, (iii) multi-objective analysis, (iv) least-squares-based inverse modeling, (v) response-surface-based techniques, and (vi) multi-modeling analysis. For each source of uncertainty, the status quo and applications of these methods are critiqued in gauged catchments, where UA is common, and in ungauged catchments, where both UA and reviews of it are lacking. Compared to parameter uncertainty, UA of structural uncertainty is limited, while input and calibration data uncertainties are mostly unaccounted for. Further research is needed to improve the computational efficiency of UA, disentangle and propagate the different sources of uncertainty, extend UA to environmental change and coupled human–natural hydrologic systems, and make UA easier for practitioners to apply.
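As an illustration of the first UA class above, the sketch below runs a GLUE-style Monte Carlo screening of a single parameter in a toy linear-reservoir model. The model, the behavioral threshold (NSE > 0.7), and all constants are hypothetical choices for illustration, not drawn from the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(42)

def linear_reservoir(k, precip):
    """Toy one-parameter rainfall-runoff model: a linear reservoir
    with recession coefficient k (hypothetical, for illustration)."""
    storage, flows = 0.0, []
    for p in precip:
        storage += p
        q = k * storage
        storage -= q
        flows.append(q)
    return np.array(flows)

# Synthetic "observed" discharge generated with a known parameter
precip = rng.gamma(shape=2.0, scale=3.0, size=100)
observed = linear_reservoir(0.3, precip) + rng.normal(0, 0.2, 100)

# Monte Carlo sampling of the parameter space (GLUE-style screening)
samples = rng.uniform(0.05, 0.95, size=2000)
nse = np.array([
    1 - np.sum((linear_reservoir(k, precip) - observed) ** 2)
        / np.sum((observed - observed.mean()) ** 2)
    for k in samples
])

# Retain "behavioral" parameter sets and report the predictive band
behavioral = samples[nse > 0.7]
ensemble = np.array([linear_reservoir(k, precip) for k in behavioral])
lower, upper = np.percentile(ensemble, [5, 95], axis=0)
print(f"{len(behavioral)} behavioral sets; "
      f"mean 90% band width = {np.mean(upper - lower):.2f}")
```

The 5th–95th percentile band of the behavioral ensemble is one common way to express parameter uncertainty as a predictive interval.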
Streamflow forecasts often perform poorly because of improper representation of hydrologic response timescales in underlying models. Here, we use transfer entropy (TE), which measures information flow between variables, to identify dominant drivers of discharge and their timescales using sensor data from the Dry Creek Experimental Watershed, ID, USA. Consistent with previous mechanistic studies, TE revealed that snowpack accumulation and partitioning into melt, recharge, and evaporative loss dominated discharge patterns and that snow‐sourced baseflow reduced the greatest amount of uncertainty in discharge. We hypothesized that machine learning models (MLMs) specified in accordance with the dominant lag timescales, identified via TE, would outperform timescale‐agnostic models. However, while lagged‐variable random forest regressions captured the dominant process—seasonal snowmelt—they ultimately did not perform as well as the unlagged models, provided those models were specified with input data aggregated over a range of timescales. Unlagged models, not constrained by timescales of the dominant processes, more effectively represented variable interactions (e.g., rain‐on‐snow events) playing a critical role in translating precipitation into streamflow over long, intermediate, and short timescales. Meanwhile, long short‐term memory (LSTM) models were effective in internally identifying the key lag and aggregation scales for predicting discharge. Parsimonious specification of LSTM models, using only daily unlagged precipitation and temperature data, produced the highest performing predictions. Our findings suggest that TE can identify dominant streamflow controls and the relative importance of different mechanisms of streamflow generation, useful for establishing process baselines and fingerprinting watersheds. However, restricting MLMs based on dominant timescales undercuts their skill at learning these timescales internally.
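A minimal plug-in estimator of transfer entropy, the measure used above, can be sketched as follows. The binning scheme, lag, and the synthetic melt–discharge series are illustrative assumptions, not the study's actual estimator or data.

```python
import numpy as np

def transfer_entropy(source, target, lag=1, bins=8):
    """Plug-in (binned) transfer entropy from `source` to `target` in bits:
    TE = H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X_{t-lag}).
    No bias correction; for illustration only."""
    x = np.digitize(source, np.histogram_bin_edges(source, bins))
    y = np.digitize(target, np.histogram_bin_edges(target, bins))
    yt, yp, xp = y[lag:], y[lag - 1:-1], x[:-lag]

    def H(*arrs):
        # Joint Shannon entropy of the discretized variables
        _, counts = np.unique(np.stack(arrs), axis=1, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    return H(yt, yp) - H(yp) - H(yt, yp, xp) + H(yp, xp)

# Synthetic example: discharge driven by the previous day's melt
rng = np.random.default_rng(0)
melt = rng.normal(size=2000)
discharge = 0.8 * np.roll(melt, 1) + 0.2 * rng.normal(size=2000)
te_fwd = transfer_entropy(melt, discharge)  # coupled direction
te_rev = transfer_entropy(discharge, melt)  # no reverse coupling
print(f"TE melt->Q = {te_fwd:.2f} bits, TE Q->melt = {te_rev:.2f} bits")
```

The asymmetry between the two directions is what lets TE identify drivers: information flows from melt to discharge but not back.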
In most water resources applications, any particular model structure might be inadequate to capture the dynamic multiscale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can displace errors from structure to parameters, which in turn leads to over‐correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses were used to identify the presence of multiple dominant processes and the adequacy of a single model, as well as to develop the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach outperforms the single model for the Guadalupe catchment, where the diagnostic measures reveal multiple dominant processes. In contrast, the diagnostics and aggregated performance measures show that the French Broad catchment has a homogeneous response, so a single model is adequate to capture it.
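The gated-expert idea can be sketched as a two-leaf mixture: two hypothetical expert structures (a flashy quickflow response and a slowly draining store) are blended by a logistic gate on an indicator variable. All functional forms and constants below are illustrative, not the HBV-based structures used in the study.

```python
import numpy as np

def expert_quick(precip):
    """Hypothetical 'quickflow' expert: near-immediate runoff response."""
    return 0.9 * precip

def expert_slow(precip):
    """Hypothetical 'baseflow' expert: a slowly draining store."""
    store, out = 0.0, []
    for p in precip:
        store = 0.95 * store + 0.02 * p
        out.append(store)
    return np.array(out)

def gate(indicator, threshold=0.5, sharpness=10.0):
    """Soft logistic gate on an indicator variable (e.g., antecedent
    wetness); functional form and constants are illustrative."""
    return 1.0 / (1.0 + np.exp(-sharpness * (indicator - threshold)))

def hme_predict(precip, indicator):
    """Gated combination of the two experts, mimicking a two-leaf HME."""
    w = gate(indicator)
    return w * expert_quick(precip) + (1.0 - w) * expert_slow(precip)

precip = np.full(60, 5.0)
wet = hme_predict(precip, np.ones(60))   # wet state -> quickflow expert
dry = hme_predict(precip, np.zeros(60))  # dry state -> baseflow expert
```

Because the gate is soft, the mixture transitions smoothly between dominant processes rather than switching abruptly, which is the property that lets errors stay attached to the appropriate structure instead of leaking into parameters.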
How precipitation (P) is translated into streamflow (Q), and over what timescales (i.e., “memory”), is difficult to predict without calibrating site‐specific models or using geochemical approaches, posing barriers to prediction in ungauged basins and to the advancement of general theories. Here, we used a data‐driven approach to identify regional patterns and exogenous controls on P–Q interactions. We applied an information flow analysis, which quantifies uncertainty reduction, to daily time series of P and Q from 671 watersheds across the conterminous United States. We first demonstrated, using a watershed model, that information transfer from P to Q primarily reflects the quickflow component of water budgets. Readily quantifiable information flows show a functional relationship with model parameters, suggesting utility for model calibration. Second, applied to real watersheds, P–Q information flows exhibit seasonally varying behavior within regions in a manner consistent with dominant runoff generation mechanisms. However, the timing and magnitude of information flows also reflect considerable subregional heterogeneity, likely attributable to differences in watershed size, baseflow contributions, and variation in areal coverage of preferential flow paths. A regression analysis showed that a combination of climate and watershed characteristics is predictive of P–Q information flows. Though information flows cannot, in most cases, uniquely determine dominant runoff mechanisms, they provide a means to quantify the heterogeneous outcomes of those mechanisms within regions, thereby serving as a benchmarking tool for models developed at the regional scale. Finally, information flows characterize regionally specific ways in which catchment connectivity changes from the wet to the dry season.
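A simpler cousin of the information flow analysis described above is lagged mutual information between P and Q, which can recover a dominant response timescale from data alone. The binned plug-in estimator and the synthetic series below are illustrative assumptions, not the study's method or data.

```python
import numpy as np

def lagged_mutual_info(p, q, lag, bins=8):
    """Binned mutual information I(P_{t-lag}; Q_t) in bits.
    Plug-in estimator, no bias correction; for illustration only."""
    joint, _, _ = np.histogram2d(p[:-lag], q[lag:], bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of lagged P
    py = pxy.sum(axis=0, keepdims=True)  # marginal of Q
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

# Synthetic watershed: streamflow echoes precipitation three days later
rng = np.random.default_rng(1)
precip = rng.gamma(2.0, 3.0, size=3000)
flow = np.roll(precip, 3) + rng.normal(0.0, 0.5, size=3000)
mi = {lag: lagged_mutual_info(precip, flow, lag) for lag in range(1, 11)}
best_lag = max(mi, key=mi.get)
print(f"dominant P->Q timescale: {best_lag} days")
```

Scanning the lag at which uncertainty reduction peaks is one way to fingerprint watershed memory without a calibrated model, in the spirit of the regional benchmarking proposed above.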