Abstract. Precipitation elasticity of streamflow, εP, provides a measure of the sensitivity of streamflow to changes in rainfall. Watershed model-based estimates of εP are shown to be highly sensitive to model structure and calibration error. A Monte Carlo experiment compares a nonparametric estimator of εP with various watershed model-based approaches. The nonparametric estimator is found to have low bias and is as robust as or more robust than the alternative model-based approaches. The nonparametric estimator is used to construct a map of εP for the United States. Comparisons with 10 detailed climate change studies reveal that the contour map of εP introduced here provides a validation metric for past and future climate change investigations in the United States. Further investigations reveal that εP tends to be low for basins with significant snow accumulation and for basins whose moisture and energy inputs are seasonally in phase with one another. The Budyko hypothesis can only explain variations in εP for very humid basins.

Introduction

Hundreds, possibly thousands, of studies are now available which document the sensitivity of streamflow to climate for river basins all over the world. Most hydrologic climate sensitivity studies involve calibrating a conceptual deterministic watershed model and then varying the model's atmospheric inputs to observe the resulting changes in streamflow. Schaake [1990], Nash and Gleick [1991], and Jeton et al. [1996] have performed this type of study. Another approach is to analytically derive the sensitivity of streamflow in terms of model parameters [Schaake, 1990]. A third approach is to fit multivariate regional hydrologic models using climate and streamflow data for many basins in a region [Vogel et al., 1999]. A fourth approach is to empirically estimate changes in streamflow which resulted from historical changes in climate [Risbey and Entekhabi, 1996].
A fifth approach is to use multivariate statistical methods to estimate the relationship between climate and streamflow at a single site [Revelle and Waggoner, 1983]. Of all these approaches, the use of conceptual deterministic watershed models is by far the most common because such models can represent the complex spatial and temporal variations in evapotranspiration, soil moisture, groundwater, and streamflow. Leavesley [1994] provides a more detailed discussion of the advantages of conceptual watershed models for modeling climate change impacts.

In spite of the advantages of using conceptual watershed models in climate change studies, their validation remains a fundamental challenge. Climate sensitivity analyses performed on the same basin using different conceptual watershed models can lead to significantly different results. Worse yet, climate sensitivity analyses performed on the same basin using identical conceptual watershed models can lead to remarkably different results. For example, Nash and Gleick [1991] and Schaake [1990] used the National Weather Service River Forecasting System (NWSRFS) to perf...
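The nonparametric elasticity estimator discussed in the abstract can be sketched along the following lines, assuming it takes the median of annual departure ratios scaled by the mean precipitation-to-streamflow ratio; the function name and sample data are hypothetical, and the published estimator may differ in detail:

```python
import numpy as np

def precip_elasticity(P, Q):
    """Nonparametric precipitation elasticity of streamflow (sketch).

    Takes the median over years of (Qt - Qbar)/(Pt - Pbar) scaled by
    Pbar/Qbar, where P and Q are annual precipitation and streamflow.
    Assumes no annual precipitation value equals the sample mean exactly.
    """
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    Pbar, Qbar = P.mean(), Q.mean()
    # Year-by-year sensitivities, rescaled to a dimensionless elasticity.
    return float(np.median((Q - Qbar) / (P - Pbar) * (Pbar / Qbar)))
```

If streamflow is exactly proportional to precipitation, this estimator returns an elasticity of 1: a 1% change in precipitation produces a 1% change in streamflow.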
Many investigators have sought to develop regional multivariate regression models which relate low-flow statistics to watershed characteristics. Normally, a multiplicative model structure is imposed and multivariate statistical procedures are employed to select suitable watershed characteristics and to estimate model parameters. Since such procedures have met with only limited success, we take a different approach. A simple conceptual stream-aquifer model is extended to a watershed scale and evaluated for its ability to approximate the low-flow behavior of 23 unregulated catchments in Massachusetts. The conceptual watershed model is then adapted to estimate low-flow statistics using multivariate regional regression procedures. Our results indicate that in central western Massachusetts, low-flow statistics are highly correlated with the product of watershed area, average basin slope, and base flow recession constant, with the base flow recession constant acting as a surrogate for both basin hydraulic conductivity and drainable soil porosity.

INTRODUCTION

Estimates of low-flow statistics are needed in water quality management and water supply planning and for the determination of minimum downstream release requirements from hydropower, irrigation, water supply, cooling plant, and other facilities. Water quality management applications of low-flow statistics include the determination of wasteload allocations, discharge permits, and the siting of treatment plants and sanitary landfills. Many investigations have attempted to develop regional hydrologic models for the purpose of estimating low-flow statistics at ungaged sites from readily available geomorphic, geologic, climatic, and topographic parameters. For example, Thomas and Cervione [1970], Tasker [1972], Parker [1977], Dingman [1978], Male and Ogawa [1982], Cervione et al. [1982], Downer [1983], Fennessey and Vogel [1990], and Vogel and Kroll [1990] have developed regional low-flow models in the New England region.
Usually such models take the form

Q(d,T) = b0 X1^b1 X2^b2 X3^b3 · · · (1)

where Q(d,T) is the d-day, T-year low-flow statistic obtained from gaged flow records, the Xi are measurable drainage basin characteristics, and the bi are parameter estimates obtained from multivariate regression procedures. Such models are generally developed using long-term streamflow data and associated basin characteristics from many sites. Regional statistical models of this type, frequently referred to as "state equations," are used widely in the United States for estimating flood flow statistics at ungaged sites. Newton and Herrin [1983] recommend such statistically based regional regression equations over the use of deterministic watershed models for estimating flood flows at ungaged sites. Their recommendations are based upon a large nationwide comparison of alternative methods for estimating flood flows at ungaged sites developed by several federal agencies. Unfort...

Copyright 1992 by the American Geophysical Union. Paper number 92WR01007. 0043-1397/92/92WR-01007$05.00
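The multiplicative model in (1) is linear in the logarithms, so its parameters can be estimated by ordinary least squares on log-transformed data. A minimal sketch (function name and data are illustrative; regional studies typically also report standard errors and diagnostics):

```python
import numpy as np

def fit_loglinear(Q, X):
    """Estimate b0 and exponents b1..bk for Q = b0 * prod(Xi^bi).

    Fits ln Q = ln b0 + sum(bi * ln Xi) by ordinary least squares.
    Q: low-flow statistics at n sites; X: n-by-k matrix of positive
    basin characteristics (area, slope, recession constant, ...).
    """
    Q = np.asarray(Q, dtype=float)
    X = np.asarray(X, dtype=float)
    # Design matrix: intercept column plus log-transformed characteristics.
    A = np.column_stack([np.ones(len(Q)), np.log(X)])
    coef, *_ = np.linalg.lstsq(A, np.log(Q), rcond=None)
    return float(np.exp(coef[0])), coef[1:]  # b0, array([b1, ..., bk])
```

With noise-free synthetic data generated from known parameters, the fit recovers b0 and the exponents exactly, which is a quick sanity check before applying the procedure to real gaged records.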
Vogel, Richard M., Chad Yaindl, and Meghan Walter, 2011. Nonstationarity: Flood Magnification and Recurrence Reduction Factors in the United States. Journal of the American Water Resources Association (JAWRA) 47(3):464‐474. DOI: 10.1111/j.1752‐1688.2011.00541.x

Abstract: It may no longer be reasonable to model streamflow as a stationary process, yet nearly all existing water resource planning methods assume that historical streamflows will remain unchanged in the future. In the few instances when trends in extreme events have been considered, most recent work has focused on the influence of climate change alone. This study takes a different approach by exploring trends in floods in watersheds which are subject to a very broad range of anthropogenic influences, not limited to climate change. A simple statistical model is developed which can mimic both observed flood trends and the frequency of floods in a nonstationary world. This model is used to explore a range of flood planning issues in a nonstationary world. A decadal flood magnification factor is defined as the ratio of the T‐year flood in a decade to the T‐year flood today. Using historical flood data across the United States we obtain flood magnification factors in excess of 2‐5 for many regions of the United States, particularly those regions with higher population densities. Similarly, we compute recurrence reduction factors which indicate that what is now considered the 100‐year flood may become much more common in many watersheds. Nonstationarity in floods can result from a variety of anthropogenic processes including changes in land use, climate, and water use, with likely interactions among those processes making it very difficult to attribute trends to a particular cause.
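The two factors defined in the abstract can be sketched under an assumed log-linear trend in flood magnitudes with lognormally distributed annual floods; the paper's actual statistical model may differ, and all parameter values below are hypothetical:

```python
import math
from statistics import NormalDist

def magnification_factor(beta, years_ahead=10):
    """Decadal flood magnification factor under an assumed trend model
    ln Q_T(t) = ln Q_T(0) + beta * t: the ratio of the T-year flood
    `years_ahead` years from now to the T-year flood today."""
    return math.exp(beta * years_ahead)

def recurrence_reduction(T, beta, sigma, years_ahead=10):
    """New recurrence interval of today's T-year flood after the mean of
    ln Q shifts upward by beta * years_ahead, with sigma the standard
    deviation of ln Q (lognormal floods assumed)."""
    z = NormalDist().inv_cdf(1 - 1 / T)                  # today's flood quantile
    p_exceed = 1 - NormalDist().cdf(z - beta * years_ahead / sigma)
    return 1 / p_exceed                                  # years
```

With no trend (beta = 0) the magnification factor is 1 and the 100-year flood remains the 100-year flood; any positive trend shortens the recurrence interval, which is the sense in which today's 100-year flood "may become much more common."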
Recent research documents that the widely accepted generalized likelihood uncertainty estimation (GLUE) method for describing forecasting precision and the impact of parameter uncertainty in rainfall/runoff watershed models fails to achieve the intended purpose when used with an informal likelihood measure. In particular, GLUE generally fails to produce intervals that capture the precision of estimated parameters, and the difference between predictions and future observations. This paper illustrates these problems with GLUE using a simple linear rainfall/runoff model so that model calibration is a linear regression problem for which exact expressions for prediction precision and parameter uncertainty are well known and understood. The simple regression example enables us to clearly and simply illustrate GLUE deficiencies. Beven and others have suggested that the choice of the likelihood measure used in a GLUE computation is subjective and may be selected to reflect the goals of the modeler. If an arbitrary likelihood is adopted that does not reasonably reflect the sampling distribution of the model errors, then GLUE generates arbitrary results without statistical validity that should not be used in scientific work. The traditional subjective likelihood measures that have been used with GLUE also fail to reflect the nonnormality, heteroscedasticity, and serial correlation among the residual errors generally found in real problems, and hence are poor metrics for even simple sensitivity analyses and model calibration. Most previous applications of GLUE only produce uncertainty intervals for the average model prediction, which by construction should not be expected to include future observations with the prescribed probability. We show how the GLUE methodology, when properly implemented with a statistically valid likelihood function, can provide prediction intervals for future observations which will agree with widely accepted and statistically valid analyses.
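A toy illustration of the GLUE procedure with a statistically valid (Gaussian) likelihood, using the kind of simple linear rainfall/runoff model the abstract describes; all names and numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear rainfall/runoff model: Q = a * P + Gaussian error.
a_true, sigma = 0.5, 2.0
P = np.linspace(10, 100, 30)
Q_obs = a_true * P + rng.normal(0, sigma, P.size)

# GLUE step 1: Monte Carlo sampling of the runoff coefficient a.
a_samples = rng.uniform(0.1, 1.0, 5000)
sse = np.array([np.sum((Q_obs - a * P) ** 2) for a in a_samples])

# GLUE step 2: formal Gaussian log-likelihood (known sigma), not an
# informal measure such as a rescaled Nash-Sutcliffe efficiency.
loglik = -sse / (2 * sigma ** 2)
w = np.exp(loglik - loglik.max())
w /= w.sum()

# GLUE step 3: weighted 90% interval for the *mean* prediction at P = 50.
pred = a_samples * 50.0
order = np.argsort(pred)
cdf = np.cumsum(w[order])
lo = pred[order][np.searchsorted(cdf, 0.05)]
hi = pred[order][np.searchsorted(cdf, 0.95)]
```

Note that (lo, hi) bounds only the average model prediction; this is exactly the abstract's point that such intervals should not be expected to contain future observations. A prediction interval for a future observation would also need to add draws of the error term with standard deviation sigma.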
It is well known that product moment ratio estimators of the coefficient of variation Cv, skewness γ, and kurtosis κ exhibit substantial bias and variance for the small (n ≤ 100) samples normally encountered in hydrologic applications. Consequently, L moment ratio estimators, termed L coefficient of variation τ2, L skewness τ3, and L kurtosis τ4, are now advocated because they are nearly unbiased for all underlying distributions. The advantages of L moment ratio estimators over product moment ratio estimators are not limited to small samples. Monte Carlo experiments reveal that product moment estimators of Cv and γ are also remarkably biased for extremely large samples (n ≥ 1000) from highly skewed distributions. A case study using large samples (n ≥ 5000) of average daily streamflow in Massachusetts reveals that conventional moment diagrams based on estimates of product moments Cv, γ, and κ reveal almost no information about the distributional properties of daily streamflow, whereas L moment diagrams based on estimators of τ2, τ3, and τ4 enabled us to discriminate among alternate distributional hypotheses.
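Sample L-moment ratios can be computed from probability weighted moments of the ordered sample; a minimal sketch using the standard unbiased PWM estimators (function name assumed):

```python
import numpy as np

def lmoment_ratios(x):
    """Sample L-moment ratios (tau2, tau3, tau4) = (L-CV, L-skewness,
    L-kurtosis), computed via unbiased probability weighted moments b_r
    of the ascending-ordered sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) * x) / (n * (n - 1))
    b2 = np.sum((j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((j - 1) * (j - 2) * (j - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    # First four L-moments as linear combinations of the PWMs.
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l2 / l1, l3 / l2, l4 / l2
```

Because the L-moment weights for l3 are antisymmetric in the order statistics, an exactly symmetric sample yields τ3 = 0, which provides a quick correctness check.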