Annual peak discharge records from 50 stations in the continental United States with at least 100 years of record are used to investigate stationarity of flood peaks during the 20th century. We examine temporal trends in flood peaks and abrupt changes in the mean and/or variance of flood peak distributions. Change point analysis for detecting abrupt changes in flood distributions is performed using the nonparametric Pettitt test. Two nonparametric (Mann-Kendall and Spearman) tests and one parametric (Pearson) test are used to detect the presence of temporal trends. Generalized additive models for location, scale, and shape (GAMLSS) are also used to parametrically model the annual peak data, exploiting their flexibility to account for abrupt changes and temporal trends in the parameters of the distribution functions. Additionally, the presence of long-term persistence is investigated through estimation of the Hurst exponent, and an alternative interpretation of the results in terms of long-term persistence is provided. Many of the drainage basins represented in this study have been affected by regulation through systems of reservoirs, and all of the drainage basins have experienced significant land use changes during the 20th century. Despite the profound changes that have occurred to drainage basins throughout the continental United States and the recognition that elements of the hydrologic cycle are being altered by human-induced climate change, it is easier to proclaim the demise of stationarity of flood peaks than to prove it through analyses of annual flood peak data.
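The three trend tests mentioned above can be illustrated with a short sketch (not the study's code; the data here are synthetic). The Mann-Kendall trend test is equivalent to computing Kendall's tau between the observations and their time index, so `scipy.stats.kendalltau` can stand in for it:

```python
# Illustrative sketch: three trend tests applied to a synthetic annual
# peak-discharge series. Mann-Kendall is equivalent to Kendall's tau
# computed against the time index.
import numpy as np
from scipy.stats import kendalltau, spearmanr, pearsonr

rng = np.random.default_rng(42)
years = np.arange(1900, 2000)                                 # 100 years of record
peaks = rng.gumbel(loc=500.0, scale=120.0, size=years.size)   # synthetic peaks (m^3/s)

tau, p_mk = kendalltau(years, peaks)   # nonparametric (Mann-Kendall)
rho, p_sp = spearmanr(years, peaks)    # nonparametric (Spearman)
r, p_pe = pearsonr(years, peaks)       # parametric (Pearson)

for name, p in [("Mann-Kendall", p_mk), ("Spearman", p_sp), ("Pearson", p_pe)]:
    print(f"{name}: p = {p:.3f} -> {'trend' if p < 0.05 else 'no trend'} at 5% level")
```

Because this synthetic series is stationary by construction, the three tests should typically fail to reject the no-trend hypothesis; on real annual peak records the tests are applied station by station, as in the study.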
Keywords: Nonstationary flood frequency analysis; Nonstationary return period; Risk of failure; Nonstationary confidence intervals; Generalized linear models; Generalized additive models

Abstract: The increasing effort to develop and apply nonstationary models in hydrologic frequency analyses under changing environmental conditions can be frustrated when the additional uncertainty related to the model complexity is accounted for along with the sampling uncertainty. In order to show the practical implications and possible problems of using nonstationary models and provide critical guidelines, in this study we review the main tools developed in this field (such as nonstationary distribution functions, return periods, and risk of failure), highlighting advantages and disadvantages. The discussion is supported by three case studies that revise three illustrative examples reported in the scientific and technical literature referring to the Little Sugar Creek (at Charlotte, North Carolina), the Red River of the North (North Dakota/Minnesota), and the Assunpink Creek (at Trenton, New Jersey). The uncertainty of the results is assessed by complementing point estimates with confidence intervals (CIs) and emphasizing critical aspects such as the subjectivity affecting the choice of the models' structure.
Our results show that (1) nonstationary frequency analyses should not only be based on at-site time series but require additional information and detailed exploratory data analyses (EDA); (2) as nonstationary models imply that the time-varying model structure holds true for the entire future design life period, an appropriate modeling strategy requires that EDA identify a well-defined deterministic mechanism driving the examined process; (3) when the model structure cannot be inferred in a deductive manner and nonstationary models are fitted by inductive inference, model structure introduces an additional source of uncertainty, so that the resulting nonstationary models can provide no practical enhancement of the credibility and accuracy of the predicted extreme quantiles, whereas possible model misspecification can easily lead to physically inconsistent results; (4) when the model structure is uncertain, stationary models and a suitable assessment of the uncertainty accounting for possible temporal persistence should be retained as more theoretically coherent and reliable options for practical applications in real-world design and management problems; (5) a clear understanding of the actual probabilistic meaning of stationary and nonstationary return periods and risk of failure is required for correct risk assessment and communication.
The concept of return period in stationary univariate frequency analysis is prone to misconceptions and misuses that are well known but still widespread. In this study we highlight how nonstationary and multivariate extensions of this concept are affected by additional misconceptions, easily resulting in further ill-posed procedures and misleading conclusions. We also show that the concepts of probability of exceedance and risk of failure over a given design life period provide more coherent, general, and well-devised tools for risk assessment and communication.
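The risk of failure over a design life period mentioned above can be made concrete with the standard textbook formulae (a minimal sketch, not taken from the papers): under stationarity, the probability of at least one exceedance of the T-year event in M years is R = 1 - (1 - 1/T)^M; under nonstationarity, the constant annual exceedance probability 1/T is replaced by time-varying probabilities p_t, giving R = 1 - prod(1 - p_t).

```python
# Stationary vs. nonstationary risk of failure over a design life of M years
# (standard formulae; the increasing p_t trajectory below is hypothetical).
import numpy as np

def stationary_risk(T, M):
    """Probability of at least one exceedance in M years with constant 1/T per year."""
    return 1.0 - (1.0 - 1.0 / T) ** M

def nonstationary_risk(p):
    """Risk of failure for time-varying annual exceedance probabilities p_t."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.prod(1.0 - p)

M = 50
print(stationary_risk(100, M))          # ~0.395 for the 100-year event over 50 years

# Hypothetical linear increase of the annual exceedance probability 0.01 -> 0.02
p_t = np.linspace(0.01, 0.02, M)
print(nonstationary_risk(p_t))          # larger than the stationary risk
```

Note that the nonstationary risk reduces to the stationary one when all p_t are equal to 1/T, which is why the design-life risk of failure is the more general tool for risk communication.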
This study attempts to reconcile the conflicting results reported in the literature concerning the behavior of peak-over-threshold (POT) daily rainfall extremes and their distribution. By using two worldwide data sets, the impact of threshold selection and record length on the upper tail behavior of POT observations is investigated. The rainfall process is studied within the framework of generalized Pareto (GP) exceedances according to the classical extreme value theory (EVT), with particular attention paid to the study of the GP shape parameter, which controls the heaviness of the upper tail of the GP distribution. A twofold effect is recognized. First, as the threshold decreases, and nonextreme values are progressively incorporated in the POT samples, the variance of the GP shape parameter reduces and the mean converges to positive values denoting a tendency to heavy tail behavior. Simultaneously, the EVT asymptotic hypotheses are less and less realistic, and the GP asymptote tends to be replaced by the Weibull penultimate asymptote, whose upper tail is exponential but apparently heavy. Second, for a fixed high threshold, the variance of the GP shape parameter reduces as the record length (number of years) increases, and the mean values tend to be positive, thus denoting again the prevalence of heavy tail behavior. In both cases, i.e., threshold selection and record length effect, the heaviness of the tail may be ascribed to mechanisms such as the blend of extreme and nonextreme values, and fluctuations of the parent distributions. It is shown how these results provide a link between previous studies and pave the way for more comprehensive analyses which merge empirical, theoretical, and operational points of view.
This study also provides several ancillary results, such as a set of formulae to correct the bias of GP shape parameter estimates due to short record lengths while accounting for uncertainty, thus avoiding the systematic underestimation of extremes which results from the analysis of short time series.

Citation: Serinaldi, F., and C. G. Kilsby (2014), Rainfall extremes: Toward reconciliation after the battle of distributions, Water Resour. Res., 50, 336–352, doi:10.1002/2013WR014211.
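The threshold-sensitivity analysis described in this abstract can be sketched as follows (an assumed workflow on synthetic data, not the authors' code): select a threshold, take the excesses above it as the POT sample, fit a GP distribution, and inspect how the estimated shape parameter behaves as the threshold is lowered.

```python
# Sketch: fit a generalized Pareto distribution to POT excesses and inspect
# how the estimated shape parameter varies with the threshold quantile.
# The "daily rainfall" below is synthetic (stretched-exponential parent).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
daily_rain = rng.weibull(0.8, size=20_000) * 10.0   # synthetic daily rainfall (mm)

for q in (0.90, 0.95, 0.99):                        # candidate threshold quantiles
    u = np.quantile(daily_rain, q)
    excesses = daily_rain[daily_rain > u] - u       # POT excesses above threshold u
    shape, loc, scale = genpareto.fit(excesses, floc=0.0)  # fix location at 0
    print(f"threshold q={q:.2f}: n={excesses.size}, GP shape={shape:+.3f}")
```

A positive estimated shape indicates an apparently heavy upper tail; the abstract's point is that such estimates at low thresholds can reflect the Weibull penultimate approximation and the blend of extreme and nonextreme values rather than a truly heavy-tailed parent.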