Abstract. Extreme hydro-meteorological events have become the focus of an increasing number of studies over the last decade. Owing to the complexity of the spatial pattern of changes in precipitation processes, it remains difficult to establish a clear view of how precipitation has changed and how it will change in the future. In the present study, changes in extreme precipitation and streamflow processes in the Dongjiang River Basin in southern China are investigated with several nonparametric methods, including one method (the Mann-Kendall test) for detecting trends and three methods (the Kolmogorov-Smirnov test, Levene's test and the quantile test) for detecting changes in probability distribution. Little change is observed in annual extreme precipitation in terms of various indices, but some significant changes are found in the precipitation processes on a monthly basis, which indicates that, when detecting climate changes, seasonal variations in extreme events should be considered in addition to annual indices. Despite the lack of change in the annual extreme precipitation series, significant changes are detected in several annual extreme flood-flow and low-flow series, mainly at the stations along the main channel of the Dongjiang River, which are affected significantly by the operation of several major reservoirs. To assess the reliability of these results, the power of the three nonparametric distribution-change tests is assessed by Monte Carlo simulation. The simulation results show that, while all three methods work well for detecting changes between two groups of data with large sample sizes (e.g., over 200 points in each group) and large differences in distribution parameters (e.g., over a 100% increase in the scale parameter of a Gamma distribution), none of them is powerful enough for small data sets (e.g., fewer than 100 points) and small differences in distribution parameters (e.g., a 50% increase in the scale parameter of a Gamma distribution).
The results of the present study raise concerns about the robustness of statistical change-detection methods, demonstrate the necessity of combining different methods, including both exploratory and quantitative statistical methods, and emphasize the need for physically sound explanations when applying statistical tests to detect changes.
Correspondence to: W. Wang (w.wang@126.com)
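To illustrate the kind of Monte Carlo power assessment described in this abstract, the following Python sketch estimates the power of the two-sample Kolmogorov-Smirnov test for detecting a change in the scale parameter of a Gamma distribution. The function name, the shape parameter, and the trial counts are illustrative assumptions, not taken from the study itself.

```python
import numpy as np
from scipy import stats

def ks_power(n, scale_ratio, n_trials=500, alpha=0.05, seed=0):
    """Estimate the power of the two-sample KS test by Monte Carlo:
    draw two Gamma samples differing only in scale, and count how
    often the test rejects at significance level alpha."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_trials):
        x = rng.gamma(shape=2.0, scale=1.0, size=n)
        y = rng.gamma(shape=2.0, scale=scale_ratio, size=n)
        if stats.ks_2samp(x, y).pvalue < alpha:
            rejections += 1
    return rejections / n_trials

# Large samples with a 100% scale increase vs. small samples
# with a 50% scale increase, echoing the two regimes above.
power_large = ks_power(200, 2.0)
power_small = ks_power(50, 1.5)
```

Comparing `power_large` and `power_small` reproduces the qualitative pattern reported in the abstract: the test is powerful only when both the sample size and the parameter difference are large.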
Abstract. Conventional streamflow models operate under the assumption of constant variance or season-dependent variances (e.g., ARMA (AutoRegressive Moving Average) models for deseasonalized streamflow series and PARMA (Periodic AutoRegressive Moving Average) models for seasonal streamflow series). However, with the McLeod-Li test and Engle's Lagrange Multiplier test, clear evidence is found for the existence of autoregressive conditional heteroskedasticity (i.e. the ARCH (AutoRegressive Conditional Heteroskedasticity) effect), a nonlinear phenomenon of the variance behaviour, in the residual series from linear models fitted to daily and monthly streamflow processes of the upper Yellow River, China. It is shown that the major cause of the ARCH effect is the seasonal variation in the variance of the residual series. However, while the seasonal variation in variance can fully explain the ARCH effect for monthly streamflow, it is only a partial explanation for daily flow. It is also shown that, while the periodic autoregressive moving average model is adequate for modelling monthly flows, no model is adequate for modelling daily streamflow processes, because none of the conventional time series models takes into account the seasonal variation in variance, as well as the ARCH effect in the residuals. Therefore, an ARMA-GARCH (Generalized AutoRegressive Conditional Heteroskedasticity) error model is proposed to capture the ARCH effect present in daily streamflow series, as well as to preserve the seasonal variation in variance in the residuals. The ARMA-GARCH error model combines an ARMA model for modelling the mean behaviour and a GARCH model for modelling the variance behaviour of the residuals from the ARMA model. Since the GARCH model is not widely used in statistical hydrology, this work can be a useful addition to the statistical modelling of daily streamflow processes for the hydrological community.
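Engle's Lagrange Multiplier test mentioned in this abstract can be sketched in a few lines of Python: regress squared residuals on their own lags; under the null of no ARCH, n·R² is asymptotically chi-squared. The implementation below is a generic textbook version, not the authors' code, and the GARCH(1,1) parameters used to generate synthetic residuals are chosen purely for illustration.

```python
import numpy as np
from scipy import stats

def engle_lm_test(resid, nlags=10):
    """Engle's LM test for ARCH effects: regress e_t^2 on its first
    nlags lags; under H0 (no ARCH), n*R^2 ~ chi2(nlags)."""
    e2 = np.asarray(resid) ** 2
    y = e2[nlags:]
    X = np.column_stack(
        [np.ones(len(y))]
        + [e2[nlags - j : len(e2) - j] for j in range(1, nlags + 1)]
    )
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = np.sum((y - X @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    lm = len(y) * (1.0 - ss_res / ss_tot)
    return lm, stats.chi2.sf(lm, nlags)

# Synthetic residuals with GARCH(1,1) conditional variance
# (omega, alpha, beta are illustrative values only).
rng = np.random.default_rng(0)
n, omega, alpha, beta_g = 2000, 0.1, 0.3, 0.6
eps = np.empty(n)
sigma2 = np.empty(n)
sigma2[0], eps[0] = omega / (1 - alpha - beta_g), 0.0
for t in range(1, n):
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta_g * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

lm_stat, p_value = engle_lm_test(eps)  # small p-value => ARCH effect
```

A small p-value on the residuals of a fitted ARMA model is the kind of evidence the abstract refers to when it reports an ARCH effect in daily streamflow residuals.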
Abstract. Lo's modified rescaled adjusted range test (R/S test) (Lo, 1991), the GPH test (Geweke and Porter-Hudak, 1983) and two approximate maximum likelihood estimation methods, i.e., Whittle's estimator (W-MLE) and the estimator implemented in S-Plus (S-MLE) based on the algorithm of Haslett and Raftery (1989), are evaluated through intensive Monte Carlo simulations for detecting the existence of long-memory. It is shown that it is difficult to find an appropriate lag q for Lo's test for different short-memory autoregressive (AR) and fractionally integrated autoregressive moving average (ARFIMA) processes, which makes the use of Lo's test very tricky. In general, the GPH test outperforms Lo's test, but in cases where strong short-range dependence exists (e.g., AR(1) processes with φ=0.95 or even 0.99), the GPH test becomes useless, even for time series of large data size. On the other hand, the estimates of d given by S-MLE and W-MLE seem to give a good indication of whether or not long-memory is present. The simulation results show that data size has a significant impact on the power of all four methods, because the availability of larger samples allows one to inspect the asymptotic properties better. Generally, the power of Lo's test and the GPH test increases with increasing data size, and the estimates of d obtained with the GPH method, S-MLE and W-MLE converge with increasing data size. If no sufficiently large data set is available, we should be aware of the possible bias of the estimates. The four methods are applied to daily average discharge series recorded at 31 gauging stations with different drainage areas in eight river basins in Europe, Canada and the USA to detect the existence of long-memory. The results show that the presence of long-memory in 29 daily series is confirmed by at least three methods, whereas the other two series are indicated to be long-memory processes by two methods.
The intensity of long-memory in daily streamflow processes has only a very weak positive relationship with the scale of the watershed.
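The GPH estimator evaluated in this abstract can be sketched compactly: regress the log-periodogram at the lowest Fourier frequencies on log(4 sin²(ω/2)); the negative of the slope estimates the fractional-differencing parameter d. The sketch below is a minimal generic implementation, with the common bandwidth choice m = √n assumed rather than taken from the paper.

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Geweke-Porter-Hudak log-periodogram estimate of the
    fractional-differencing parameter d, using the lowest
    m = n**power Fourier frequencies."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = int(n ** power)
    x = x - x.mean()
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    # Periodogram at the first m Fourier frequencies
    dft = np.fft.fft(x)[1 : m + 1]
    periodogram = np.abs(dft) ** 2 / (2 * np.pi * n)
    regressor = np.log(4 * np.sin(freqs / 2) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope  # estimate of d

rng = np.random.default_rng(1)
noise = rng.standard_normal(4096)   # short-memory: d near 0
walk = np.cumsum(noise)             # strongly persistent: d near 1
d_noise = gph_estimate(noise)
d_walk = gph_estimate(walk)
```

For white noise the estimate should sit near 0 and for a random walk near 1, which is the kind of separation the abstract relies on when classifying streamflow series as long-memory processes.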
This paper provides a comparison of four methodologies that assist criminal investigators in homicide investigations. The Person of Interest Priority Assessment Tool, Trace Investigate and Evaluate, Rasterfahndung, and Analysis of Competing Hypotheses are compared on their performance in the collection, prioritization, and elimination phases of homicide cases in today's digital era, using three recent Dutch homicide cases. The use of categories during collection can assist criminal investigators in including the perpetrator in the investigation early; however, in this digital era, the number of persons of interest becomes too large to handle manually. All four methodologies use techniques to assign weight to pieces of evidence; further research is required to evaluate the effectiveness of these techniques when the number of pieces of evidence explodes. The use of pre-set elimination categories shows the least promising results, leaving most persons of interest not eliminated by the currently used methodologies.