Long-range power-law correlations have been reported recently for DNA sequences containing noncoding regions. We address the question of whether such correlations may be a trivial consequence of the known mosaic structure ("patchiness") of DNA. We analyze two classes of controls consisting of patchy nucleotide sequences generated by different algorithms--one without and one with long-range power-law correlations. Although both types of sequences are highly heterogeneous, they are quantitatively distinguishable by an alternative fluctuation analysis method that differentiates local patchiness from long-range correlations. Application of this analysis to selected DNA sequences demonstrates that patchiness is not sufficient to account for long-range correlation properties.
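The abstract does not specify the control-generation algorithms, so the following is only a hypothetical sketch of what the first class of controls (patchy but without long-range correlations) could look like: a sequence built from blocks of geometrically distributed length, each with its own pyrimidine bias. Because the mean patch length is finite, the sequence is heterogeneous yet carries only short-range correlations. All parameter values and names here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def patchy_sequence(n, mean_patch=200, biases=(0.3, 0.7)):
    """Concatenate patches of geometrically distributed length, each with
    its own pyrimidine probability drawn from `biases`. A finite mean
    patch length yields patchiness without long-range correlations."""
    seq = np.empty(n, dtype=int)
    i = 0
    while i < n:
        length = rng.geometric(1.0 / mean_patch)       # patch length >= 1
        p = rng.choice(biases)                         # this patch's pyrimidine bias
        block = rng.random(min(length, n - i)) < p
        seq[i:i + block.size] = np.where(block, 1, -1) # +1 pyrimidine, -1 purine
        i += block.size
    return seq

u = patchy_sequence(100_000)
```

A control with genuine long-range correlations would instead require patch lengths drawn from a heavy-tailed (infinite-mean) distribution or a correlated generator.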
The healthy heartbeat is traditionally thought to be regulated according to the classical principle of homeostasis, whereby physiologic systems operate to reduce variability and achieve an equilibrium-like state [Physiol. Rev. 9, 399-431 (1929)]. However, recent studies [Phys. Rev. Lett. 70, 1343-1346 (1993); Fractals in Biology and Medicine (Birkhauser-Verlag, Basel, 1994), pp. 55-65] reveal that under normal conditions, beat-to-beat fluctuations in heart rate display the kind of long-range correlations typically exhibited by dynamical systems far from equilibrium [Phys. Rev. Lett. 59, 381-384 (1987)]. In contrast, heart rate time series from patients with severe congestive heart failure show a breakdown of this long-range correlation behavior. We describe a new method--detrended fluctuation analysis (DFA)--for quantifying this correlation property in nonstationary physiological time series. Application of this technique shows evidence for a crossover phenomenon associated with a change in short- and long-range scaling exponents. This method may be of use in distinguishing healthy from pathologic data sets based on differences in these scaling properties.
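As a rough illustration of the DFA procedure named above, here is a minimal NumPy sketch: integrate the mean-subtracted series, subtract a local least-squares linear trend within boxes of size n, compute the root-mean-square fluctuation F(n), and read the scaling exponent alpha from the slope of log F(n) versus log n. The function name, box sizes, and white-noise test signal are ours, not from the paper.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: return F(n) for each box size n."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for n in scales:
        n_boxes = len(y) // n
        ms = []
        t = np.arange(n)
        for k in range(n_boxes):
            seg = y[k * n:(k + 1) * n]
            coef = np.polyfit(t, seg, 1)       # local linear trend
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.asarray(F)

# alpha ~ 0.5 for uncorrelated noise, ~1.0 for 1/f-like fluctuations.
scales = np.unique(np.logspace(2, 10, 20, base=2).astype(int))
x = np.random.randn(2 ** 14)                   # white-noise test signal
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
```

A crossover such as the one reported in the abstract would appear as two distinct slopes in the log-log plot, one at short and one at long box sizes.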
According to classical concepts of physiologic control, healthy systems are self-regulated to reduce variability and maintain physiologic constancy. Contrary to the predictions of homeostasis, however, the output of a wide variety of systems, such as the normal human heartbeat, fluctuates in a complex manner, even under resting conditions. Scaling techniques adapted from statistical physics reveal the presence of long-range, power-law correlations, as part of multifractal cascades operating over a wide range of time scales. These scaling properties suggest that the nonlinear regulatory systems are operating far from equilibrium, and that maintaining constancy is not the goal of physiologic control. In contrast, for subjects at high risk of sudden death (including those with heart failure), fractal organization, along with certain nonlinear interactions, breaks down. Application of fractal analysis may provide new approaches to assessing cardiac risk and forecasting sudden cardiac death, as well as to monitoring the aging process. Similar approaches show promise in assessing other regulatory systems, such as human gait control in health and disease. Elucidating the fractal and nonlinear mechanisms involved in physiologic control and complex signaling networks is emerging as a major challenge in the postgenomic era.

A hallmark of physiologic systems is their extraordinary complexity. The nonstationarity and nonlinearity of signals (Fig. 1) generated by living organisms defy traditional mechanistic approaches based on homeostasis and conventional biostatistical methodologies. Recognition that physiologic time series contain "hidden information" has fueled growing interest in applying concepts and techniques from statistical physics, including chaos theory, to a wide range of biomedical problems from molecular to organismic levels (1, 2).

This presentation describes one area of investigation that has engaged our collaborative attention, namely, fractal analysis of physiologic time series in health and disease. The discussion will focus primarily on certain features of the human heartbeat, one important example of complex physiologic fluctuations. The dynamics of another physiologic control system, human gait, is also briefly discussed. Recognizing that this topic represents only one selected aspect of the broad and rapidly expanding applications of complexity theory to biomedicine (Table 1), readers are referred to a number of useful reviews and references therein (1, 3-10).

A motivating problem for our work is depicted in Fig. 1, which presents a dynamical self-test. Shown are 30-min heart rate time series from four subjects. Only one is from a healthy individual; the other three are from patients with life-threatening forms of heart disease. The problem is to identify the normal record. The (perhaps nonintuitive) answer to this "test" is given in the figure caption. Beyond its obvious diagnostic import, the problem of classifying temporal assays of integrated cardiac physiology has implications for understanding a...
DNA sequences have been analysed using models, such as an n-step Markov chain, that incorporate the possibility of short-range nucleotide correlations. We propose here a method for studying the stochastic properties of nucleotide sequences by constructing a 1:1 map of the nucleotide sequence onto a walk, which we term a 'DNA walk'. We then use the mapping to provide a quantitative measure of the correlation between nucleotides over long distances along the DNA chain. Thus we uncover in the nucleotide sequence a remarkably long-range power law correlation that implies a new scale-invariant property of DNA. We find such long-range correlations in intron-containing genes and in nontranscribed regulatory DNA sequences, but not in complementary DNA sequences or intron-less genes.
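A minimal Python sketch of the DNA-walk construction described above: pyrimidines map to +1 steps and purines to -1 (one common convention; the specific rule, toy sequence, and window sizes below are our illustrative choices), and F(l) is the root-mean-square fluctuation of the walk's increments over a distance l.

```python
import numpy as np

def dna_walk(seq):
    """Map a nucleotide string onto walk steps: +1 for pyrimidines (C, T),
    -1 for purines (A, G); return the cumulative displacement y(l)."""
    steps = np.array([1 if b in "CT" else -1 for b in seq.upper()])
    return np.cumsum(steps)

def fluctuation(y, l):
    """Root-mean-square fluctuation F(l) of the increments
    dy = y(l0 + l) - y(l0), averaged over all starting points l0."""
    dy = y[l:] - y[:-l]
    return np.sqrt(np.mean(dy ** 2) - np.mean(dy) ** 2)

seq = "ATGCCGTA" * 1000                    # toy sequence (illustrative only)
y = dna_walk(seq)
F = [fluctuation(y, l) for l in (4, 16, 64, 256)]
# F(l) ~ l**alpha; alpha = 1/2 for an uncorrelated walk, while
# alpha > 1/2 signals the long-range power-law correlations at issue.
```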
We find that the successive increments in the cardiac beat-to-beat intervals of healthy subjects display scale-invariant, long-range anticorrelations (up to 10^4 heartbeats). Furthermore, we find that the histogram of the heartbeat interval increments is well described by a Lévy stable distribution. For a group of subjects with severe heart disease, we find that the distribution is unchanged but the long-range correlations vanish. Therefore, the different scaling behavior in health and disease must relate to the underlying dynamics of the heartbeat.
Determining trend and implementing detrending operations are important steps in data analysis. Yet there is no precise definition of "trend" nor any logical algorithm for extracting it. As a result, various ad hoc extrinsic methods have been used to determine trend and to facilitate a detrending operation. In this article, a simple and logical definition of trend is given for any nonlinear and nonstationary time series as an intrinsically determined monotonic function within a certain temporal span (most often that of the data span), or a function in which there can be at most one extremum within that temporal span. Being intrinsic, the method to derive the trend has to be adaptive. This definition of trend also presumes the existence of a natural time scale. All these requirements suggest the Empirical Mode Decomposition (EMD) method as the logical choice of algorithm for extracting various trends from a data set. Once the trend is determined, the corresponding detrending operation can be implemented. With this definition of trend, the variability of the data on various time scales also can be derived naturally. Climate data are used to illustrate the determination of the intrinsic trend and natural variability.

Keywords: Empirical Mode Decomposition | global warming | intrinsic mode function | intrinsic trend | trend time scale

The terms "trend" and "detrending" frequently are encountered in data analysis. In many applications, such as climatic data analyses, the trend is one of the most critical quantities sought. In other applications, such as in computing the correlation function and in spectral analysis, it is necessary to remove the trend from the data, a procedure known as detrending, lest the result be overwhelmed by the nonzero mean and the trend terms; therefore, detrending often is a necessary step before meaningful spectral results can be obtained. As a result, identifying the trend and detrending the data are both of great interest and importance in data analysis.

Because the concept of a trend in a data set seems clearly self-evident, most data analysts take it for granted, and only a few bother to examine its essence or to define it rigorously for the purpose of data analysis. For example, in statistics and in numerous scientific analyses, the trend often is taken as the tendency over the whole data domain that presumably will continue into the future when new observations become available. In other cases, the trend can be the residue of the data after removing the components with frequency higher than a threshold frequency (1). A casual Internet search, for example, currently returns more than 12 million items related to trend and detrending. However, a rigorous and satisfactory definition of either the trend of nonlinear nonstationary data or the corresponding detrending operation still is lacking, which leads to the awkward reality that the determination of trend and detrending often are ad hoc operations. Because many of the difficulties concerning trend stem from the lack of...
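As a sketch of the trend definition above, assuming the third-party PyEMD package (pip name EMD-signal; not part of the paper): decompose a series into intrinsic mode functions and take the final residue, which has at most one extremum over the data span, as the intrinsic trend. The synthetic "climate-like" record below is purely illustrative.

```python
import numpy as np
from PyEMD import EMD          # third-party package: pip install EMD-signal

# Synthetic record: slow warming + decadal oscillation + noise (illustrative)
t = np.linspace(0, 50, 2000)
x = 0.02 * t + np.sin(2 * np.pi * t / 11) + 0.3 * np.random.randn(t.size)

emd = EMD()
emd(x)                                     # sift into intrinsic mode functions
imfs, residue = emd.get_imfs_and_residue()

trend = residue                            # intrinsic trend: at most one extremum
detrended = x - trend

# Variability on successively longer time scales is obtained naturally
# as partial sums of the IMFs, from the fastest mode to the slowest.
```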
We study the statistical properties of volatility, measured by locally averaging, over a time window T, the absolute value of price changes over a short time interval Δt. We analyze the S&P 500 stock index for the 13-year period Jan. 1984 to Dec. 1996. We find that the cumulative distribution of the volatility is consistent with power-law asymptotic behavior, characterized by an exponent μ ≈ 3, similar to what is found for the distribution of price changes. The volatility distribution retains the same functional form for a range of values of T. Further, we study the volatility correlations by using power spectrum analysis. Both methods support a power-law decay of the correlation function and give consistent estimates of the relevant scaling exponents. Also, both methods show the presence of a crossover at approximately 1.5 days. In addition, we extend these results to the volatility of individual companies by analyzing a database comprising all trades for the largest 500 U.S. companies over the two-year period Jan. 1994 to Dec. 1995.
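A hedged sketch of the volatility measure described above: average the absolute log-price changes over non-overlapping windows of T intervals, then estimate the tail exponent μ of the cumulative distribution with a simple Hill-style estimator. The synthetic price series, window length, and tail cutoff are our assumptions, not the paper's data or method of fit.

```python
import numpy as np

def volatility(prices, T):
    """Local volatility: mean absolute log-price change over
    non-overlapping windows of T short intervals."""
    g = np.abs(np.diff(np.log(prices)))    # |price change| per interval
    n = len(g) // T
    return g[:n * T].reshape(n, T).mean(axis=1)

def tail_exponent(v, q=0.95):
    """Hill-style estimate of mu in P(V > v) ~ v**(-mu),
    using the top (1 - q) fraction of the sample."""
    tail = np.sort(v)[int(q * len(v)):]
    return 1.0 / np.mean(np.log(tail / tail[0]))

# Heavy-tailed toy returns (Student-t, 3 d.o.f.) give mu near 3.
prices = np.exp(np.cumsum(0.001 * np.random.standard_t(3, 100_000)))
v = volatility(prices, T=30)
mu = tail_exponent(v)
```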