The prediction of complex nonlinear dynamical systems with the help of machine learning techniques has become increasingly popular. In particular, reservoir computing has turned out to be a very promising approach, especially for reproducing the long-term properties of a nonlinear system. Yet a thorough statistical analysis of the forecast results has been missing. Using the Lorenz and Rössler systems, we statistically analyze the quality of prediction for different parametrizations, considering both the exact short-term prediction and the reproduction of the long-term properties (the "climate") of the system, as estimated by the correlation dimension and the largest Lyapunov exponent. We find that both short- and long-term predictions vary significantly among the realizations. Special care must therefore be taken in selecting good predictions, as realizations that deliver better short-term predictions also tend to resemble the long-term climate of the system more closely. Instead of relying only on purely random Erdős–Rényi networks, we also investigate the benefit of alternative network topologies, such as small-world or scale-free networks, and show what effect they have on the prediction quality. Our results suggest that the overall performance with respect to reproducing the climate of both the Lorenz and Rössler systems is worst for scale-free networks. For the Lorenz system there appears to be a slight benefit in using small-world networks, while for the Rössler system small-world and Erdős–Rényi networks performed equally well. In general, reservoir computing works for all network topologies investigated here.
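The reservoir-computing pipeline described above can be illustrated with a minimal echo-state-network sketch: a sparse Erdős–Rényi-style reservoir is driven by a Lorenz trajectory and a linear readout is fitted by ridge regression. All concrete choices here (network size, edge probability, spectral radius, ridge parameter, Euler integration) are illustrative assumptions, not the parametrizations studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate a Lorenz trajectory with simple Euler integration (illustrative only)
def lorenz_trajectory(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x = np.array([1.0, 1.0, 1.0])
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        traj[i] = x
    return traj

data = lorenz_trajectory(5000)

# Sparse random (Erdős–Rényi-style) reservoir with rescaled spectral radius
n_res, p_edge = 300, 0.02
A = (rng.random((n_res, n_res)) < p_edge) * rng.uniform(-1, 1, (n_res, n_res))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))       # spectral radius -> 0.9
W_in = rng.uniform(-0.5, 0.5, (n_res, 3))

# Drive the reservoir and collect its states
states = np.zeros((len(data), n_res))
r = np.zeros(n_res)
for t in range(len(data) - 1):
    r = np.tanh(A @ r + W_in @ data[t])
    states[t + 1] = r            # states[t+1] encodes inputs up to data[t]

# Linear readout via ridge regression: map state at t to the input at t
# (one-step-ahead prediction), discarding a transient "washout" phase
washout, beta_ridge = 500, 1e-6
X, Y = states[washout:], data[washout:]
W_out = np.linalg.solve(X.T @ X + beta_ridge * np.eye(n_res), X.T @ Y).T

pred = X @ W_out.T
rmse = np.sqrt(np.mean((pred - Y) ** 2))
```

For climate reproduction as analyzed in the paper, the trained readout would instead be fed back as input to generate free-running long-term trajectories.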
Pearson-correlation- and mutual-information-based complex networks of the daily returns of U.S. S&P 500 stocks between 1985 and 2015 have been constructed to investigate the mutual dependencies of the stocks and their nature. We show that both networks detect qualitative differences, especially during (recent) turbulent market periods, indicating strongly fluctuating interconnections between the stocks of different companies in changing economic environments. A measure for the strength of nonlinear dependencies is derived using surrogate data and leads to interesting observations during periods of financial market crises. Contrary to the expectation that dependencies reduce mainly to linear correlations during crises, we show that (at least in the 2008 crisis) nonlinear effects increase significantly. It turns out that the concept of centrality within a network could potentially be used as an early-warning indicator for abnormal market behavior, as we demonstrate with the example of the 2008 subprime mortgage crisis. Finally, we apply a Markowitz mean-variance portfolio optimization and integrate the measure of nonlinear dependencies to scale the investment exposure. This leads to significant outperformance compared to a fully invested portfolio.
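The correlation-network construction can be sketched in a few lines: compute the Pearson correlation matrix of the return series, threshold it into an adjacency matrix, and read off degree centrality. The synthetic one-factor returns, the number of stocks, and the threshold of 0.5 are illustrative stand-ins for the S&P 500 data and the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy return model: a common market factor plus idiosyncratic noise
n_stocks, n_days = 20, 500
market = rng.normal(0, 0.01, n_days)
returns = market + rng.normal(0, 0.005, (n_stocks, n_days))

# Pearson correlation matrix of the return series
corr = np.corrcoef(returns)

# Threshold into an unweighted network (illustrative cutoff of 0.5)
adj = (np.abs(corr) > 0.5).astype(int)
np.fill_diagonal(adj, 0)                 # no self-loops

# Degree centrality: number of connections per stock
degree = adj.sum(axis=1)
most_central = int(np.argmax(degree))
```

The mutual-information network used in the paper would replace `np.corrcoef` with a pairwise mutual-information estimate, capturing nonlinear dependencies as well.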
Reservoir computing is a very promising approach for the prediction of complex nonlinear dynamical systems. Besides capturing the exact short-term trajectories of nonlinear systems, it has also proved to reproduce their characteristic long-term properties very accurately. However, predictions do not always work equally well. It has been shown that both short- and long-term predictions vary significantly among different random realizations of the reservoir. In order to gain an understanding of when reservoir computing works best, we investigate some differential properties of the respective realization of the reservoir in a systematic way. We find that removing nodes that correspond to the largest weights in the output regression matrix reduces outliers and improves overall prediction quality. Moreover, this effectively reduces the network size and therefore increases computational efficiency. In addition, we use a nonlinear scaling factor in the hyperbolic tangent of the activation function, which adjusts the response of the activation function to the range of values of the input variables of the nodes. As a consequence, the number of outliers is reduced significantly, and both the short- and long-term prediction quality increase for the nonlinear systems investigated in this study. Our results demonstrate that a large optimization potential lies in the systematic refinement of the differential reservoir properties for a given dataset.
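The two refinements described above can be sketched on toy matrices: (1) rank the reservoir nodes by the magnitude of their output weights and drop the largest ones, and (2) scale the argument of the tanh activation. The matrix sizes, the number of removed nodes, and the scaling value are illustrative assumptions, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

n_res, n_out, n_remove = 100, 3, 5

W_out = rng.normal(0, 1, (n_out, n_res))   # trained readout matrix (toy stand-in)
A = rng.normal(0, 0.1, (n_res, n_res))     # reservoir adjacency (toy stand-in)

# (1) Score each node by its largest absolute readout weight and
#     remove the n_remove highest-scoring nodes from reservoir and readout
node_score = np.abs(W_out).max(axis=0)
keep = np.argsort(node_score)[:-n_remove]
A_pruned = A[np.ix_(keep, keep)]
W_out_pruned = W_out[:, keep]

# (2) Scaled tanh activation: a < 1 widens the near-linear response range,
#     matching the activation to the typical magnitude of the node inputs
def activation(x, a=0.5):
    return np.tanh(a * x)

r = rng.normal(0, 2, n_res - n_remove)     # toy pre-activation values
r_act = activation(r)
```

In a full pipeline, the readout would be retrained on the pruned reservoir after the removal step.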
The prediction of complex nonlinear dynamical systems with the help of machine learning has become increasingly popular in different areas of science. In particular, reservoir computers, also known as echo-state networks, have turned out to be a very powerful approach, especially for the reproduction of nonlinear systems. The reservoir, the key component of this method, is usually constructed as a sparse random network that serves as a memory for the system. In this work, we introduce block-diagonal reservoirs, which means that a reservoir can be composed of multiple smaller reservoirs, each with its own dynamics. Furthermore, we remove the randomness of the reservoir by using matrices of ones for the individual blocks. This breaks with the widespread interpretation of the reservoir as a single network. Using the examples of the Lorenz and Halvorsen systems, we analyze the performance of block-diagonal reservoirs and their sensitivity to hyperparameters. We find that the performance is comparable to that of sparse random networks and discuss the implications with regard to scalability, explainability, and hardware realizations of reservoir computers.
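Constructing such a non-random, block-diagonal reservoir is straightforward: place matrices of ones on the diagonal and rescale. Since a ones-block of size s has largest eigenvalue s, the whole matrix can be normalized by the largest block size. The block sizes and the target spectral radius below are illustrative choices, not taken from the paper.

```python
import numpy as np

def block_diag_ones(block_sizes, spectral_radius=0.9):
    """Block-diagonal reservoir matrix whose blocks are all-ones matrices."""
    n = sum(block_sizes)
    W = np.zeros((n, n))
    start = 0
    for size in block_sizes:
        W[start:start + size, start:start + size] = 1.0
        start += size
    # The spectral radius of the block-diagonal matrix is max(block_sizes),
    # so dividing by it sets the desired spectral radius exactly.
    return W * spectral_radius / max(block_sizes)

# Three independent sub-reservoirs of sizes 4, 3, and 3
W = block_diag_ones([4, 3, 3])
```

Because the blocks do not interact, each sub-reservoir evolves independently, which is what enables the parallel and hardware-friendly realizations discussed in the abstract.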
Identifying and describing the dynamics of complex systems is a central challenge in various areas of science, such as physics, finance, or climatology. While machine learning algorithms are increasingly overtaking traditional approaches, their inner workings, and thus the drivers of causality, remain elusive. In this paper, we analyze the causal structure of chaotic systems using Fourier-transform surrogates and three different inference techniques. While we confirm that Granger causality detects linear causality exclusively, transfer entropy and convergent cross-mapping indicate that causality is determined to a significant extent by nonlinear properties. For the Lorenz and Halvorsen systems, we find that this nonlinear contribution is independent of the strength of the nonlinear coupling. Furthermore, we show that a simple rationale and calibration algorithm suffice to extract the governing equations directly from the causal structure of the data. Finally, we illustrate the applicability of the framework to real-world dynamical systems using financial data before and after the COVID-19 outbreak. It turns out that the pandemic triggered a fundamental rupture in the world economy, which is reflected in the causal structure and the resulting equations.
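The Fourier-transform surrogates used above can be sketched as follows: the surrogate keeps the amplitude spectrum of the signal (and hence all linear autocorrelations) while randomizing the phases, which destroys nonlinear structure. Comparing causal measures on the original and surrogate data then isolates the nonlinear contribution. The test signal is an illustrative stand-in.

```python
import numpy as np

def ft_surrogate(x, rng):
    """Phase-randomized surrogate preserving the amplitude spectrum of x."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spectrum))
    phases[0] = 0.0                  # keep the DC component real
    if n % 2 == 0:
        phases[-1] = 0.0             # keep the Nyquist component real
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=n)

rng = np.random.default_rng(3)
x = np.sin(np.linspace(0, 20 * np.pi, 1024)) + rng.normal(0, 0.1, 1024)
s = ft_surrogate(x, rng)
```

By construction, `s` has the same power spectrum as `x` but a scrambled time course, so any causal signal that survives in the surrogate must be attributable to linear correlations alone.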