Abstract. Grebogi, Ott and Yorke (Phys. Rev. A 38(7), 1988) investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ~ ε^(−d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits than simple sequential switching or the absence of switching. To illustrate the application of switching, we first introduce a novel generalization of the Logistic map that exhibits Robust Chaos (the absence of attracting periodic orbits). We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps, which is found to successfully pass stringent statistical tests of randomness.
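The switching idea can be sketched minimally as follows. This is illustrative only: the two maps (classic logistic and tent), the seeds, and the thresholding scheme below are assumptions, and the paper's Robust Chaos generalization of the Logistic map is not reproduced here.

```python
# Illustrative sketch: chaotically switching between two chaotic maps
# and thresholding the orbit to produce bits. Map choices, seeds and
# the bit-extraction rule are assumptions, not the paper's construction.

def logistic(x):
    return 4.0 * x * (1.0 - x)

def tent(x):
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def switched_bits(n, x0=0.3141, s0=0.2718):
    """Generate n bits by chaotically switching between two maps.

    An auxiliary chaotic orbit s decides which map drives the main
    orbit x at each step; bits are obtained by thresholding x at 0.5.
    """
    x, s = x0, s0
    bits = []
    for _ in range(n):
        s = logistic(s)                      # chaotic switching signal
        x = logistic(x) if s < 0.5 else tent(x)
        bits.append(1 if x > 0.5 else 0)
    return bits
```

The switching signal is itself chaotic rather than periodic, which is the property the abstract identifies as lengthening the average period of the finite-precision orbit.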
Causality testing methods are widely used across scientific disciplines. Model-free methods for causality estimation are very useful, as the underlying model generating the data is often unknown. However, existing model-free measures assume separability of cause and effect at the level of individual samples of measurements and, unlike model-based methods, do not perform any intervention to learn causal relationships. These measures can thus capture only causality that is associational, i.e., inferred from the co-occurrence of 'cause' and 'effect' in well-separated samples. In real-world processes, 'cause' and 'effect' are often inherently inseparable, or become inseparable in the acquired measurements. We propose a novel measure that uses an adaptive interventional scheme to capture causality which is not merely associational. The scheme is based on characterizing the complexities associated with the dynamical evolution of processes on short windows of measurements. The formulated measure, Compression-Complexity Causality, is rigorously tested on simulated and real datasets, and its performance is compared with that of existing measures such as Granger Causality and Transfer Entropy. The proposed measure is robust to the presence of noise, long-term memory, filtering and decimation, low temporal resolution (including aliasing), non-uniform sampling, finite-length signals and the presence of common driving variables. Our measure outperforms existing state-of-the-art measures, establishing itself as an effective tool for causality testing in real-world applications.
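For context on the baseline the abstract compares against, here is a minimal Granger-style check (not the proposed Compression-Complexity Causality measure): does adding past values of X reduce the prediction error of Y? The simulated system, coefficients and seed are illustrative assumptions.

```python
import numpy as np

# Simulated unidirectional coupling: X drives Y with a one-step lag.
rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def residual_var(target, regressors):
    """Ordinary least-squares fit; return the variance of the residuals."""
    A = np.column_stack(regressors + [np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.var(target - A @ coef)

# Restricted model: Y from its own past; full model: Y from past of Y and X.
gc_x_to_y = np.log(residual_var(y[1:], [y[:-1]])
                   / residual_var(y[1:], [y[:-1], x[:-1]]))
# Reverse direction: Y should carry no extra information about future X.
gc_y_to_x = np.log(residual_var(x[1:], [x[:-1]])
                   / residual_var(x[1:], [x[:-1], y[:-1]]))
```

Here `gc_x_to_y` comes out large and `gc_y_to_x` near zero, recovering the simulated direction. The abstract's point is that such purely associational, sample-separable tests fail in the settings (noise, filtering, low temporal resolution, inseparability) where the proposed interventional measure remains robust.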
Shannon entropy has been extensively used for characterizing the complexity of time series arising from chaotic dynamical systems and from stochastic processes such as Markov chains. However, for short and noisy time series, Shannon entropy performs poorly. Complexity measures based on lossless compression algorithms are a good substitute in such scenarios. We evaluate the performance of two such Compression-Complexity Measures, namely Lempel-Ziv complexity (LZ) and Effort-To-Compress (ETC), on short time series from chaotic dynamical systems in the presence of noise. Both LZ and ETC outperform Shannon entropy (H) in accurately characterizing the dynamical complexity of such systems. For very short binary sequences (which arise in neuroscience applications), ETC has a larger number of distinct complexity values than LZ and H, thus enabling a finer resolution. For two-state ergodic Markov chains, we empirically show that ETC converges to a steady-state value faster than LZ. Compression-Complexity Measures are promising for applications which involve short and noisy time series.
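The ETC idea can be sketched as follows, assuming the non-sequential recursive pair substitution (NSRPS) scheme on integer-coded sequences; tie-breaking and pair-counting details here are simplifications and may differ from the authors' exact definition.

```python
from collections import Counter

def etc(seq):
    """Effort-To-Compress sketch via pair substitution (NSRPS).

    Repeatedly replace the most frequent adjacent pair with a fresh
    symbol; ETC is the number of substitution steps needed until the
    sequence is constant or has length 1. Input: non-negative ints.
    """
    seq = list(seq)
    steps = 0
    while len(seq) > 1 and len(set(seq)) > 1:
        # Count adjacent pairs (overlap handling kept simple here).
        pairs = Counter(zip(seq, seq[1:]))
        best = max(pairs, key=pairs.get)
        new_symbol = max(seq) + 1            # fresh symbol
        out, i = [], 0
        while i < len(seq):
            if i < len(seq) - 1 and (seq[i], seq[i + 1]) == best:
                out.append(new_symbol)
                i += 2                        # consume the replaced pair
            else:
                out.append(seq[i])
                i += 1
        seq = out
        steps += 1
    return steps
```

A constant sequence needs zero steps, a period-2 sequence like 0101... compresses in one step, while aperiodic sequences require more, which is what gives ETC its fine-grained resolution on very short binary strings.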
Inspired by the chaotic firing of neurons in the brain, we propose ChaosNet, a novel chaos-based artificial neural network architecture for classification tasks. ChaosNet is built using layers of neurons, each of which is a 1D chaotic map known as the Generalized Lüroth Series (GLS), which has been shown in earlier works to possess very useful properties for compression, cryptography and for computing XOR and other logical operations. In this work, we design a novel learning algorithm on ChaosNet that exploits the topological transitivity property of the chaotic GLS neurons. The proposed learning algorithm gives consistently good performance accuracy in a number of classification tasks on well-known publicly available datasets with very limited training samples. Even with as few as 7 (or fewer) training samples/class (which accounts for less than 0.05% of the total available data), ChaosNet yields performance accuracies in the range 73.89%–98.33%. We demonstrate the robustness of ChaosNet to additive parameter noise and also provide an example implementation of a 2-layer ChaosNet for enhancing classification accuracy. We envisage the development of several other novel learning algorithms on ChaosNet in the near future.

Chaos has been empirically found in the brain at several spatio-temporal scales [1,2]. In fact, individual neurons in the brain are known to exhibit chaotic bursting activity, and several neuronal models, such as the Hindmarsh-Rose neuron model, exhibit complex chaotic dynamics [3]. Though Artificial Neural Networks (ANNs) such as Recurrent Neural Networks exhibit chaos, to our knowledge there have been no successful attempts at building an ANN for classification tasks that is composed entirely of neurons which are individually chaotic. Building on our earlier research, in this work we propose ChaosNet, an ANN built out of neurons, each of which is a 1D chaotic map known as the Generalized Lüroth Series (GLS).
GLS has been shown to possess salient properties such as the ability to encode and decode information losslessly with Shannon optimality, the computation of logical operations (XOR, AND, etc.), the universal approximation property, and ergodicity (mixing) for cryptography applications. In this work, ChaosNet exploits the topological transitivity property of chaotic GLS neurons for classification tasks, achieving state-of-the-art accuracies in the low training sample regime. This work, inspired by the chaotic nature of neurons in the brain, demonstrates the unreasonable effectiveness of chaos and its properties for machine learning. It also paves the way for designing and implementing other novel learning algorithms on the ChaosNet architecture.
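One way topological transitivity can be exploited is sketched below, with a skew tent map standing in for a GLS neuron. All parameter values (initial activity q, skew b, neighbourhood radius eps) are illustrative assumptions, and this is only a sketch of the firing-time idea, not the exact ChaosNet learning algorithm.

```python
# Sketch of a chaotic neuron's "firing time" feature. The neuron
# iterates a skew tent map from a fixed initial activity q until the
# orbit enters an eps-neighbourhood of the normalized stimulus; by
# topological transitivity this happens in finite time for almost
# every starting point. The iteration count is the extracted feature.

def gls_map(x, b=0.47):
    """Skew tent map on [0, 1) with skew parameter b (assumed value)."""
    return x / b if x < b else (1.0 - x) / (1.0 - b)

def firing_time(stimulus, q=0.34, b=0.47, eps=0.01, max_iter=10000):
    """Iterations until the orbit of q lands within eps of the stimulus."""
    x = q
    for n in range(max_iter):
        if abs(x - stimulus) < eps:
            return n
        x = gls_map(x, b)
    return max_iter  # did not fire within the budget

# A feature vector is the per-neuron firing time for each normalized
# input dimension (example stimuli below are arbitrary).
features = [firing_time(s) for s in (0.12, 0.55, 0.91)]
```

Because the map is chaotic and transitive, different stimuli produce systematically different firing times, and these firing-time features can then be fed to a simple classifier, which is consistent with the low-training-sample regime described above.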