The tools of dynamical systems theory are having an increasing impact on our understanding of patterns of neural activity. In this tutorial chapter we describe how to build tractable tissue-level models that maintain a strong link with biophysical reality. These models typically take the form of nonlinear integro-differential equations. Their non-local nature has led to the development of a set of analytical and numerical tools for the study of spatiotemporal patterns, based around natural extensions of those used for local differential equation models. We present an overview of these techniques, covering Turing instability analysis, amplitude equations, and travelling waves. Finally, we address inverse problems for neural fields, in which synaptic weight kernels are trained from prescribed field dynamics.
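As a concrete illustration of the Turing instability analysis mentioned above, the following sketch evaluates the dispersion relation λ(k) = −1 + f′(u₀)·ŵ(k) obtained by linearising a one-dimensional Amari-type field u_t = −u + w ∗ f(u) about a homogeneous steady state. The Mexican-hat kernel, its amplitudes and widths, and the sigmoid slope f′(u₀) = 1 are all illustrative assumptions, not values from the chapter:

```python
import numpy as np

def w_hat(k, a1=2.0, s1=1.0, a2=1.5, s2=2.0):
    # Fourier transform of a Mexican-hat kernel (difference of Gaussians):
    # w(x) = a1*exp(-x^2/s1^2) - a2*exp(-x^2/s2^2)
    return (a1 * s1 * np.sqrt(np.pi) * np.exp(-(s1 * k) ** 2 / 4)
            - a2 * s2 * np.sqrt(np.pi) * np.exp(-(s2 * k) ** 2 / 4))

def dispersion(k, slope=1.0):
    # Linearising u_t = -u + w * f(u) about a homogeneous state gives
    # lambda(k) = -1 + f'(u0) * w_hat(k); slope plays the role of f'(u0).
    return -1.0 + slope * w_hat(k)

k = np.linspace(0, 5, 1000)
lam = dispersion(k)
kc = k[np.argmax(lam)]          # wavenumber of the fastest-growing mode
print(f"max growth rate {lam.max():.3f} at k = {kc:.3f}")
```

A positive maximum of λ(k) at nonzero k signals a pattern-forming Turing instability with characteristic wavenumber kc.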
Event-related brain potentials (ERPs) are important neural correlates of cognitive processes. In the domain of language processing, the N400 and P600 reflect lexical-semantic integration and syntactic processing problems, respectively. We suggest an interpretation of these markers in terms of dynamical systems theory and present two nonlinear dynamical models for syntactic computations in which different processing strategies correspond to functionally different regions in the system's phase space.
The first goal of this work is to study solvability of the neural field equation, which is an integro-differential equation in m+1 dimensions. In particular, we show the existence of global solutions for smooth activation functions f with values in [0, 1] and L¹ kernels w via the Banach fixed-point theorem. For a Heaviside-type activation function f we show that this approach fails. However, with slightly more regularity on the kernel function w (we use Hölder continuity with respect to the argument x) we can employ compactness arguments, integral equation techniques, and the results for smooth nonlinearity functions to obtain a global existence result in a weaker space. Finally, general estimates on the speed and durability of waves are derived. We show that compactly supported waves with directed kernels (i.e. w(x, y) ≤ 0 for x ≤ y) decay exponentially after a finite time and that the field has a well-defined finite speed.
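A minimal numerical sketch of the well-posed setting described above — a smooth activation function with values in [0, 1] and an L¹ kernel — assuming explicit Euler time stepping; the Gaussian kernel, sigmoid parameters, and grid sizes are illustrative choices:

```python
import numpy as np

# Explicit Euler simulation of the 1-D neural field
#   u_t(x,t) = -u(x,t) + integral w(x-y) f(u(y,t)) dy
# with a smooth sigmoid f taking values in [0, 1] and an integrable
# (L^1) Gaussian kernel w -- the setting in which the fixed-point
# argument guarantees a global, bounded solution.
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = L / n
w = np.exp(-x**2)                                   # L^1 kernel
f = lambda u: 1.0 / (1.0 + np.exp(-5 * (u - 0.5)))  # smooth, values in [0,1]

u = 0.6 * np.exp(-x**2)                             # localised initial state
dt = 0.05
for _ in range(400):
    conv = dx * np.convolve(f(u), w, mode="same")   # quadrature of the integral
    u = u + dt * (-u + conv)

print(f"field stays bounded: max|u| = {np.abs(u).max():.3f}")
```

The simulation illustrates the a priori bound behind the existence result: since f ≤ 1, the convolution term is bounded by the L¹ norm of w, so the field cannot blow up.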
We apply symbolic dynamics techniques such as word statistics and measures of complexity to nonstationary and noisy multivariate time series of electroencephalograms (EEG) in order to estimate event-related brain potentials (ERP). Their significance against surrogate data as well as between different experimental conditions is tested. These methods are validated by simulations using stochastic dynamical systems with time-dependent control parameters and compared with traditional ERP-analysis techniques. Continuous EEG data are cut into epochs according to stimulus events presented to the subjects. These ensembles of time series can be considered as ensembles of trajectories generated by underlying dynamical systems. We employ a statistical mechanics approach motivated by the Frobenius-Perron equation and apply it to coarse-grained symbolic descriptions of the dynamics. We develop time-dependent measures of complexity founded on running cylinder sets and show that these quantities are able to distinguish simulated data obtained with different control parameters as well as experimental data between different experimental conditions. As a first finding, our approach recovers the well-known ERP components and additionally reveals qualitative changes in the EEG that cannot be detected by means of the traditional techniques. We criticize the prerequisites of the traditional approach to ERP analysis and propose to consider ERPs instead in terms of dynamical systems theory and information theory.
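The running cylinder-set statistics can be sketched as follows, with surrogate data standing in for real EEG; the static binary partition against the median, the word length, and the shape of the simulated event are illustrative assumptions:

```python
import numpy as np

# Running cylinder-set (word) entropy across an ensemble of trials:
# each trial is binarised against the ensemble median, length-n words
# are read off at every time point across trials, and the Shannon
# entropy of the word distribution serves as a time-dependent
# complexity measure. Surrogate data, not real EEG.
rng = np.random.default_rng(0)
trials, T, n = 200, 300, 3
signal = np.zeros(T)
signal[100:180] = 1.0                            # simulated ERP deflection
data = signal + rng.normal(0.0, 1.0, (trials, T))

symbols = (data > np.median(data)).astype(int)   # static binary partition

def word_entropy(sym, t, n):
    # distribution of length-n words starting at time t, over trials
    words = sym[:, t:t + n] @ (2 ** np.arange(n))  # encode words as integers
    p = np.bincount(words, minlength=2 ** n) / len(words)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

H = np.array([word_entropy(symbols, t, n) for t in range(T - n)])
print(f"word entropy: baseline {H[:80].mean():.2f}, "
      f"during event {H[120:160].mean():.2f}")
```

The entropy drops inside the event window because the deflection biases the symbol distribution away from equipartition, which is how a running complexity measure localises an ERP in time.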
Inverse problems for dynamical system models of cognitive processes comprise the determination of synaptic weight matrices or kernel functions for neural networks or neural/dynamic field models, respectively. We introduce dynamic cognitive modeling as a three-tier top-down approach where cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
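A minimal sketch of the three tiers, assuming a tensor-product (filler/role) encoding of symbolic structures — one common choice for the second tier, not necessarily the one used here; the symbols, roles, and interpolation scheme are all illustrative:

```python
import numpy as np

# Tier 1: a symbolic operation, here a one-step stack push.
# Tier 2: its vector-space representation, binding fillers to roles
#         via Kronecker products and superposing the bindings.
# Tier 3: a prescribed trajectory through representation space that a
#         neurodynamical system would then be trained to follow.
fillers = {"A": np.array([1.0, 0.0]), "B": np.array([0.0, 1.0])}
roles = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}  # stack depth

def represent(stack):
    # superpose filler (x) role bindings over all stack positions
    return sum(np.kron(fillers[s], roles[i]) for i, s in enumerate(stack))

u0 = represent(["A"])           # state for stack [A]
u1 = represent(["A", "B"])      # state after pushing B

# Tier 3 input: a smooth prescribed path from u0 to u1
tau = np.linspace(0, 1, 50)[:, None]
trajectory = (1 - tau) * u0 + tau * u1
print("start:", trajectory[0], " end:", trajectory[-1])
```

The prescribed trajectory is exactly the kind of training datum fed to the kernel construction problem for the Amari equation discussed above.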
We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots. In phase space, recurrence plots yield intersecting balls around sampling points that can be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of intersecting balls. The final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.
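The detection pipeline can be sketched as follows; a union-find merge of intersecting balls stands in for the rewriting grammar, and the three-cluster surrogate data, radius grid, and transient handling are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Surrogate trajectory dwelling in three metastable regions of phase space
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
x = np.vstack([c + 0.2 * rng.normal(size=(100, 2)) for c in centers])

def symbolise(x, eps):
    # recurrence plot: balls of radius eps around every sampling point
    R = np.linalg.norm(x[:, None] - x[None, :], axis=-1) < eps
    parent = np.arange(len(x))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in zip(*np.nonzero(np.triu(R, 1))):
        parent[find(i)] = find(j)            # merge intersecting balls
    roots = np.array([find(i) for i in range(len(x))])
    # recurrence domains = components with more than one member;
    # isolated points are labelled 0 as transients
    labels = np.zeros(len(x), dtype=int)
    for s, r in enumerate(np.unique(roots), start=1):
        members = roots == r
        if members.sum() > 1:
            labels[members] = s
    return labels

def entropy(labels):
    p = np.bincount(labels) / len(labels)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# maximum entropy principle: pick the ball radius with the most
# informative symbol distribution
eps_grid = np.linspace(0.2, 3.0, 15)
best_eps = max(eps_grid, key=lambda e: entropy(symbolise(x, e)))
n_domains = len(set(symbolise(x, best_eps).tolist()) - {0})
print(f"eps = {best_eps:.2f} recovers {n_domains} recurrence domains")
```

Too small a radius labels everything transient and too large a radius merges all domains into one; both extremes have low entropy, so the maximum-entropy radius sits in between and recovers the metastable regions.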
We study inverse problems in neural field theory, i.e., the construction of synaptic weight kernels yielding a prescribed neural field dynamics. We address the issues of existence, uniqueness, and stability of solutions to the inverse problem for the Amari neural field equation as a special case, and prove that these problems are generally ill-posed. In order to construct solutions to the inverse problem, we first recast the Amari equation into a linear perceptron equation in an infinite-dimensional Banach or Hilbert space. In a second step, we construct sets of biorthogonal function systems allowing the approximation of synaptic weight kernels by a generalized Hebbian learning rule. Numerically, this construction is implemented by the Moore-Penrose pseudoinverse method. We demonstrate the instability of these solutions and use the Tikhonov regularization method for stabilization and to prevent numerical overfitting. We illustrate the stable construction of kernels by means of three instructive examples.
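The regularised kernel construction can be sketched in discretised form, assuming a travelling-bump target dynamics, a tanh activation, and illustrative grid and regularisation parameters — none of which are taken from the paper:

```python
import numpy as np

# Discretised Tikhonov-regularised kernel construction for the Amari
# equation  u_t = -u + integral w(x,y) f(u(y,t)) dy.  Given a prescribed
# field u(x,t), the kernel w solves a linear (perceptron-type)
# equation; the regularised least-squares solve replaces the unstable
# plain Moore-Penrose pseudoinverse.
n, T_steps, dt, alpha = 64, 400, 0.01, 1e-3
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
dx = x[1] - x[0]
f = lambda u: np.tanh(u)

# Prescribed dynamics: a travelling bump u(x,t) = exp(-(x - t)^2),
# wrapped onto the periodic domain.
t = dt * np.arange(T_steps)
d = (x[None, :] - t[:, None] + np.pi) % (2 * np.pi) - np.pi
U = np.exp(-d**2)                                  # shape (time, space)

V = np.gradient(U, dt, axis=0) + U                 # target u_t + u
F = f(U) * dx                                      # quadrature-weighted rates

# Tikhonov-regularised least squares for the kernel matrix W (x, y):
W = np.linalg.solve(F.T @ F + alpha * np.eye(n), F.T @ V).T

residual = np.linalg.norm(F @ W.T - V) / np.linalg.norm(V)
print(f"relative training residual: {residual:.3e}")
```

Setting alpha to zero reproduces the bare pseudoinverse solution, whose instability under noise is exactly what the Tikhonov term is there to suppress.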