Of the various attempts to generalize information theory to multiple variables, the most widely utilized, interaction information, suffers from the problem that it is sometimes negative. Here we reconsider from first principles the general structure of the information that a set of sources provides about a given variable. We begin with a new definition of redundancy as the minimum information that any source provides about each possible outcome of the variable, averaged over all possible outcomes. We then show how this measure of redundancy induces a lattice over sets of sources that clarifies the general structure of multivariate information. Finally, we use this redundancy lattice to propose a definition of partial information atoms that exhaustively decompose the Shannon information in a multivariate system in terms of the redundancy between synergies of subsets of the sources. Unlike interaction information, the atoms of our partial information decomposition are never negative and always support a clear interpretation as informational quantities. Our analysis also demonstrates how the negativity of interaction information can be explained by its confounding of redundancy and synergy.
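The redundancy measure described above (minimum information any single source provides about each outcome of the target, averaged over outcomes) can be sketched numerically. The snippet below is a minimal illustration, assuming a toy system in which the target is the AND of two uniform binary sources; the distribution and variable names are our own choices, not an example taken from the abstract.

```python
import math
from itertools import product

# Joint distribution p(s, a1, a2) for a toy system: two independent
# uniform binary sources and target s = a1 AND a2 (illustrative only).
p = {}
for a1, a2 in product([0, 1], repeat=2):
    p[(a1 & a2, a1, a2)] = 0.25

def marginal(dist, keep):
    """Marginalize the joint onto the given index positions."""
    out = {}
    for k, v in dist.items():
        key = tuple(k[i] for i in keep)
        out[key] = out.get(key, 0.0) + v
    return out

p_s = marginal(p, [0])

def specific_info(src_idx):
    """Specific information I(S=s; A) of one source about each target outcome."""
    p_sa = marginal(p, [0, src_idx])
    p_a = marginal(p, [src_idx])
    info = {}
    for (s,) in p_s:
        total = 0.0
        for (ss, a), psa in p_sa.items():
            if ss != s or psa == 0.0:
                continue
            p_a_given_s = psa / p_s[(s,)]
            p_s_given_a = psa / p_a[(a,)]
            total += p_a_given_s * math.log2(p_s_given_a / p_s[(s,)])
        info[s] = total
    return info

# Redundancy: minimum specific information over sources for each target
# outcome, averaged over outcomes.
infos = [specific_info(i) for i in (1, 2)]
i_min = sum(p_s[(s,)] * min(inf[s] for inf in infos) for (s,) in p_s)
print(round(i_min, 4))  # roughly 0.311 bits for the AND example
```

Because the minimum is taken outcome by outcome before averaging, this quantity is nonnegative by construction, which is the property the abstract contrasts with interaction information.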
Notions of embodiment, situatedness, and dynamics are increasingly being debated in cognitive science. However, these debates are often carried out in the absence of concrete examples. In order to build intuition, this paper explores a model agent to illustrate how the perspective and tools of dynamical systems theory can be applied to the analysis of situated, embodied agents capable of minimally cognitive behavior. Specifically, we study a model agent whose "nervous system" was evolved using a genetic algorithm to catch circular objects and to avoid diamond-shaped ones. After characterizing the performance, behavioral strategy, and psychophysics of the best-evolved agent, we analyze its dynamics in some detail at three different levels: (1) the entire coupled brain/body/environment system; (2) the interaction between agent and environment that generates the observed coupled dynamics; (3) the underlying neuronal properties responsible for the agent dynamics. This analysis offers both explanatory insight and testable predictions. The paper concludes with discussions of the overall picture that emerges from this analysis, the challenges this picture poses to traditional notions of representation, and the utility of a research methodology involving the analysis of simpler idealized models of complete brain/body/environment systems.
We would like the behavior of the artificial agents that we construct to be as well-adapted to their environments as natural animals are to theirs. Unfortunately, designing controllers with these properties is a very difficult task. In this article, we demonstrate that continuous-time recurrent neural networks are a viable mechanism for adaptive agent control and that the genetic algorithm can be used to evolve effective neural controllers. A significant advantage of this approach is that one need specify only a measure of an agent's overall performance rather than the precise motor output trajectories by which it is achieved. By manipulating the performance evaluation, one can place selective pressure on the development of controllers with desired properties. Several novel controllers have been evolved, including a chemotaxis controller that switches between different strategies depending on environmental conditions, and a locomotion controller that takes advantage of sensory feedback if available but that can operate in its absence if necessary.
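The key point above, that evolution requires only an overall performance measure rather than prescribed motor trajectories, can be sketched with a bare-bones genetic algorithm. This is a minimal illustration under our own assumptions: the fitness function is a toy stand-in (distance to a target parameter vector), not the chemotaxis or locomotion tasks from the article, and the selection scheme is a simple truncation strategy chosen for brevity.

```python
import random

random.seed(0)

GENES, POP, GENS = 8, 30, 60
TARGET = [0.5] * GENES  # hypothetical optimum, standing in for task performance

def fitness(genome):
    # Overall performance only: a single score, no prescribed trajectory.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, sigma=0.05):
    # Gaussian perturbation, clipped to the [0, 1] parameter range.
    return [min(1.0, max(0.0, g + random.gauss(0, sigma))) for g in genome]

pop = [[random.random() for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 3]                                   # truncation selection
    pop = elite + [mutate(random.choice(elite)) for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
```

In the article's setting, each genome would encode the parameters of a neural controller and fitness would be measured by simulating the agent's behavior; shaping that fitness measure is how selective pressure is placed on desired controller properties.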
Dynamical neural networks are being increasingly employed in a variety of contexts, including as simple model nervous systems for autonomous agents. For this reason, there is a growing need for a comprehensive understanding of their dynamical properties. Using a combination of elementary analysis and numerical studies, this article begins a systematic examination of the dynamics of continuous-time recurrent neural networks. Specifically, a fairly complete description of the possible dynamical behavior and bifurcations of one- and two-neuron circuits is given, along with a few specific results for larger networks. This analysis provides both qualitative insight and, in many cases, quantitative formulas for predicting the dynamical behavior of particular circuits and how that behavior changes as network parameters are varied. These results demonstrate that even small circuits are capable of a rich variety of dynamical behavior (including chaotic dynamics). An approach to understanding the dynamics of circuits with time-varying inputs is also presented. Finally, based on this analysis, several strategies for focusing evolutionary searches into fruitful regions of network parameter space are suggested.
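The continuous-time recurrent neural network model examined above has a standard form, tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i, with sigma the logistic function. The sketch below integrates a two-neuron circuit with simple Euler steps; the particular weights, biases, and time constants are arbitrary illustrations, not circuits analyzed in the article.

```python
import math

def sigma(x):
    """Logistic activation function."""
    return 1.0 / (1.0 + math.exp(-x))

def ctrnn_step(y, tau, w, theta, inputs, dt=0.01):
    """One Euler step of tau_i * dy_i/dt = -y_i + sum_j w[j][i]*sigma(y_j+theta_j) + I_i."""
    n = len(y)
    out = [sigma(y[j] + theta[j]) for j in range(n)]
    return [
        y[i] + dt / tau[i] * (-y[i] + sum(w[j][i] * out[j] for j in range(n)) + inputs[i])
        for i in range(n)
    ]

# Illustrative two-neuron circuit with self-feedback and mutual excitation.
y = [0.0, 0.0]
tau = [1.0, 1.0]
theta = [-2.0, -2.0]
w = [[4.0, 2.0], [2.0, 4.0]]  # w[j][i]: weight from neuron j to neuron i
I = [0.0, 0.0]

for _ in range(2000):  # integrate for 20 time units
    y = ctrnn_step(y, tau, w, theta, I)
```

With these symmetric parameters the circuit relaxes to a single stable equilibrium; richer parameter choices yield the multistable, oscillatory, and chaotic regimes that the article catalogs for small circuits.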