The 'free energy principle' (FEP) has been suggested to provide a unified theory of the brain, integrating data and theory relating to action, perception, and learning. The theory and implementation of the FEP combine insights from Helmholtzian 'perception as inference', machine learning theory, and statistical thermodynamics. Here, we provide a detailed mathematical evaluation of a suggested biologically plausible implementation of the FEP that has been widely used to develop the theory. Our objectives are (i) to describe within a single article the mathematical structure of this implementation of the FEP; (ii) to provide a simple but complete agent-based model utilising the FEP; and (iii) to disclose the assumption structure of this implementation of the FEP to help elucidate its significance for the brain sciences. It can be shown that minimising IFE makes the R-density a good approximation to the posterior density of environmental variables given sensory data. Under this interpretation, the surprisal term in the IFE becomes more akin to the negative log model evidence defined in more standard implementations of variational Bayes [30].

The Action-Perception Cycle

Minimising IFE by updating the R-density provides an upper bound on surprisal but cannot minimise it directly. The FEP suggests that organisms also act on their environment to change sensory input, and thus minimise surprisal indirectly [1,2]. The mechanism underlying this process is formally symmetric to perceptual inference, i.e., rather than inferring the cause of sensory data, an organism must infer actions that best make sensory data accord with an internal environmental model [9]. Thus, the mechanism is often referred to as
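The perceptual half of the cycle described above can be sketched for the simplest possible case. The snippet below is a minimal illustration, not the paper's model: it assumes a linear-Gaussian generative model (sensory datum s generated directly from a hidden cause v plus noise, with a Gaussian prior on v) and a delta-like R-density parameterised by a single mean phi. All names and parameter values (`phi`, `v_p`, `sig_p`, `sig_s`, the learning rate) are invented for illustration. Gradient descent on the free energy F(phi) drives phi to the exact posterior mean, which is available in closed form for this conjugate model.

```python
# Hedged sketch of free-energy minimisation by perceptual inference,
# assuming a linear-Gaussian generative model:
#   prior:      v ~ N(v_p, sig_p)
#   likelihood: s | v ~ N(v, sig_s)
# F(phi) = (s - phi)^2 / (2*sig_s) + (phi - v_p)^2 / (2*sig_p) + const.
# All names/values here are illustrative assumptions, not from the paper.

def perceive(s, v_p=3.0, sig_p=1.0, sig_s=0.5, lr=0.05, steps=2000):
    """Gradient descent on F with respect to the R-density mean phi."""
    phi = v_p  # initialise the belief at the prior mean
    for _ in range(steps):
        dF_dphi = (phi - s) / sig_s + (phi - v_p) / sig_p
        phi -= lr * dF_dphi
    return phi

# For this conjugate model the posterior mean is known in closed form,
# so the gradient scheme can be checked against it:
s = 5.0
analytic = (3.0 / 1.0 + s / 0.5) / (1.0 / 1.0 + 1.0 / 0.5)
```

In the full action-perception cycle the same gradient would also flow through an action variable that changes s itself; this sketch covers only the perceptual update.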
We describe the content and outcomes of the First Workshop on Open-Ended Evolution: Recent Progress and Future Milestones (OEE1), held during the ECAL 2015 conference at the University of York, UK, in July 2015. We briefly summarize the content of the workshop's talks, and identify the main themes that emerged from the open discussions. Two important conclusions from the discussions are: (1) the idea of pluralism about OEE: it seems clear that there is more than one interesting and important kind of OEE; and (2) the importance of distinguishing observable behavioral hallmarks of systems undergoing OEE from hypothesized underlying mechanisms that explain why a system exhibits those hallmarks. We summarize the different hallmarks and mechanisms discussed during the workshop, and list the specific systems that were highlighted with respect to particular hallmarks and mechanisms. We conclude by identifying some of the most important open research questions about OEE that are apparent in light of the discussions. The York workshop provides a foundation for a follow-up OEE2 workshop taking place at the ALIFE XV conference in Cancún, Mexico, in July 2016. Additional materials from the York workshop, including talk abstracts, presentation slides, and videos of each talk, are available at http://alife.org/ws/oee1.
Organisms that can learn about their environment and modify their behaviour appropriately during their lifetime are more likely to survive and reproduce than organisms that do not. While associative learning – the ability to detect correlated features of the environment – has been studied extensively in nervous systems, where the underlying mechanisms are reasonably well understood, mechanisms within single cells that could allow associative learning have received little attention. Here, using in silico evolution of chemical networks, we show that there exists a diversity of remarkably simple and plausible chemical solutions to the associative learning problem, the simplest of which uses only one core chemical reaction. We then asked to what extent a linear combination of chemical concentrations in the network could approximate the ideal Bayesian posterior of an environment given the stimulus history so far. This Bayesian analysis revealed the 'memory traces' of the chemical network. The implication of this paper is that there is little reason to believe that a lack of suitable phenotypic variation would prevent associative learning from evolving in cell signalling, metabolic, gene regulatory, or a mixture of these networks in cells.
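The "ideal Bayesian posterior of an environment given the stimulus history" mentioned above can be made concrete with a toy recursive update. The setup below is invented for illustration only (the paper's environments and likelihoods differ): two candidate environments A and B emit a binary stimulus with different probabilities, and the posterior over A is updated one observation at a time via Bayes' rule.

```python
# Hedged sketch: recursive Bayesian posterior over two hypothetical
# environments given a binary stimulus history. The environments and
# likelihood values are illustrative assumptions, not the paper's model.

def posterior(stimuli, prior=0.5, p_stim=(0.8, 0.3)):
    """P(env = A | stimulus history).

    stimuli: iterable of 0/1 observations.
    p_stim:  probability of observing stimulus 1 in env A and in env B.
    """
    p_a = prior
    for s in stimuli:
        like_a = p_stim[0] if s else 1 - p_stim[0]
        like_b = p_stim[1] if s else 1 - p_stim[1]
        # Bayes' rule, renormalised over the two environments
        p_a = like_a * p_a / (like_a * p_a + like_b * (1 - p_a))
    return p_a
```

A chemical network that "learns" would, per the abstract, approximate a quantity like `p_a` with a linear readout of species concentrations; the traces of `p_a` over time are what the authors call memory traces.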
We present a novel formal interpretation of dynamical hierarchies based on information theory, in which each level is a near-state-determined system, and levels are related to one another in a partial ordering. This reformulation moves away from previous definitions, which have considered unique hierarchies of structures or objects arranged in aggregates. Instead, we consider hierarchies of dynamical systems: these are more suited to describing living systems, which are not mere aggregates, but organizations. Transformations from lower to higher levels in a hierarchy are redescriptions that lose information. There are two criteria for partial ordering. One is a state-dependence criterion enforcing predictability within a level. The second is a distinctness criterion enforcing the idea that the higher-level description must do more than just throw information away. We hope this will be a useful tool for empirical studies of both computational and physical dynamical hierarchies.
Life on Earth must originally have arisen from abiotic chemistry. Since the details of this chemistry are unknown, we wish to understand, in general, which types of chemistry can lead to complex, lifelike behavior. Here we show that even very simple chemistries in the thermodynamically reversible regime can self-organize to form complex autocatalytic cycles, with the catalytic effects emerging from the network structure. We demonstrate this with a very simple but thermodynamically reasonable artificial chemistry model. By suppressing the direct reaction from reactants to products, we obtain the simplest kind of autocatalytic cycle, resulting in exponential growth. When these simple first-order cycles are prevented from forming, the system achieves superexponential growth through more complex, higher-order autocatalytic cycles. This leads to nonlinear phenomena such as oscillations and bistability, the latter of which is of particular interest regarding the origins of life.
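The growth regimes in the abstract above follow from elementary kinetics: a first-order autocatalytic cycle behaves like dx/dt = kx (exponential growth), while a higher-order cycle behaves like dx/dt = kx² (superexponential growth with finite-time blow-up at t = 1/(k·x₀)). The comparison below is an illustrative caricature, not the paper's artificial chemistry model; the rate laws, parameters, and forward-Euler integration are all assumptions made for the sketch.

```python
# Hedged sketch: first-order vs. higher-order autocatalysis.
#   dx/dt = k*x     -> exponential growth, x(t) = x0 * e^(k*t)
#   dx/dt = k*x**2  -> superexponential growth, x(t) = x0 / (1 - k*x0*t)
# Integrated with forward Euler; all values are illustrative.

def integrate(rate, x0=1.0, k=1.0, t_end=0.9, dt=1e-4):
    x = x0
    for _ in range(int(round(t_end / dt))):
        x += dt * rate(k, x)
    return x

first = integrate(lambda k, x: k * x)       # exponential:      ~ e^0.9 ~= 2.46
second = integrate(lambda k, x: k * x * x)  # superexponential: ~ 1/(1-0.9) = 10
```

By t = 0.9 the second-order cycle has already far outpaced the first-order one, and its analytic solution diverges at t = 1; it is this runaway, resource-limited in any real chemistry, that underlies the nonlinear phenomena (oscillations, bistability) the abstract mentions.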