Neuronal networks provide living organisms with the ability to process information. They are also characterized by abundant recurrent connections, which give rise to strong feedback that dictates their dynamics and endows them with fading (short-term) memory. The role of recurrence in long-term memory, on the other hand, remains unclear. Here we use the neuronal network of the roundworm C. elegans to show that recurrent architectures in living organisms can exhibit long-term memory without relying on specific hard-wired modules. A genetic algorithm reveals that the experimentally observed dynamics of the worm's neuronal network exhibits maximal complexity (as measured by permutation entropy). In that complex regime, the response of the system to repeated presentations of a time-varying stimulus reveals a consistent behavior that can be interpreted as soft-wired long-term memory.

A common manifestation of our ability to remember the past is the consistency of our responses to repeated presentations of stimuli across time. Complex chaotic dynamics is known to produce such reliable responses in spite of its characteristic sensitive dependence on initial conditions. In neuronal networks, complex behavior is known to result from a combination of (i) recurrent connections and (ii) a balance between excitation and inhibition. Here we show that these features concur in the neuronal network of a living organism, namely C. elegans. This enables long-term memory to arise in an online manner, without having to be hard-wired in the brain.
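Permutation entropy, the complexity measure invoked above, is computed from the relative frequencies of ordinal patterns in a time series. A minimal sketch following Bandt and Pompe's standard definition (the function name and the normalization by the maximum entropy are our choices, not taken from the paper):

```python
from math import log2, factorial

def permutation_entropy(series, order=3):
    """Normalized permutation entropy of a 1-D time series.

    Slides a window of length `order` over the series, maps each
    window to its ordinal pattern (the ranks of its values), and
    returns the Shannon entropy of the pattern distribution,
    normalized to [0, 1] by log2(order!).
    """
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # Ordinal pattern: indices of the window sorted by value.
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    return h / log2(factorial(order))
```

A monotonic series yields a single ordinal pattern and hence zero entropy; maximally complex dynamics visits all patterns equally often and approaches 1.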
Recurrent neuronal networks are known to be endowed with fading (short-term) memory, whereas long-term memory is usually considered to be hard-wired in the network connectivity, for instance via Hebbian learning. Here we use the neuronal network of the roundworm C. elegans to show that recurrent architectures in living organisms can exhibit long-term memory without relying on specific hard-wired modules. We applied a genetic algorithm, using a binary genome that encodes inhibitory versus excitatory connectivity, to solve the unconstrained optimization problem of fitting the experimentally observed dynamics of the worm's neuronal network. Our results show that the network operates in a complex chaotic regime, as measured by permutation entropy. In that complex regime, the response of the system to repeated presentations of a time-varying stimulus reveals a consistent behavior that can be interpreted as long-term memory. This memory is soft-wired: it does not require structural changes in the network connectivity, but relies only on the system dynamics for encoding.
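The optimization over a binary genome can be sketched with a generic genetic algorithm. Everything below (selection scheme, crossover, parameters, and the caller-supplied fitness) is an illustrative placeholder, not the paper's actual implementation; in the paper's setting each bit would flag one connection as inhibitory or excitatory, and the fitness would score how well the signed network reproduces the observed dynamics:

```python
import random

def evolve(fitness, genome_len, pop_size=50, generations=100,
           mutation_rate=0.01, seed=0):
    """Minimal genetic algorithm over binary genomes (maximization)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)    # one-point crossover
            child = a[:cut] + b[cut:]
            # Bit-flip mutation with per-gene probability.
            child = [g ^ (rng.random() < mutation_rate) for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```

For example, calling `evolve(sum, genome_len=16)` maximizes the number of ones in the genome, a standard sanity check for the operators.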
The ability to switch between tasks effectively in response to external stimuli is a hallmark of cognitive control. Our brain can filter and integrate external information to accomplish goal-directed behavior. Task switching occurs rapidly and efficiently, allowing us to perform multiple tasks with ease. Similarly, artificial neural networks can be tailored to exhibit multi-task capabilities and achieve high performance across domains. In terms of explainability, understanding how neural networks make predictions is crucial for their deployment in many real-world scenarios. In this study, we delve into the neural representations learned by task-switching networks, which use task-specific biases for multitasking. These task-specific biases, mediated by context inputs, are learned by alternating the tasks the neural network sees during training. Using the MNIST dataset and binary tasks, we find that task-switching networks produce representations that resemble other multitasking paradigms: parallel networks in the early stages of processing and sequential networks in the last stages. We analyze the importance of inserting task contexts at different stages of processing and their role in aligning the task with relevant features. Moreover, we visualize how networks generalize neural representations during task switching across different tasks. The use of context inputs improves the interpretability of simple neural networks for multitasking, helping to pave the way for the future study of architectures and tasks of higher complexity.
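The context-input mechanism described above can be illustrated with a toy forward pass in which all weights are shared across tasks and only an additive, task-specific bias on the hidden layer selects the task. The dimensions and random weights below are arbitrary placeholders (in the study's setup these would be learned by alternating tasks during training):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 4 input features, 8 hidden units, 1 output, 2 tasks.
W1 = rng.standard_normal((8, 4))        # shared input-to-hidden weights
W2 = rng.standard_normal((1, 8))        # shared hidden-to-output weights
context = rng.standard_normal((2, 8))   # one context bias vector per task

def forward(x, task):
    """Shared weights; the additive context bias alone encodes the task."""
    h = np.maximum(0.0, W1 @ x + context[task])   # ReLU hidden layer
    return W2 @ h

x = rng.standard_normal(4)
y_task0 = forward(x, 0)   # same input, task-0 context
y_task1 = forward(x, 1)   # same input, task-1 context
```

The same input generally yields different outputs under different contexts, which is what lets a single set of weights serve multiple tasks.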