Sequences of precisely timed neuronal activity are observed in many brain areas in various species. Synfire chains are a well-established model that can explain such sequences. However, it is unknown under which conditions synfire chains can develop in initially unstructured networks by self-organization. This work shows that with spike-timing-dependent plasticity (STDP), modulated by global population activity, long synfire chains emerge in sparse random networks. The learning rule encourages neurons to participate multiple times in a chain or in multiple chains. Such reuse of neurons has been observed experimentally and is necessary for high capacity. Sparse connectivity prevents the chains from becoming short and cyclic and shows that the formation of specific synapses is not essential for chain formation. Analysis of the learning rule in a simple network of binary threshold neurons reveals the asymptotically optimal length of the emerging chains. The theoretical results generalize to simulated networks of conductance-based leaky integrate-and-fire (LIF) neurons. As an application of the emergent chains, we propose a one-shot memory for sequences of precisely timed neuronal activity.
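The abstract does not give the plasticity rule's exact form. As an illustration only, a generic pair-based STDP update whose magnitude is scaled by a global population-activity factor might be sketched as follows in Python; all parameter names, values, and the particular modulation form are our assumptions, not the paper's rule.

```python
import math

def stdp_update(w, dt, global_rate, *, a_plus=0.01, a_minus=0.012,
                tau=20.0, target_rate=5.0, w_max=1.0):
    """Pair-based STDP with a global activity modulation (illustrative).

    dt = t_post - t_pre in ms: positive dt (pre fires before post)
    potentiates, negative dt depresses, with exponential time windows.
    The update is scaled by the ratio of a target population rate to
    the observed global rate -- a hypothetical choice standing in for
    the paper's population-activity modulation.
    """
    mod = target_rate / max(global_rate, 1e-9)
    if dt >= 0:
        dw = a_plus * math.exp(-dt / tau)      # potentiation window
    else:
        dw = -a_minus * math.exp(dt / tau)     # depression window
    return min(max(w + mod * dw, 0.0), w_max)  # clip weight to [0, w_max]
```

For example, `stdp_update(0.5, 10.0, 5.0)` strengthens the synapse (pre led post by 10 ms), while `stdp_update(0.5, -10.0, 5.0)` weakens it.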
We study a variant of the classical bootstrap percolation process on Erdős–Rényi random graphs. The graphs we consider have inhibitory vertices obstructing the diffusion of activity and excitatory vertices facilitating it. We study both a synchronous and an asynchronous version of the process. Both begin with a small initial set of active vertices, and the activation spreads to all vertices for which the number of excitatory active neighbors exceeds the number of inhibitory active neighbors by a certain amount. We show that in the synchronous process, inhibitory vertices may cause unstable behavior: tiny changes in the size of the starting set can dramatically influence the size of the final active set. We further show that in the asynchronous model the process becomes stable and stops with an active set containing a nontrivial deterministic constant fraction of all vertices. Moreover, we show that percolation occurs significantly faster asynchronously than synchronously.
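The activation rule above, a vertex turns active once its active excitatory neighbors outnumber its active inhibitory neighbors by a threshold, translates directly into code. The following Python sketch of the synchronous process on a G(n, p) graph is illustrative; the parameter names and threshold convention are our assumptions, not the paper's exact formulation.

```python
import random

def bootstrap_percolation(n, p, inhib_frac, seed_size,
                          threshold=1, rounds=50, rng=None):
    """Synchronous bootstrap percolation on an Erdos-Renyi graph G(n, p)
    with a random fraction inhib_frac of inhibitory vertices (sketch).

    A vertex becomes active once
        (# active excitatory neighbors) - (# active inhibitory neighbors)
    is at least `threshold`; active vertices stay active forever.
    """
    rng = rng or random.Random(0)
    inhibitory = [rng.random() < inhib_frac for _ in range(n)]
    adj = [[] for _ in range(n)]
    for u in range(n):                      # sample G(n, p)
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    active = set(range(seed_size))          # initial active set
    for _ in range(rounds):
        newly = set()
        for v in range(n):
            if v in active:
                continue
            signal = sum(1 if not inhibitory[u] else -1
                         for u in adj[v] if u in active)
            if signal >= threshold:
                newly.add(v)
        if not newly:                       # process has stopped
            break
        active |= newly                     # synchronous round update
    return active
```

With no inhibition (`inhib_frac=0`) and threshold 1, activity floods the seed's connected component; with only inhibitory vertices, the seed set never grows, illustrating how inhibition obstructs diffusion.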
We present a high-capacity model for one-shot association learning (hetero-associative memory) in sparse networks. We assume that basic patterns are pre-learned in the networks, while associations between two patterns are presented only once and have to be learned immediately. The model combines an Amit–Fusi-like network sparsely connected to a Willshaw-type network. The learning procedure is palimpsest and comes from earlier work on one-shot pattern learning. However, in our setup we can enhance the capacity of the network by iterative retrieval. This yields a model for sparse brain-like networks in which populations of a few thousand neurons are capable of learning hundreds of associations even if they are presented only once. The analysis of the model is based on a novel result by Janson et al. on bootstrap percolation in random graphs.
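The Willshaw-type component of such a model admits a compact sketch: a hetero-association is stored in one shot by OR-ing the binary outer product of the pattern pair into the weight matrix, and retrieved by thresholding dendritic sums. The Python sketch below covers only this classical Willshaw step; the Amit–Fusi network, palimpsest forgetting, and iterative retrieval of the actual model are omitted, and all names are illustrative.

```python
import numpy as np

def store(W, x, y):
    """One-shot Willshaw storage: OR the binary outer product of the
    association (x -> y) into the binary weight matrix W (in place)."""
    W |= np.outer(y, x)
    return W

def retrieve(W, x, k):
    """Retrieve the associated pattern by thresholding dendritic sums;
    k is the number of active units in the stored input pattern."""
    return (W @ x >= k).astype(np.uint8)
```

A single stored pair is recovered exactly: every output unit of the stored `y` receives dendritic sum `k`, while unused units stay below threshold (until crosstalk from many stored pairs accumulates, which is what limits capacity).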
Fast bidirectional replays of place cell activity reflecting previously traversed paths, stripped of any incidental specifics of the animal's locomotion such as its speed or the duration of stops, have been observed during rest in rodents. The mechanisms underlying replays are not fully understood, as previous models depend on assumptions about the path and on incidental specifics of motion. Relying on sharp-wave events, dendritic spikes, and cholinergic modulation, we propose a spiking network model that stores traversed paths on a behavioral timescale after a single exposure and produces fast bidirectional replays of the corresponding place cell sequences, independent of such specifics and of the path taken. With the model, we make an experimentally verifiable prediction: a sequence cell population whose firing follows a predefined sequential activity pattern independent of the environment. Furthermore, we hypothesize a functional role for disinhibition as a behavioral-time pacemaker, enforcing progression of sequence cell activity to match the place sequences traversed.