The brain exhibits complex spatio-temporal patterns of activity. This phenomenon is governed by an interplay between the internal neural dynamics of cortical areas and their connectivity. Uncovering this complex relationship has raised much interest, both for theory and for the interpretation of experimental data (e.g., fMRI recordings) using dynamical models. Here we focus on the so-called inverse problem: the inference of network parameters in a cortical model so as to reproduce empirically observed activity. Although this problem has received much attention, recovering directed connectivity for large networks has been rather unsuccessful so far. The present study specifically addresses this point for a noise-diffusion network model. We develop a Lyapunov optimization that iteratively tunes the network connectivity in order to reproduce second-order moments of the node activity, or functional connectivity. We show theoretically and numerically that using covariances with both zero and non-zero time shifts is the key to inferring directed connectivity. The first main theoretical finding is that an accurate estimation of the underlying network connectivity requires the time shift for covariances to be matched to the time constant of the dynamical system. In addition to the network connectivity, we also adjust the intrinsic noise received by each network node. The framework is applied to experimental fMRI data recorded from subjects at rest. Diffusion-weighted MRI data provide an estimate of the anatomical connections, which is incorporated to constrain the cortical model. The empirical covariance structure is reproduced faithfully, especially its temporal component (i.e., time-shifted covariances) in addition to the spatial component that is usually the focus of studies. We find that the cortical interactions, referred to as effective connectivity, in the tuned model are not reciprocal. In particular, hubs are either receptors or feeders: they do not exhibit both strong incoming and outgoing connections. Our results set a quantitative ground for exploring the propagation of activity in the cortex.
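To make the optimization concrete, below is a minimal sketch, assuming the standard linear noise-diffusion (multivariate Ornstein-Uhlenbeck) model in which the zero-lag covariance Q0 solves the continuous Lyapunov equation J Q0 + Q0 J^T + Sigma = 0 and the shifted covariance is Q(tau) = Q0 exp(J^T tau). The function name, update rule, and step sizes are illustrative rather than the authors' exact implementation; in the paper, a diffusion-MRI mask would additionally restrict which entries of C are tuned.

```python
# Hedged sketch of Lyapunov optimization for effective connectivity:
# iteratively adjust connectivity C and intrinsic noise Sigma so that the
# model's zero-lag and time-shifted covariances match the empirical ones.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

def fit_effective_connectivity(Q0_emp, Qtau_emp, tau_shift, tau_x=1.0,
                               n_iter=1000, eps_C=0.001, eps_S=0.01):
    n = Q0_emp.shape[0]
    C = np.zeros((n, n))             # effective connectivity (off-diagonal)
    Sigma = np.eye(n)                # diagonal covariance of intrinsic noise
    for _ in range(n_iter):
        J = -np.eye(n) / tau_x + C   # Jacobian of the linear dynamics
        # model covariances: J Q0 + Q0 J^T + Sigma = 0 and Q(tau) = Q0 expm(J^T tau)
        Q0 = solve_continuous_lyapunov(J, -Sigma)
        Qtau = Q0 @ expm(J.T * tau_shift)
        dQ0, dQtau = Q0_emp - Q0, Qtau_emp - Qtau   # covariance errors
        # map covariance errors back to a Jacobian update (illustrative form)
        dJ_T = np.linalg.pinv(Q0) @ (dQ0 + dQtau @ expm(-J.T * tau_shift))
        C += eps_C * dJ_T.T
        np.fill_diagonal(C, 0.0)     # no self-connections
        C = np.clip(C, 0.0, None)    # keep weights non-negative (optional)
        Sigma += eps_S * np.diag(np.diag(dQ0))      # adjust local noise
    return C, Sigma
```

Note how the time shift tau_shift enters through the matrix exponential of the Jacobian: this is why, as stated above, matching the shift to the time constant of the dynamics makes the shifted covariance informative about directionality.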
Spike-timing-dependent plasticity (STDP) determines the evolution of the synaptic weights according to the pre- and post-synaptic activity, which in turn changes the neuronal activity. In this paper, we extend previous studies of input selectivity induced by STDP for single neurons to the biologically interesting case of a neuronal network with fixed recurrent connections and plastic connections from external pools of input neurons. We use a theoretical framework based on the Poisson neuron model to analytically describe the network dynamics (firing rates and spike-time correlations) and thus the evolution of the synaptic weights. This framework incorporates the time course of the post-synaptic potentials and synaptic delays. Our analysis focuses on the asymptotic states of a network stimulated by two homogeneous pools of "steady" inputs, namely Poisson spike trains with fixed firing rates and spike-time correlations. The STDP model extends rate-based learning in that it can implement, at the same time, both a stabilization of the individual neuron firing rates and a slower weight specialization depending on the input spike-time correlations. When one input pathway has stronger within-pool correlations, the resulting synaptic dynamics induced by STDP are shown to be similar to those arising in a purely feed-forward network: the weights from the more correlated inputs are potentiated at the expense of the remaining input connections.
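As a rough illustration of how such a framework couples rates and spike-time correlations to weight evolution, the sketch below evaluates a generic Poisson-neuron learning equation: the expected drift of an input weight combines rate-dependent terms with the STDP window W(u) integrated against the input-output spike-time covariance C(u). The functional forms, constants, and the toy covariance are placeholders, not the paper's exact expressions.

```python
# Hedged sketch of a generic STDP learning equation for the Poisson neuron
# model: expected drift dw/dt = rate terms + integral of W(u) * C(u) du.
import numpy as np

def stdp_window(u, A_plus=1.0, A_minus=0.6, tau_plus=17.0, tau_minus=34.0):
    """Antisymmetric STDP window W(u), with u = t_post - t_pre in ms."""
    return np.where(u >= 0, A_plus * np.exp(-u / tau_plus),
                    -A_minus * np.exp(u / tau_minus))

def weight_drift(nu_pre, nu_post, cov_fn, eta=1e-4, a_pre=-0.1, a_post=0.05,
                 u_max=200.0, du=0.1):
    """Expected dw/dt for one synapse: pre/post rate contributions plus the
    correlation term (STDP window integrated against the covariance)."""
    u = np.arange(-u_max, u_max, du)
    corr_term = np.trapz(stdp_window(u) * cov_fn(u), u)
    return eta * (a_pre * nu_pre + a_post * nu_post + corr_term)

# example: inputs with a narrow covariance bump at a 2 ms effective delay
cov = lambda u: 5.0 * np.exp(-0.5 * ((u - 2.0) / 1.0) ** 2)  # toy C(u)
print(weight_drift(nu_pre=10.0, nu_post=15.0, cov_fn=cov))
```

Synapses whose covariance bump falls on the potentiation side of the window get a positive drift, which is the mechanism behind the specialization toward the more correlated input pool described above.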
The dynamics of the learning equation, which describes the evolution of the synaptic weights, is derived for the situation where the network contains recurrent connections. The derivation is carried out for the Poisson neuron model. The spiking rates of the recurrently connected neurons and their cross-correlations are determined self-consistently as a function of the external synaptic inputs. The solution of the learning equation is illustrated by the analysis of the particular case in which there is no external synaptic input. The general learning equation and the fixed-point structure of its solutions are discussed.
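For concreteness, here is a toy sketch of the rate self-consistency for a linear Poisson (Hawkes-like) network, assuming synaptic kernels of unit area so that each weight directly scales its rate contribution: nu = nu_ext + W nu, hence nu = (I - W)^{-1} nu_ext, valid while the spectral radius of W stays below one. The three-neuron example is purely illustrative.

```python
# Hedged sketch: self-consistent stationary firing rates of recurrently
# connected Poisson neurons, nu = (I - W)^{-1} nu_ext.
import numpy as np

def stationary_rates(W, nu_ext):
    """Solve the self-consistency nu = nu_ext + W nu for the rate vector."""
    n = W.shape[0]
    # the recurrence must be weak enough for a stationary solution to exist
    assert np.max(np.abs(np.linalg.eigvals(W))) < 1.0, "recurrence too strong"
    return np.linalg.solve(np.eye(n) - W, nu_ext)

# example: 3 neurons in a weak recurrent chain, 10 Hz external drive each
W = np.array([[0.0, 0.2, 0.0],
              [0.2, 0.0, 0.2],
              [0.0, 0.2, 0.0]])
print(stationary_rates(W, np.full(3, 10.0)))
```

The cross-correlations admit an analogous self-consistent solution in terms of (I - W)^{-1}, which is what lets the learning equation be evaluated in closed form for the recurrent case.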
While the plasticity of excitatory synaptic connections in the brain has been widely studied, the plasticity of inhibitory connections is much less understood. Here, we present recent experimental and theoretical findings concerning the rules of spike-timing-dependent inhibitory plasticity and their putative network function. This is a summary of a workshop at the COSYNE conference 2012.
Spike-timing-dependent plasticity (STDP) modifies the weight (or strength) of synaptic connections between neurons and is considered to be crucial for generating network structure. It has been observed in physiology that, in addition to spike timing, the weight update also depends on the current value of the weight. The functional implications of this feature are still largely unclear. Additive STDP gives rise to strong competition among synapses, but due to the absence of weight dependence, it requires hard boundaries to secure the stability of the weight dynamics. Multiplicative STDP with linear weight dependence for depression ensures stability, but it lacks the sufficiently strong competition required to obtain a clear synaptic specialization. A solution to this stability-versus-function dilemma can be found with an intermediate parametrization between additive and multiplicative STDP. Here we propose a novel solution to the dilemma, named log-STDP, whose key feature is a sublinear weight dependence for depression. Due to this specific weight dependence, the new model can produce markedly broad weight distributions with no hard upper bound, similar to those recently observed in experiments. Log-STDP induces graded competition between synapses, such that synapses receiving stronger input correlations are pushed further into the tail of (very) large weights. Strong weights are functionally important in enhancing the neuronal response to synchronous spike volleys. Depending on the input configuration, multiple groups of correlated synaptic inputs exhibit either winner-share-all or winner-take-all behavior. When the configuration of input correlations changes, individual synapses quickly and robustly readapt to represent the new configuration. We also demonstrate the advantages of log-STDP for generating a stable structure of strong weights in a recurrently connected network. These properties of log-STDP are compared with those of previous models. Through long-tailed weight distributions, log-STDP achieves both stable dynamics and robust competition among synapses, which are crucial for spike-based information processing.
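The sketch below illustrates the key feature of such a rule on a single pre-post spike pair: depression grows only sublinearly (logarithmically) with the current weight, so strong weights remain stable without a hard upper bound, while potentiation weakens for large weights. The exact parametrization and constants here are placeholders, not the paper's precise functional forms.

```python
# Hedged sketch of a log-STDP-like pairwise weight update; the specific
# scaling functions are illustrative stand-ins for the published ones.
import numpy as np

def log_stdp_update(w, dt_spike, w0=1.0, c_pot=1.0, c_dep=0.5, alpha=5.0,
                    tau_plus=17.0, tau_minus=34.0, eta=0.01):
    """Weight change for one spike pair, dt_spike = t_post - t_pre in ms."""
    if dt_spike > 0:
        # pre before post -> potentiation, decaying with the current weight
        scale = c_pot * np.exp(-w / w0)
        return eta * scale * np.exp(-dt_spike / tau_plus)
    else:
        # post before pre -> depression, sublinear (logarithmic) in the weight
        scale = c_dep * np.log(1.0 + alpha * w / w0) / np.log(1.0 + alpha)
        return -eta * scale * np.exp(dt_spike / tau_minus)

# example: a strong weight is depressed far less than linearly
for w in (0.5, 2.0, 8.0):
    print(w, log_stdp_update(w, dt_spike=-10.0))
```

Compared with a multiplicative rule, where the depression scale would grow in proportion to w, the logarithmic saturation is what allows a long tail of large weights to survive the competition.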
In neuronal networks, the changes of synaptic strength (or weight) performed by spike-timing-dependent plasticity (STDP) are hypothesized to give rise to functional network structure. This article investigates how this phenomenon occurs for the excitatory recurrent connections of a network with fixed input weights that is stimulated by external spike trains. We develop a theoretical framework based on the Poisson neuron model to analyze the interplay between the neuronal activity (firing rates and spike-time correlations) and the learning dynamics when the network is stimulated by correlated pools of homogeneous Poisson spike trains. STDP can lead to both a stabilization of all the neuron firing rates (homeostatic equilibrium) and a robust weight specialization. The pattern of specialization for the recurrent weights is determined by the relationship between the input firing-rate and correlation structures, the network topology, the STDP parameters, and the synaptic response properties. We find the conditions under which feed-forward pathways or areas with strengthened self-feedback emerge in an initially homogeneous recurrent network.
Our behavior entails a flexible and context-sensitive interplay between brain areas to integrate information according to goal-directed requirements. However, the neural mechanisms governing the entrainment of functionally specialized brain areas remain poorly understood. In particular, the question arises whether observed changes in regional activity across cognitive conditions are explained by modifications of the inputs to the brain or of its connectivity. We observe that transitions of fMRI activity between areas convey information about the tasks performed by 19 subjects, watching a movie versus a black screen (rest). We use a model-based framework that explains this spatiotemporal functional connectivity pattern by the local variability of 66 cortical regions and the network effective connectivity between them. We find that, among the estimated model parameters, movie viewing affects the local activity to a larger extent, which we interpret as extrinsic changes related to the increased stimulus load. However, detailed changes in the effective connectivity preserve a balance in the propagating activity and select specific pathways such that high-level brain regions integrate visual and auditory information, in particular boosting the communication between the two brain hemispheres. These findings speak to a dynamic coordination underlying functional integration in the brain.
Part of hippocampal and cortical plasticity is characterized by synaptic modifications that depend on the joint activity of the pre- and post-synaptic neurons. To what extent those changes are determined by the exact spike timing versus the average firing rates is still a matter of debate; this may vary from brain area to brain area, as well as across neuron types. However, it has been robustly observed both in vitro and in vivo that plasticity itself slowly adapts as a function of the dynamical context, a phenomenon commonly referred to as metaplasticity. An alternative concept considers the regulation of groups of synapses with an objective at the neuronal level, for example, maintaining a given average firing rate. In that case, the change in the strength of a particular synapse of the group (e.g., due to Hebbian learning) affects the strengths of the others, which has been termed heterosynaptic plasticity. Classically, Hebbian synaptic plasticity is paired with such mechanisms in neuron network models in order to stabilize the activity and/or the weight structure. Here, we present an oriented review that brings together various concepts, from heterosynaptic plasticity to metaplasticity, and show how they interact with Hebbian-type learning. We focus on approaches that are nowadays used to incorporate those mechanisms into state-of-the-art models of spiking plasticity inspired by experimental observations in the hippocampus and cortex. Making the point that metaplasticity is a ubiquitous mechanism acting on top of classical Hebbian learning and promoting the stability of neural function over multiple timescales, we stress the need to incorporate it as a key element in the framework of plasticity models. Bridging theoretical and experimental results suggests a more functional role for metaplasticity mechanisms than simply stabilizing neural activity.