Although models based on independent component analysis (ICA) have been successful in explaining various properties of sensory coding in the cortex, it remains unclear how networks of spiking neurons using realistic plasticity rules can realize such computation. Here, we propose a biologically plausible mechanism for ICA-like learning with spiking neurons. Our model combines spike-timing-dependent plasticity and synaptic scaling with an intrinsic plasticity rule that regulates neuronal excitability to maximize information transmission. We show that a stochastically spiking neuron learns one independent component for inputs encoded either as rates or using spike-spike correlations. Furthermore, different independent components can be recovered when the activity of different neurons is decorrelated by adaptive lateral inhibition.
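The mechanism described above can be illustrated with a toy sketch. All names, parameters, and the two-source mixing setup below are assumptions for illustration, not the authors' actual model: a stochastically spiking unit is driven by rate-coded mixtures of sparse sources and trained with a Hebbian rate-based proxy for STDP, multiplicative synaptic scaling, and an intrinsic plasticity rule that adjusts the firing threshold toward a target rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (all values illustrative): two sparse independent
# sources, linearly mixed into non-negative input rates.
n_in, n_steps, dt = 2, 20000, 1e-3
mixing = np.array([[1.0, 0.2], [0.2, 1.0]])

w = rng.uniform(0.1, 0.5, n_in)   # feedforward weights
theta = 0.0                       # intrinsic excitability (threshold)
gain = 1.0
r_target = 5.0                    # target firing rate (Hz)
eta_w, eta_ip = 1e-3, 1e-3        # learning rates

rate_avg = r_target
for t in range(n_steps):
    sources = rng.laplace(size=n_in)      # sparse independent sources
    x = np.abs(mixing @ sources)          # non-negative input rates
    u = w @ x                             # membrane drive
    # stochastic firing: sigmoidal spike probability
    p_spike = 1.0 / (1.0 + np.exp(-gain * (u - theta)))
    s = float(rng.random() < p_spike)
    # Hebbian update (rate-based proxy for STDP), then synaptic scaling
    w += eta_w * s * x
    w /= np.linalg.norm(w)                # multiplicative scaling
    # intrinsic plasticity: move threshold toward the target firing rate
    rate_avg += 0.001 * (s / dt - rate_avg)
    theta += eta_ip * (rate_avg - r_target) * dt

# after training, w points along one direction of the input statistics,
# with its norm held fixed by the scaling step
```

The scaling step keeps the weight vector on the unit sphere, so Hebbian growth can only rotate it, which is the same constraint that classical Oja-style component learners rely on.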
Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Overall, these results point to a close connection between feedforward inhibition and synaptic scaling, which may have important functional implications for general cortical processing.
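The normalized Poisson mixture setting can be sketched roughly as follows. This is a minimal illustration under assumed names and parameters, not the paper's derivation: inputs are Poisson spike counts drawn from one of K normalized cause patterns, a softmax over log-likelihoods stands in for competitive lateral inhibition, and a Hebbian update followed by divisive weight normalization stands in for synaptic scaling.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative generative model (all names/values are assumptions):
# K hidden causes, each a normalized rate pattern over n_in channels;
# A is the total input rate enforced by feedforward inhibition.
K, n_in, A = 3, 8, 50.0
protos = rng.dirichlet(np.ones(n_in), K)   # normalized cause patterns

W = rng.dirichlet(np.ones(n_in), K)        # each neuron's normalized weights
eta = 0.05

for t in range(5000):
    k = rng.integers(K)
    x = rng.poisson(A * protos[k])         # Poisson input spike counts
    # posterior over causes: softmax of the Poisson log-likelihood,
    # playing the role of soft competition via lateral inhibition
    log_post = x @ np.log(W.T)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    # Hebbian update weighted by the posterior, then divisive
    # normalization as a stand-in for synaptic scaling
    W += eta * post[:, None] * x
    W /= W.sum(axis=1, keepdims=True)

# each row of W stays a normalized pattern; after learning, rows tend
# to align with the underlying cause patterns
```

Because the weights are renormalized after every update, learning is confined to the simplex of normalized patterns, which is the subspace picture the abstract describes: normalization restricts the search space on which the simple local rules become effective.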
We investigated spontaneous activity and excitability in large networks of artificial spiking neurons. We compared three different spiking neuron models: integrate-and-fire (IF), regular-spiking (RS), and resonator (RES). First, we show that the different models have different frequency-dependent response properties, yielding large differences in excitability. Then, we investigate the responsiveness of these models to a single afferent excitatory or inhibitory spike and calibrate the total synaptic drive such that they exhibit similar peak postsynaptic potentials (PSPs). Based on this synaptic calibration, we build large microcircuits of IF, RS, and RES neurons and show that the resonance property favors homeostasis and self-sustainability of the network activity. Integration, on the other hand, produces instability while endowing the network with other useful properties, such as responsiveness to external inputs. We also investigate other potential sources of stable self-sustained activity and their relation to the membrane properties of neurons. We conclude that resonance and integration at the neuron level might interact in the brain to promote stability as well as flexibility and responsiveness to external input, and that membrane properties in general are essential for determining the behavior of large networks of neurons.
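The contrast between integration and resonance at the single-neuron level can be illustrated with subthreshold dynamics. This is a sketch with illustrative parameters, not the study's calibrated models: a leaky integrator responds monotonically to a brief current pulse, while a resonator, modeled here as a damped oscillator in the style of Izhikevich's resonate-and-fire neuron, rings at a preferred frequency after the same pulse, which is the source of its frequency-dependent excitability.

```python
import numpy as np

# Toy subthreshold comparison (parameters illustrative).
dt, T = 1e-4, 0.2
steps = int(T / dt)
I = np.zeros(steps)
I[100:150] = 1.0                      # brief current pulse

# leaky integrator: tau * dv/dt = -v + I
tau = 0.02
v_if = np.zeros(steps)
for t in range(1, steps):
    v_if[t] = v_if[t-1] + dt * (-v_if[t-1] + I[t-1]) / tau

# resonator: damped oscillation around rest,
#   dv/dt = b*v - w*u + I,   du/dt = w*v + b*u
b, w_res = -10.0, 2 * np.pi * 20.0    # damping and 20 Hz eigenfrequency
v_rs = np.zeros(steps)
u_rs = np.zeros(steps)
for t in range(1, steps):
    v_rs[t] = v_rs[t-1] + dt * (b * v_rs[t-1] - w_res * u_rs[t-1] + I[t-1])
    u_rs[t] = u_rs[t-1] + dt * (w_res * v_rs[t-1] + b * u_rs[t-1])

# the integrator decays monotonically back to rest after the pulse,
# whereas the resonator overshoots below baseline and oscillates
```

Inputs arriving in phase with the resonator's eigenfrequency add up, while off-frequency inputs partially cancel; this frequency selectivity is what lets resonant units filter recurrent activity and, per the study's conclusion, favor stable self-sustained network states.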