Population models are powerful mathematical tools for studying the dynamics of neuronal networks and for simulating the brain at macroscopic scales. We present a mean-field model capable of quantitatively predicting the temporal dynamics of a network of complex spiking neuron models, from integrate-and-fire to Hodgkin–Huxley, thus linking population models to neuronal electrophysiology. This opens the perspective of generating biologically realistic mean-field models directly from electrophysiological recordings.
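The general idea behind such mean-field descriptions can be illustrated with a minimal sketch (this is an illustration of the approach, not the authors' model): a single-neuron transfer function F, here derived analytically for a noiseless leaky integrate-and-fire neuron, is plugged into a first-order population-rate equation tau * dnu/dt = F(input) - nu. The parameter values and the coupling scheme below are arbitrary choices for the example.

```python
import numpy as np

def lif_rate(mu, tau_m=0.02, v_th=1.0, v_reset=0.0, t_ref=0.002):
    """Stationary firing rate (Hz) of a noiseless leaky integrate-and-fire
    neuron driven by a constant input mu (zero below threshold)."""
    if mu <= v_th:
        return 0.0
    t_spike = tau_m * np.log((mu - v_reset) / (mu - v_th))
    return 1.0 / (t_ref + t_spike)

def mean_field_step(nu, mu_ext, w=0.5, tau=0.005, dt=0.0005):
    """One Euler step of tau * dnu/dt = F(mu_ext + w * nu) - nu,
    i.e. a recurrently coupled population described by its rate alone."""
    drive = mu_ext + w * nu
    return nu + dt / tau * (lif_rate(drive) - nu)

# Relax the population rate to its fixed point under constant drive.
nu = 0.0
for _ in range(2000):
    nu = mean_field_step(nu, mu_ext=1.5)
```

For more realistic neuron models the transfer function is typically estimated numerically or semi-analytically rather than in closed form, which is what makes the approach extensible from integrate-and-fire to Hodgkin–Huxley-type dynamics.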
Cortical representations of brief, static stimuli become more invariant to identity-preserving transformations along the ventral stream. Likewise, increased invariance along the visual hierarchy should imply greater temporal persistence of the representations of temporally structured dynamic stimuli, possibly complemented by a temporal broadening of neuronal receptive fields. However, such stimuli could engage adaptive and predictive processes, whose impact on neural coding dynamics is unknown. By probing the rat analog of the ventral stream with movies, we uncovered a hierarchy of temporal scales, with deeper areas encoding visual information more persistently. Furthermore, the impact of intrinsic dynamics on the stability of stimulus representations grew gradually along the hierarchy. A database of recordings from mice showed similar trends, additionally revealing dependencies on the behavioral state. Overall, these findings show that visual representations become progressively more stable along rodent visual processing hierarchies, with an important contribution provided by intrinsic processing.
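A common way to quantify a "hierarchy of temporal scales" (not necessarily the authors' exact pipeline) is to fit an exponential decay to the autocorrelation of neural activity and read off its time constant. The sketch below recovers a known timescale from a synthetic AR(1) signal; all parameter values are illustrative.

```python
import numpy as np

def autocorr(x, max_lag):
    """Normalized autocorrelation of x for lags 0..max_lag-1."""
    x = x - x.mean()
    return np.array([np.dot(x[:len(x) - k], x[k:]) / np.dot(x, x)
                     for k in range(max_lag)])

def timescale(x, dt, max_lag=50):
    """Estimate an intrinsic timescale by a log-linear fit of the
    autocorrelation, assuming ac(k) ~ exp(-k * dt / tau)."""
    ac = autocorr(x, max_lag)
    lags = np.arange(max_lag) * dt
    pos = ac > 0.05                      # fit only the reliable part
    slope = np.polyfit(lags[pos], np.log(ac[pos]), 1)[0]
    return -1.0 / slope

# Synthetic AR(1) process with a known timescale tau_true = 0.1 s.
rng = np.random.default_rng(0)
dt, tau_true = 0.01, 0.1
a = np.exp(-dt / tau_true)
x = np.zeros(20000)
for t in range(1, len(x)):
    x[t] = a * x[t - 1] + rng.standard_normal()
tau_hat = timescale(x, dt)
```

Applied area by area, such an estimator yields one timescale per region, which is how a gradient of increasingly persistent encoding along a hierarchy can be made quantitative.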
Recurrent spiking neural networks (RSNNs) in the brain learn to perform a wide range of perceptual, cognitive and motor tasks with very low energy consumption, and their training requires very few examples. This motivates the search for biologically inspired learning rules for RSNNs, aiming to improve both our understanding of brain computation and the efficiency of artificial intelligence. Several spiking models and learning rules have been proposed, but it remains a challenge to design RSNNs whose learning relies on biologically plausible mechanisms and that are capable of solving complex temporal tasks. In this paper, we derive a learning rule, local to the synapse, from a simple mathematical principle: the maximization of the likelihood for the network to solve a specific task. We propose a novel target-based learning scheme in which the learning rule derived from likelihood maximization is used to mimic a specific spatio-temporal spike pattern that encodes the solution to complex temporal tasks. This method makes the learning extremely rapid and precise, outperforming state-of-the-art algorithms for RSNNs. While error-based approaches (e.g., e-prop) optimize the internal sequence of spikes trial after trial in order to progressively minimize the mean squared error (MSE), we assume that a signal randomly projected from an external origin (e.g., from other brain areas) directly defines the target sequence. This facilitates the learning procedure, since the network is trained from the beginning to reproduce the desired internal sequence. We propose two versions of our learning rule: spike-dependent and voltage-dependent. We find that the latter provides remarkable benefits in terms of learning speed and robustness to noise. We demonstrate the capacity of our model to tackle several problems, such as learning multidimensional trajectories and solving the classical temporal XOR benchmark.
Finally, we show that an online approximation of the gradient ascent, in addition to guaranteeing complete locality in time and space, allows learning after very few presentations of the target output. Our model can be applied to different types of biological neurons. The analytically derived plasticity rule is specific to each neuron model and yields theoretical predictions for experimental validation.
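The likelihood-maximization principle can be illustrated with a minimal sketch in the standard stochastic (escape-rate) neuron setting: spike probability is a sigmoid of the membrane potential, and gradient ascent on the log-likelihood of a target spike train yields the local rule delta_w ∝ (target - p) * presynaptic_trace. This is the generic form of such rules, not the authors' exact derivation, and all parameters below are illustrative.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(1)
n_in, T = 40, 300
decay, theta, eta = 0.9, 2.0, 0.01

# Fixed presynaptic spike trains and a fixed target output spike train.
pre = (rng.random((T, n_in)) < 0.1).astype(float)
target = (rng.random(T) < 0.05).astype(float)

def log_lik(w):
    """Log-likelihood of the target spike train under weights w."""
    x, ll = np.zeros(n_in), 0.0
    for t in range(T):
        x = decay * x + pre[t]                 # filtered presynaptic trace
        p = sigmoid(w @ x - theta)             # escape (spike) probability
        ll += target[t] * np.log(p) + (1 - target[t]) * np.log(1 - p)
    return ll

w = np.zeros(n_in)
ll_before = log_lik(w)
for epoch in range(100):
    x = np.zeros(n_in)
    for t in range(T):
        x = decay * x + pre[t]
        p = sigmoid(w @ x - theta)
        w += eta * (target[t] - p) * x         # local likelihood-gradient step
ll_after = log_lik(w)
```

Because the update uses only the presynaptic trace and the postsynaptic spike probability, it is local to the synapse, and because the Bernoulli log-likelihood is concave in the weights, the online gradient ascent reliably improves the fit to the target train.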
Slow waves (SWs) occur both during natural sleep and under anesthesia and are universal across species. Even though electrophysiological recordings have been widely used to characterize brain states, they are limited in spatial resolution and cannot target specific neuronal populations. Recently, large-scale optical imaging techniques coupled with functional indicators have overcome these restrictions. Here we combined wide-field fluorescence microscopy and a transgenic mouse model expressing a calcium indicator (GCaMP6f) in excitatory neurons to study SW propagation at the mesoscale under ketamine anesthesia. We developed de novo a versatile analysis pipeline to quantify the spatio-temporal propagation of the SWs in the [0.5, 4] Hz frequency band. Moreover, we designed a computational simulator based on a simple theoretical model, which takes into account the statistics of neural activity, the response of fluorescent proteins and the slow-wave dynamics. The simulator was capable of synthesizing artificial signals that could reliably reproduce several features of the SWs observed in vivo. Comparison of preliminary experimental and simulated data shows the robustness of the analysis tools and their potential to uncover mechanistic insights into the Slow Wave Activity (SWA).
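A first step in any such pipeline is isolating the SW band. As a minimal illustration (not the authors' pipeline, which would typically use filters with proper roll-off), the sketch below applies a zero-phase brick-wall band-pass over [0.5, 4] Hz by masking FFT coefficients; the synthetic trace and sampling rate are arbitrary choices for the example.

```python
import numpy as np

def bandpass_fft(sig, fs, f_lo=0.5, f_hi=4.0):
    """Zero-phase band-pass by zeroing FFT coefficients outside
    [f_lo, f_hi] Hz (a simple brick-wall filter; real pipelines
    typically prefer IIR/FIR designs with smoother roll-off)."""
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    spec = np.fft.rfft(sig)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spec, n=len(sig))

# Synthetic trace: a 1 Hz "slow wave" plus a 10 Hz nuisance component.
fs, dur = 50.0, 20.0
t = np.arange(0, dur, 1.0 / fs)
raw = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 10.0 * t)
sw = bandpass_fft(raw, fs)
```

Applied pixel-wise to wide-field calcium movies, the band-limited signals can then be thresholded and tracked across the cortical surface to estimate SW propagation.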