Perceptual experiences may arise from neuronal activity patterns in mammalian neocortex. We probed mouse neocortex during visual discrimination using a red-shifted channelrhodopsin (ChRmine, discovered through structure-guided genome mining) alongside multiplexed multiphoton holography (MultiSLM), achieving control of individually specified neurons spanning large cortical volumes with millisecond precision. Stimulating a critical number of stimulus-orientation-selective neurons drove widespread recruitment of functionally related neurons, a process enhanced by (but not requiring) orientation-discrimination task learning. Optogenetic targeting of orientation-selective ensembles elicited correct behavioral discrimination. Cortical layer–specific dynamics were apparent: emergent neuronal activity propagated asymmetrically from layer 2/3 to layer 5, and smaller layer 5 ensembles were as effective as larger layer 2/3 ensembles in eliciting orientation-discrimination behavior. Population dynamics emerging after optogenetic stimulation both correctly predicted behavior and resembled natural internal representations of visual stimuli at cellular resolution over volumes of cortex.
Firing patterns in the central nervous system often exhibit strong temporal irregularity and considerable heterogeneity in time-averaged response properties. Previous studies have suggested that these properties are the outcome of the intrinsic chaotic dynamics of neural circuits. Indeed, simplified rate-based neuronal networks with synaptic connections drawn from a Gaussian distribution and a sigmoidal nonlinearity are known to exhibit chaotic dynamics when the synaptic gain (i.e., the connection variance) is sufficiently large. In the limit of an infinitely large network, there is a sharp transition from a fixed point to chaos as the synaptic gain reaches a critical value. Near the onset, chaotic fluctuations are slow, analogous to the ubiquitous, slow irregular fluctuations observed in the firing rates of many cortical circuits. However, the existence of a transition from a fixed point to chaos in neuronal circuit models with more realistic architectures and firing dynamics has not been established. In this work, we investigate the rate-based dynamics of neuronal circuits composed of several subpopulations with randomly diluted connections. Nonzero connections are positive for excitatory neurons and negative for inhibitory ones, while single-neuron output is strictly positive, with output rates rising as a power law above threshold, in line with known constraints in many biological systems. Using dynamic mean-field theory, we derive the phase diagram depicting the regimes of stable fixed-point, unstable dynamic, and chaotic rate fluctuations. We focus on the chaotic regime and characterize the properties of systems near the transition. We show that dilute excitatory-inhibitory architectures exhibit the same onset of chaos as a single population with Gaussian connectivity. In these architectures, the large mean excitatory and inhibitory inputs dynamically balance each other, amplifying the effect of the residual fluctuations.
Importantly, the existence of a transition to chaos and its critical properties depend on the shape of the single-neuron nonlinear input-output transfer function near firing threshold. In particular, for nonlinear transfer functions with a sharp rise near threshold, the transition to chaos disappears in the limit of a large network; instead, the system exhibits chaotic fluctuations even for small synaptic gain. Finally, we investigate the transition to chaos in network models with spiking dynamics. We show that when synaptic time constants are slow relative to the mean inverse firing rates, the network undergoes a transition from fast spiking fluctuations with constant rates to a state in which the firing rates exhibit chaotic fluctuations, similar to the transition predicted by rate-based dynamics. Systems with finite synaptic time constants and firing rates exhibit a smooth crossover from a regime dominated by stationary firing rates to a regime of slow rate fluctuations. This smooth crossover obeys scaling properties similar to crossover phenomena in statistical mechanics. The theoretical results are sup...
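The sharp transition at a critical synaptic gain described above can be illustrated with a minimal sketch of the classic single-population rate model that the abstract uses as its baseline (tanh nonlinearity, Gaussian couplings) — not the diluted excitatory-inhibitory architecture the paper itself analyzes. All parameter values here are illustrative:

```python
import numpy as np

def simulate_rate_network(g, n=300, dt=0.05, steps=4000, seed=1):
    """Euler-integrate dx/dt = -x + J*tanh(x) with Gaussian couplings
    J_ij ~ N(0, g^2/n). For large n, the zero fixed point is stable for
    g < 1 and the dynamics become chaotic for g > 1."""
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((n, n)) * (g / np.sqrt(n))
    x = 0.1 * rng.standard_normal(n)  # small random initial condition
    for _ in range(steps):
        x = x + dt * (-x + J @ np.tanh(x))
    return x

quiet = simulate_rate_network(g=0.5)    # below the critical gain g = 1
chaotic = simulate_rate_network(g=1.5)  # above the critical gain
```

Below the critical gain the activity decays to the fixed point at zero, while above it the final state retains order-one fluctuations across neurons — a finite-size version of the infinite-network transition.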
Graphical Abstract Highlights
• First simultaneous recordings from neocortex and cerebellum over weeks of learning
• Cortical layer 5 and cerebellar granule cells show similar task encoding in experts
• Learning increases correlations among initially dissimilar L5 and granule cells
• L5 and granule cells converge to similar, low-dimensional, task-encoding activity

Correspondence: mjwagner@stanford.edu (M.J.W.), mschnitz@stanford.edu (M.J.S.), lluo@stanford.edu (L.L.)

In Brief: Simultaneous recordings of ensembles of individual neurons in the neocortex and cerebellum provide a view of how these two brain regions learn together.

SUMMARY: Throughout mammalian neocortex, layer 5 pyramidal (L5) cells project via the pons to a vast number of cerebellar granule cells (GrCs), forming a fundamental pathway. Yet it is unknown how neuronal dynamics are transformed through the L5/GrC pathway. Here, by directly comparing premotor L5 and GrC activity during a forelimb movement task using dual-site two-photon Ca2+ imaging, we found that in expert mice, L5 and GrC dynamics were highly similar. L5 cells and GrCs shared a common set of task-encoding activity patterns, possessed a similar diversity of responses, and exhibited high correlations comparable to local correlations among L5 cells. Chronic imaging revealed that these dynamics co-emerged in cortex and cerebellum over learning: as behavioral performance improved, initially dissimilar L5 cells and GrCs converged onto a shared, low-dimensional, task-encoding set of neural activity patterns. Thus, a key function of cortico-cerebellar communication is the propagation of shared dynamics that emerge during learning.

(A) Experimental schematic. Mice voluntarily moved a manipulandum for a sucrose water reward (left). We performed simultaneous Ca2+ imaging in cerebellar GrCs through a cranial window, and in L5 pyramidal neurons of the premotor cortex using an implanted 1 mm prism (right).
GCaMP6f was expressed in L5 cells and GrCs using quadruple transgenic mice Rbp4-Cre/Math1-Cre/Ai93/ztTA.
(B) Mean images from representative two-photon Ca2+ imaging movies in L5 cells (left) and GrCs (right). The spatial filters used to extract fluorescence traces from cells with detected activity are highlighted in grayscale or red/blue (see G below; n = 144 L5 cells/177 GrCs).
(C) Forelimb movement task. Water-restricted mice self-initiated trials. The task alternated blocks of 40 trials in which forward movement was followed by a left turn with blocks of 40 trials in which forward movement was followed by a right turn. No cues indicated trial type.
(D) Example movements on the virtual right-angle track (left, n = 20 each of pure left and right turns; right, n = 8 error-correction turns in each direction).
(E) Average motion over time in the forward (black curve) and lateral (colored curves) directions for all pure-turn trials in the session in (D), aligned temporally to turn onset (n = 51/63 pure-left/pure-right turns). Dashed vertical line denotes average forward-movement onset.
(F) Behavioral performanc...
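The claim that two populations "converge onto a shared, low-dimensional set of activity patterns" is the kind of statement that canonical correlation analysis can quantify. The sketch below is not the study's analysis pipeline; it is a minimal, self-contained illustration on synthetic data, where two toy "populations" read out the same latent task signals through different random projections:

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 500, 3                          # time points, shared latent dimensions
latents = rng.standard_normal((T, k))  # shared "task-encoding" signals

# Two hypothetical populations (e.g., 40 L5 cells, 60 GrCs) observe the same
# latents through different random readouts, plus private noise.
A = latents @ rng.standard_normal((k, 40)) + 0.3 * rng.standard_normal((T, 40))
B = latents @ rng.standard_normal((k, 60)) + 0.3 * rng.standard_normal((T, 60))

def canonical_correlations(X, Y, dims=3):
    """Top canonical correlations between two data matrices (rows = time)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Whiten each population via its thin SVD; the singular values of the
    # cross-product of the whitened bases are the canonical correlations.
    Ux, _, _ = np.linalg.svd(X, full_matrices=False)
    Uy, _, _ = np.linalg.svd(Y, full_matrices=False)
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)[:dims]

corrs = canonical_correlations(A, B)
```

With three shared latents and modest private noise, the top three canonical correlations come out close to 1, reflecting the shared low-dimensional structure despite the populations having no neurons in common.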
The recent striking success of deep neural networks in machine learning raises profound questions about the theoretical principles underlying their success. For example, what can such deep networks compute? How can we train them? How does information propagate through them? Why can they generalize? And how can we teach them to imagine? We review recent work in which methods of physical analysis rooted in statistical mechanics have begun to provide conceptual insights into these questions. These insights yield connections between deep learning and diverse physical and mathematical topics, including random landscapes, spin glasses, jamming, dynamical phase transitions, chaos, Riemannian geometry, random matrix theory, free probability, and nonequilibrium statistical mechanics. Indeed, the fields of statistical mechanics and machine learning have long enjoyed a rich history of strongly coupled interactions, and recent advances at the intersection of statistical mechanics and deep learning suggest these interactions will only deepen going forward.
The entry of a substrate into the active site is the first event in any enzymatic reaction. However, because of the short time interval between the encounter and the formation of the stable complex, the detailed steps have not been observed experimentally. In the present study, we report a molecular dynamics simulation of the encounter between a palmitate molecule and toad liver fatty acid-binding protein, ending with the formation of a stable complex that resembles in structure those formed by other proteins of this family. The forces operating on the system that lead to the formation of the tight complex are discussed.