The brain represents and reasons probabilistically about complex stimuli and motor actions using a noisy, spike-based neural code. A key building block for such neural computations, as well as the basis for supervised and unsupervised learning, is the ability to estimate the surprise or likelihood of incoming high-dimensional neural activity patterns. Despite progress in statistical modeling of neural responses and deep learning, current approaches either do not scale to large neural populations or cannot be implemented using biologically realistic mechanisms. Inspired by the sparse and random connectivity of real neuronal circuits, we present a new model for neural codes that accurately estimates the likelihood of individual spiking patterns and has a straightforward, scalable, efficiently learnable, and realistic neural implementation. This model's performance on simultaneously recorded spiking activity of >100 neurons in the monkey visual and prefrontal cortices is comparable to or better than that of current models. Importantly, the model can be learned using a small number of samples, and using a local learning rule that utilizes noise intrinsic to neural circuits. Slower, structural changes in random connectivity, consistent with rewiring and pruning processes, further improve the efficiency and sparseness of the resulting neural representations. Our results merge insights from neuroanatomy, machine learning, and theoretical neuroscience to suggest random sparse connectivity as a key design principle for neuronal computation.

*Co-corresponding authors

The majority of neurons in the central nervous system know about the external world only by observing the activity of other neurons. Neural circuits must therefore learn to represent information and [...] an architecture designed for a particular task will typically not support other computations, as done in the brain.
Lastly, top-down models relate to neural data only at a qualitative level, falling short of reproducing the detailed statistical structure of neural activity across large neural populations. In contrast, bottom-up approaches grounded in probabilistic modeling, statistical physics, or deep neural networks can yield concise and accurate models of the joint activity of the neural population in an unsupervised fashion [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. Unfortunately, these models are difficult to relate to the mechanistic aspects of neural circuit operation or computation, because they use architectures and learning rules that are non-biological or non-scalable.

A neural circuit that learned to estimate the probability of its inputs would merge these two approaches: rather than implementing particular tasks or extracting specific stimulus features, computing the likelihood of the input gives a universal 'currency' for the neural computation of different circuits. Such a circuit could be used and reused by the brain as a recurring m...
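To make the idea of likelihood estimation through sparse random connectivity concrete, the following is a minimal toy sketch, not the paper's actual model: binary spike words are mapped through a few sparse, random, thresholded projections, and an energy-based model over those projections assigns each word a log-likelihood. All names (`project`, `log_likelihood`) and parameter values here are hypothetical choices for illustration; the population is kept small so the partition function can be computed by brute force.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 10   # small population so exact normalization is tractable
n_proj = 40      # number of sparse random projections
sparsity = 3     # inputs per projection, mimicking sparse connectivity

# Each projection fires when summed input from a random subset crosses its threshold.
subsets = [rng.choice(n_neurons, size=sparsity, replace=False) for _ in range(n_proj)]
thresholds = rng.integers(1, sparsity + 1, size=n_proj)

def project(x):
    """Map a binary spike word x to binary outputs of the random projections."""
    return np.array([x[s].sum() >= t for s, t in zip(subsets, thresholds)], dtype=float)

# Energy-based model: the (unnormalized) log-probability of a spike word is a
# linear function of its projection outputs, with learnable weights.
weights = rng.normal(0.0, 0.1, size=n_proj)

def log_likelihood(x, weights):
    # Enumerate all 2^n patterns to compute the partition function (small n only).
    all_patterns = (np.arange(2**n_neurons)[:, None] >> np.arange(n_neurons)) & 1
    energies = np.array([weights @ project(p) for p in all_patterns])
    log_z = np.log(np.exp(energies).sum())
    return weights @ project(x) - log_z

x = rng.integers(0, 2, size=n_neurons)  # an observed binary spike word
print(log_likelihood(x, weights))       # log-probability assigned to this word
```

In a scalable version, the normalization would be learned rather than enumerated, and the weights would be updated by a local rule from observed spike words; this sketch only illustrates how random sparse projections can define a probability over high-dimensional spiking patterns.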