Introduction

The intricate microcircuitry of the cerebral cortex is thought to be a critical substrate from which the impressive capabilities of the mammalian brain arise. Until now, our knowledge of the stereotypical connectivity in neocortical microcircuits has been pieced together from individual studies of the connectivity between small numbers of neuronal cell types. Here, we provide unbiased, large-scale profiling of neuronal cell types and connections to reveal the essential building blocks of the cortex and the principles governing their assembly into cortical circuits. Using advanced techniques for tissue slicing, multiple simultaneous whole-cell recording, and morphological reconstruction, we are able to provide a comprehensive view of the connectivity between diverse types of neurons, particularly among types of γ-aminobutyric acid–releasing (GABAergic) interneurons, in the adult animal.

Rationale

We took advantage of a method for preparing high-quality slices of adult tissue and combined this technique with octuple simultaneous whole-cell recordings, followed by an improved staining method that allowed detailed recovery of axonal and dendritic arbor morphology. These data allowed us to perform a census of morphologically and electrophysiologically defined neuronal types (primarily GABAergic interneurons) in neocortical layers 1, 2/3, and 5 (L1, L23, and L5, respectively) and to observe their connectivity patterns in adult animals.

Results

Our large-scale, comprehensive profiling of neocortical neurons differentiated 15 major types of interneurons, in addition to two lamina-defined types of pyramidal neurons (L23 and L5). Cortical interneurons comprise two types in L1 (eNGC and SBC-like), seven in L23 (L23MC, L23NGC, BTC, BPC, DBC, L23BC, and ChC), and six in L5 (L5MC, L5NGC, L5BC, SC, HEC, and DC) (see the figure). Each type has stereotypical electrophysiological properties and morphological features and can be differentiated from all others by cell-type-specific axonal geometry and axonal projection patterns. Importantly, each type of neuron has its own characteristic input-output connectivity profile, connecting with other constituent neuronal types with varying degrees of specificity in postsynaptic targets, laminar location, and synaptic characteristics. Despite the specific connection pattern of each cell type, we found that a small number of simple connectivity motifs are repeated across layers and cell types, defining a canonical cortical microcircuit.

Conclusion

Our comprehensive profiling of neuronal cell types and connections in adult neocortex provides the most complete wiring diagram of neocortical microcircuits to date. Compared with current genetic labels for cell class, which paint the cortex in broad strokes, our analysis of morphological and electrophysiological properties revealed new cell classes and allowed us to derive a small number of simple connectivity rules that were repeated across layers and cell types. This detailed blueprint of cortical wiring should aid efforts to i...
The neural code is believed to have adapted to the statistical properties of the natural environment. However, the principles that govern the organization of ensemble activity in the visual cortex during natural visual input are unknown. We recorded populations of up to 500 neurons in the mouse primary visual cortex and characterized the structure of their activity, comparing responses to natural movies with those to control stimuli. We found that higher-order correlations in natural scenes induce a sparser code, in which information is encoded by reliable activation of a smaller set of neurons and can be read out more easily. This computationally advantageous encoding for natural scenes was state-dependent and apparent only in anesthetized and active awake animals, but not during quiet wakefulness. Our results argue for a functional benefit of sparsification that could be a general principle governing the structure of population activity throughout cortical microcircuits.
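A "sparser code" can be quantified in several ways; one standard measure (not necessarily the one used in this study) is the Treves-Rolls population sparseness, sketched below. The data and values are illustrative, not from the paper.

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls population sparseness for one stimulus frame.
    rates: non-negative firing rates of N neurons. Returns a value in
    (0, 1]; values near 0 mean a few neurons carry most of the activity,
    values near 1 mean activity is spread evenly across the population."""
    rates = np.asarray(rates, dtype=float)
    num = rates.mean() ** 2
    den = (rates ** 2).mean()
    return num / den if den > 0 else 1.0

# A dense code (all 500 neurons equally active) vs. a sparse code
# (10 of 500 neurons carry all the activity).
dense = np.ones(500)
sparse = np.zeros(500)
sparse[:10] = 25.0
print(treves_rolls_sparseness(dense))   # 1.0
print(treves_rolls_sparseness(sparse))  # 0.02
```

Lower values for natural-movie responses than for control stimuli would correspond to the sparsification the abstract describes.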
Convex learning algorithms, such as Support Vector Machines (SVMs), are often seen as highly desirable because they offer strong practical properties and are amenable to theoretical analysis. However, in this work we show how non-convexity can provide scalability advantages over convexity. We show how concave-convex programming can be applied to produce (i) faster SVMs where training errors are no longer support vectors, and (ii) much faster Transductive SVMs.
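The first construction rests on replacing the hinge loss with the non-convex ramp loss, whose concave-convex (CCCP) decomposition yields an outer loop of convex hinge fits. Below is a minimal NumPy sketch, not the authors' algorithm: the exclusion rule approximates the full CCCP subproblem, and all data and parameter choices are illustrative.

```python
import numpy as np

def fit_hinge(X, y, keep, lam=0.01, lr=0.1, epochs=500):
    """Linear hinge-loss SVM via subgradient descent, fit only on the
    points flagged in `keep`."""
    w, b = np.zeros(X.shape[1]), 0.0
    n = keep.sum()
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = keep & (margins < 1)  # hinge-loss violators among kept points
        w -= lr * (lam * w - (y[viol, None] * X[viol]).sum(0) / n)
        b -= lr * (-y[viol].sum() / n)
    return w, b

def ramp_svm(X, y, s=-1.0, n_iter=5):
    """Simplified CCCP-style outer loop for a ramp-loss SVM: points whose
    margin falls below s (typically gross outliers or label noise) are
    excluded from the next convex hinge fit, so training errors can no
    longer become support vectors."""
    keep = np.ones(len(y), dtype=bool)
    for _ in range(n_iter):
        w, b = fit_hinge(X, y, keep)
        new_keep = y * (X @ w + b) >= s
        if np.array_equal(new_keep, keep):  # CCCP iterations have converged
            break
        keep = new_keep
    return w, b, keep

# Two Gaussian classes plus a few grossly mislabelled outliers.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1, 1], 0.5, (100, 2)),
               rng.normal([-1, -1], 0.5, (100, 2)),
               rng.normal([-3, -3], 0.3, (5, 2))])  # outliers labelled +1
y = np.r_[np.ones(100), -np.ones(100), np.ones(5)]
w, b, keep = ramp_svm(X, y)
print(f"{(~keep).sum()} of {len(y)} points excluded from the final convex fit")
```

Because the excluded points contribute nothing to the final convex fit, the solution has fewer support vectors than a standard hinge-loss SVM trained on the same noisy data, which is the source of the speed-up the abstract claims.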
In this paper we study a new framework introduced by Vapnik (1998) and Vapnik (2006) that is an alternative capacity concept to the large margin approach. In the particular case of binary classification, we are given a set of labeled examples, and a collection of "non-examples" that do not belong to either class of interest. This collection, called the Universum, allows one to encode prior knowledge by representing meaningful concepts in the same domain as the problem at hand. We describe an algorithm to leverage the Universum by maximizing the number of observed contradictions, and show experimentally that this approach delivers accuracy improvements over using labeled data alone.
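One common realization of this idea (not necessarily the authors' exact algorithm) adds an epsilon-insensitive penalty that pushes the decision function towards zero on Universum points, so the classifier stays maximally uncertain about concepts belonging to neither class. A minimal linear sketch with subgradient descent, on illustrative synthetic data:

```python
import numpy as np

def usvm(X, y, U, lam=0.01, cu=1.0, eps=0.1, lr=0.1, epochs=500):
    """Linear Universum-SVM sketch: hinge loss on the labelled data plus
    an eps-insensitive loss |f(u)| on Universum points U, trained by
    subgradient descent. lam, cu, eps, lr are illustrative defaults."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                 # hinge-loss violators
        fu = U @ w + b
        uviol = np.abs(fu) > eps           # Universum points judged too confidently
        w -= lr * (lam * w
                   - (y[viol, None] * X[viol]).sum(0) / len(y)
                   + cu * (np.sign(fu[uviol])[:, None] * U[uviol]).sum(0) / len(U))
        b -= lr * (-y[viol].sum() / len(y)
                   + cu * np.sign(fu[uviol]).sum() / len(U))
    return w, b

# Classes separated along dimension 0; Universum points vary only along
# dimension 1, encoding that this direction carries no class information.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([1, 0], 0.5, (100, 2)),
               rng.normal([-1, 0], 0.5, (100, 2))])
y = np.r_[np.ones(100), -np.ones(100)]
U = np.column_stack([np.zeros(50), rng.normal(0, 2, 50)])
w, b = usvm(X, y, U)
print(f"weight on class axis {abs(w[0]):.2f} vs Universum axis {abs(w[1]):.2f}")
```

The Universum term keeps the weight on the uninformative direction near zero, which is the "prior knowledge" effect the abstract describes.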
Despite enormous progress in machine learning, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as network architecture and learning rule. Their joint effect, called "inductive bias," determines how well any learning algorithm (or brain) generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are only informative about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. We highlight some shortcomings of state-of-the-art learning algorithms compared to biological brains and discuss several ideas about how neuroscience can guide the quest for better inductive biases by providing useful constraints on representations and network architecture.
Plants use light as a source of energy and information to detect diurnal rhythms and seasonal changes. Sensing changing light conditions is critical to adjust plant metabolism and to initiate developmental transitions. Here we analyzed transcriptome-wide alterations in gene expression and alternative splicing (AS) of etiolated seedlings undergoing photomorphogenesis upon exposure to blue, red, or white light. Our analysis revealed massive transcriptome reprogramming, reflected in differential expression of ~20% of all genes and changes in several hundred AS events. For more than 60% of all regulated AS events, light promoted the production of a presumably protein-coding variant at the expense of an mRNA with nonsense-mediated decay-triggering features. Accordingly, AS of the putative splicing factor REDUCED RED-LIGHT RESPONSES IN CRY1CRY2 BACKGROUND 1 (RRC1), previously identified as a red light signaling component, was shifted to the functional variant under light. Downstream analyses of candidate AS events pointed to a role of photoreceptor signaling in monochromatic but not in white light. Furthermore, we demonstrated similar AS changes upon light exposure and exogenous sugar supply, with a critical involvement of kinase signaling. We propose that AS is an integration point of signaling pathways that sense and transmit information regarding the energy availability in plants.
Abstract. This chapter presents the PASCAL Evaluating Predictive Uncertainty Challenge, introduces the contributed chapters by the participants who obtained outstanding results, and provides a discussion with some lessons to be learnt. The Challenge was set up to evaluate the ability of machine learning algorithms to provide good "probabilistic predictions", rather than just the usual "point predictions" with no measure of uncertainty, in regression and classification problems. Participants had to compete on a number of regression and classification tasks, and were evaluated by both traditional losses that only take into account point predictions and losses we proposed that evaluate the quality of the probabilistic predictions.
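Losses of this kind include the negative log predictive density (NLPD), which scores a full predictive distribution rather than a point estimate. A minimal sketch assuming Gaussian predictive distributions, with illustrative numbers:

```python
import numpy as np

def nlpd(y_true, mu, sigma):
    """Mean negative log predictive density under Gaussian predictive
    distributions N(mu, sigma^2): penalizes both inaccurate means and
    miscalibrated (over- or under-confident) predictive variances."""
    var = sigma ** 2
    return np.mean(0.5 * np.log(2 * np.pi * var)
                   + (y_true - mu) ** 2 / (2 * var))

y = np.array([1.0, 2.0, 3.0])
mu = np.array([1.1, 1.9, 3.2])
print(nlpd(y, mu, np.full(3, 0.2)))   # error bars that match the residuals
print(nlpd(y, mu, np.full(3, 0.01)))  # same point predictions, overconfident
```

Both models above have identical squared error, so a point-prediction loss cannot distinguish them; the NLPD heavily penalizes the overconfident one, which is exactly the distinction the Challenge's probabilistic losses were designed to capture.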