Is perception of the whole based on perception of its parts? There is psychological and physiological evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign.
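The additive-only combinations that the abstract credits for parts-based representations can be made concrete with the standard multiplicative-update rules for minimizing ||V − WH||². The abstract does not spell out the update rules, so the following NumPy sketch is an illustration under that common formulation, not the authors' exact procedure; the matrix sizes and toy data are invented:

```python
import numpy as np

def nmf(V, r, n_iter=500, eps=1e-9, seed=0):
    """Factor a non-negative matrix V (n x m) into W (n x r) @ H (r x m),
    with W, H >= 0, by minimizing ||V - WH||_F^2 via multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        # Each update multiplies by a ratio of non-negative terms, so W and H
        # can never change sign: only additive combinations of parts occur.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy data: each column of V is an additive mix of two non-negative "parts".
parts = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
coeffs = np.abs(np.random.default_rng(1).random((2, 30)))
V = parts @ coeffs
W, H = nmf(V, r=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the factors stay non-negative throughout, the learned columns of W tend to align with the underlying parts rather than with holistic components of the kind PCA recovers.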
Comprehensive high-resolution structural maps are central to functional exploration and understanding in biology. For the nervous system, in which high resolution and large spatial extent are both needed, such maps are scarce as they challenge data acquisition and analysis capabilities. Here we present for the mouse inner plexiform layer--the main computational neuropil region in the mammalian retina--the dense reconstruction of 950 neurons and their mutual contacts. This was achieved by applying a combination of crowd-sourced manual annotation and machine-learning-based volume segmentation to serial block-face electron microscopy data. We characterize a new type of retinal bipolar interneuron and show that we can subdivide a known type based on connectivity. Circuit motifs that emerge from our data indicate a functional mechanism for a known cellular response in a ganglion cell that detects localized motion, and predict that another ganglion cell is motion sensitive.
We describe automated technologies to probe the structure of neural tissue at nanometer resolution and use them to generate a saturated reconstruction of a sub-volume of mouse neocortex in which all cellular objects (axons, dendrites, and glia) and many sub-cellular components (synapses, synaptic vesicles, spines, spine apparati, postsynaptic densities, and mitochondria) are rendered and itemized in a database. We explore these data to study physical properties of brain tissue. For example, by tracing the trajectories of all excitatory axons and noting their juxtapositions, both synaptic and non-synaptic, with every dendritic spine we refute the idea that physical proximity is sufficient to predict synaptic connectivity (the so-called Peters' rule). This online minable database provides general access to the intrinsic complexity of the neocortex and enables further data-driven inquiries.
We analyze the "query by committee" algorithm, a method for filtering informative queries from a random stream of inputs. We show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queries. We show that, in particular, this exponential decrease holds for query learning of perceptrons.
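The filtering idea can be sketched in a few lines, with several simplifications: the analysis above assumes committee members sampled from the version space (e.g., by Gibbs sampling), whereas here two perceptrons trained from different random initializations stand in for the committee, and the target vector `w_true`, input dimension, and stream length are all invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_true = rng.standard_normal(d)  # hypothetical target perceptron (unknown to the learner)

def train_perceptron(X, y, seed, epochs=20):
    """Plain perceptron trained on the labeled queries gathered so far."""
    w = np.random.default_rng(seed).standard_normal(d) * 0.01
    for _ in range(epochs):
        for x, t in zip(X, y):
            if np.sign(w @ x) != t:
                w += t * x
    return w

X_lab, y_lab = [], []
queries = 0
for _ in range(300):                      # random stream of unlabeled inputs
    x = rng.standard_normal(d)
    if len(X_lab) < 2:
        disagree = True                   # bootstrap: query the first few inputs
    else:
        w1 = train_perceptron(X_lab, y_lab, seed=1)
        w2 = train_perceptron(X_lab, y_lab, seed=2)
        disagree = np.sign(w1 @ x) != np.sign(w2 @ x)
    if disagree:                          # query the label only on committee disagreement
        X_lab.append(x)
        y_lab.append(np.sign(w_true @ x))
        queries += 1

# Generalization check on held-out inputs.
X_test = rng.standard_normal((2000, d))
w_hat = train_perceptron(X_lab, y_lab, seed=3)
acc = np.mean(np.sign(X_test @ w_hat) == np.sign(X_test @ w_true))
```

Only the inputs on which the committee disagrees are queried, so the number of label requests grows much more slowly than the length of the stream while the queried examples remain informative.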
Here we describe an automated method, which we call serial two-photon (STP) tomography, that achieves high-throughput fluorescence imaging of mouse brains by integrating two-photon microscopy and tissue sectioning. STP tomography generates high-resolution datasets that are free of distortions and can be readily warped in 3D, for example, for comparing multiple anatomical tracings. This method opens the door to routine systematic studies of neuroanatomy in mouse models of human brain disorders.
In many neural systems, sensory information is distributed throughout a population of neurons. We study simple neural network models for extracting this information. The inputs to the networks are the stochastic responses of a population of sensory neurons tuned to directional stimuli. The performance of each network model in psychophysical tasks is compared with that of the optimal maximum likelihood procedure. As a model of direction estimation in two dimensions, we consider a linear network that computes a population vector. Its performance depends on the width of the population tuning curves and is maximal for a certain optimal width, which increases with the level of background activity. Although for narrowly tuned neurons the performance of the population vector is significantly inferior to that of maximum likelihood estimation, the difference between the two is small when the tuning is broad.

For direction estimation we focus on a network that computes a population vector by summing the preferred directions of the neurons weighted by their response magnitudes. Some experimental evidence for this scheme has been found in the generation of saccadic eye movements in primates (7). It has also been suggested as a code for the direction of arm movements (8) and as a model of visual orientation estimation (9-11). Here we study the performance of the population vector relative to the optimal ML estimation. An important outcome of our analysis of direction discrimination is that threshold linear models require adaptation to perform well. We calculate theoretical generalization curves for the amount of transfer of learning from a trained stimulus to novel stimuli. Testing these predictions by psychophysical measurements could shed light on the neuronal mechanisms involved in perception and perceptual learning (12-14).

Population of Direction Selective Neurons.
We consider a population of neurons coding for direction in two dimensions, parametrized by an angle θ from 0 to 2π. For example, these could be simple cells in visual cortex coding the direction of motion of a bar stimulus. We characterize the response of the ith neuron by a single nonnegative integer r_i, the total number of spikes generated by the neuron in a fixed time interval following the onset of the stimulus. Our starting point is the assumption that the response of a neuron to a sensory stimulus is stochastic, namely, that repeated presentations of the same stimulus θ induce responses that vary in a random fashion. The response of a population of N neurons is described by a conditional probability distribution P(r|θ), where the vector notation r is used for the responses r_1, …, r_N. We model the responses {r_i} of the population as independent Poisson random variables. The mean of the spike count of the ith neuron is denoted by ⟨r_i⟩ = f_i(θ), where ⟨…⟩ denotes an average with respect to P(r|θ). The variance of a Poisson process equals its mean, i.e., ⟨(δr_i)²⟩ = f_i(θ), where δr_i = r_i − ⟨r_i⟩. A similar linear relationship…
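The setup above can be simulated directly: independent Poisson neurons with tuning curves f_i(θ), a population-vector readout that sums preferred directions weighted by spike counts, and a brute-force ML estimate over a grid of candidate directions. The von Mises-shaped tuning curve, its parameters, and the grid resolution are our assumptions for illustration; the excerpt does not fix a tuning-curve form:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
theta_pref = np.linspace(0, 2 * np.pi, N, endpoint=False)  # preferred directions

def tuning(theta, width=0.5, peak=20.0, background=1.0):
    """Mean spike counts f_i(theta); the von Mises shape is an assumption."""
    return background + peak * np.exp((np.cos(theta - theta_pref) - 1) / width)

def pop_vector(r):
    """Sum preferred directions weighted by response magnitudes."""
    return np.arctan2(r @ np.sin(theta_pref), r @ np.cos(theta_pref)) % (2 * np.pi)

def ml_estimate(r, grid=np.linspace(0, 2 * np.pi, 720, endpoint=False)):
    """Brute-force ML for independent Poisson responses: maximize
    sum_i [ r_i log f_i(theta) - f_i(theta) ] over candidate directions."""
    f = tuning(grid[:, None])                 # (720, N) mean rates
    loglik = (r * np.log(f) - f).sum(axis=1)
    return grid[np.argmax(loglik)]

def circ_err(a, b):
    """Absolute angular difference on the circle."""
    return np.abs(np.angle(np.exp(1j * (a - b))))

theta0 = 1.0
errs_pv, errs_ml = [], []
for _ in range(500):
    r = rng.poisson(tuning(theta0))           # one stochastic population response
    errs_pv.append(circ_err(pop_vector(r), theta0))
    errs_ml.append(circ_err(ml_estimate(r), theta0))
```

With broad tuning, as here, the mean error of the population vector should come out close to that of ML, consistent with the comparison described in the abstract; narrowing the tuning curves widens the gap in ML's favor.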