A coarse-grained representation of neuronal network dynamics is developed in terms of kinetic equations, which are derived by a moment closure directly from the original large-scale integrate-and-fire (I&F) network. This powerful kinetic theory captures the full dynamic range of neuronal networks, from the mean-driven limit (a limit such as the number of neurons N → ∞, in which fluctuations vanish) to the fluctuation-dominated limit (such as in small-N networks). Comparison with full numerical simulations of the original I&F network establishes that the reduced dynamics is very accurate and numerically efficient over all dynamic ranges. Both analytical insight and scale-up of numerical representations can be achieved by this kinetic approach. Here, the theory is illustrated by a study of the dynamical properties of networks of various architectures, including excitatory and inhibitory neurons of both simple and complex type, which exhibit rich dynamic phenomena such as transitions to bistability and hysteresis, even in the presence of large fluctuations. The implications of possible connections between the structure of the bifurcations and the behavior of complex cells are discussed. Finally, I&F networks and kinetic theory are used to discuss orientation selectivity of complex cells in "ring-model" architectures, which characterize how the responses of neurons change from near "orientation pinwheel centers" to far from them.

Neuronal networks, whether real cortical networks (1, 2) or computer models (3, 4), frequently operate in a regime in which spiking is caused by irregular temporal fluctuations of the membrane potential. At this "cortical operating point," the mean membrane potential (e.g., obtained by averaging over many voltage traces under the same stimulus condition, or by averaging locally in time) does not reach firing threshold.
Thus, the spiking process is fluctuation-driven. A theoretical challenge is to construct efficient and effective representations of such fluctuation-driven networks, which are needed both to "scale up" computational models to large enough regions of the cortex to capture interesting cortical processing (such as optical illusions related to "contour completion") and to gain qualitative understanding of the cortical mechanisms underlying this level of cortical processing. In this article, we develop such a construction: starting with large-scale model networks of integrate-and-fire (I&F) neurons, which are sufficiently detailed for modeling neuronal computation in large systems but are difficult to scale up, we tile the cortex with coarse-grained (CG) patches. Each CG patch is sufficiently small that the cortical architecture does not change systematically across it, yet sufficiently large to contain many (hundreds of) neurons. We then derive an effective dynamics that captures the statistical behavior of the many neurons within each CG patch in their interaction with other CG patches. This representation is achieved by a kinetic theory, accomplished by ...
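The fluctuation-driven regime described above can be illustrated with a minimal leaky integrate-and-fire simulation. The sketch below is illustrative only, not the conductance-based model used in the article; the function name `simulate_lif` and all parameter values are our own assumptions, chosen so that the mean drive sits below threshold and spikes arise only from fluctuations in the input.

```python
import numpy as np

def simulate_lif(i_ext, dt=0.1, tau_m=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron.

    i_ext : sequence of input samples, one per time step (arbitrary units)
    Returns (membrane-potential trace, list of spike-step indices).
    """
    v = v_rest
    trace, spikes = [], []
    for step, i in enumerate(i_ext):
        # Membrane equation: tau_m * dV/dt = -(V - v_rest) + I(t)
        v += (dt / tau_m) * (-(v - v_rest) + i)
        if v >= v_thresh:       # threshold crossing: emit a spike ...
            spikes.append(step)
            v = v_reset         # ... and reset the membrane potential
        trace.append(v)
    return np.array(trace), spikes

# Fluctuation-driven regime: the mean drive (0.8) stays below the firing
# threshold (1.0), so any spikes are triggered by the noise term alone.
rng = np.random.default_rng(0)
noisy_drive = 0.8 + 5.0 * rng.standard_normal(20000)
trace, spike_steps = simulate_lif(noisy_drive)
```

In the mean-driven limit, by contrast, a constant supra-threshold input (e.g., `np.full(2000, 2.0)`) produces regular spiking with no noise at all, which is the distinction the kinetic theory is built to span.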
We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm² patch of an input layer of the primary visual cortex. In the model, the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers orientation and spatial-phase preference on cortical cells. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response.

Simple and complex cells may have different tasks in visual perception. Cortical cells must represent spatial properties such as surface brightness and color, as well as the perceptual spatial organization of a scene that is the basis of form. Simple cells are necessary for all of these functions because they are the visual cortical neurons that are able to respond monotonically to signed edge contrast. Complex cells, being insensitive to spatial phase, cannot provide a cortical representation of signed contrast, but they are sensitive to texture, firing at elevated rates in response to stimuli within their receptive fields. Although long-standing and carrying functional implications, the simple/complex classification is hardly sharp. Recent work by Ringach et al. (2) analyzes the extracellular responses of neurons across many experiments in macaque V1. They find that many V1 cells are neither wholly simple nor wholly complex but lie somewhere in between.
And although most cells in V1 might be classified as complex, the cortical layer that receives the bulk of lateral geniculate nucleus (LGN) excitation, 4C, has simple and complex cells in approximately equal proportion. Associated with the simple/complex classification is the influential hierarchical model of Hubel and Wiesel (1), shown schematically in Fig. 1a, wherein simple cells receive geniculate drive and the pooling of their phase-specific outputs drives the phase-invariant responses of complex cells. As we argue in Results, this conception seems difficult to reconcile with recent experimental evidence. Chance, Nelson, and Abbott (3) have put forward a very different model that investigates the possible role of recurrent excitation in creating complex cells. In their model, the phase-specific outputs of excitatory simple cells drive cells coupled together in an excitatory recurrent network (Fig. 1b). Realizing this architecture with a rate model for network activity, they find phase-invariant, complex responses when recurrent excitation in the network dominates that of the simple-cell inputs. Here, we study a large-scale model of the neuronal dynamics in layer 4Cα of macaque V1, whose architecture is known better than that of almost any other cortical area. Ou...
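A standard extracellular measure behind this classification, and the one Ringach et al. build on, is the F1/F0 modulation ratio of the response to a drifting grating: simple cells modulate strongly at the stimulus frequency (F1/F0 > 1), whereas complex cells respond with a relatively unmodulated elevation of rate (F1/F0 < 1). A small, self-contained sketch of that measure follows; the function name and the idealized test signals are our own illustrative assumptions, not code from the modeling study.

```python
import numpy as np

def modulation_ratio(rate, dt, stim_freq):
    """F1/F0 modulation ratio of a response to a drifting grating.

    rate      : firing-rate samples over an integer number of stimulus cycles
    dt        : sample spacing (s)
    stim_freq : temporal frequency of the grating (Hz)

    F0 is the mean rate; F1 is the amplitude of the response component
    at the stimulus frequency.
    """
    t = np.arange(len(rate)) * dt
    f0 = rate.mean()
    # Amplitude of the Fourier component at stim_freq (2x the complex
    # coefficient's magnitude recovers the cosine amplitude).
    f1 = 2.0 * abs(np.mean(rate * np.exp(-2j * np.pi * stim_freq * t)))
    return f1 / f0

# Idealized examples over exactly 4 cycles of a 4-Hz grating:
# a half-wave-rectified sinusoid (simple-like response) vs. a nearly
# constant elevated rate (complex-like response).
dt, f = 0.001, 4.0
t = np.arange(0, 1.0, dt)
simple_like = np.maximum(0.0, np.sin(2 * np.pi * f * t))
complex_like = 1.0 + 0.1 * np.sin(2 * np.pi * f * t)
```

For the half-wave-rectified sinusoid the ratio comes out near π/2 ≈ 1.57 (simple regime), while the nearly flat response gives 0.1 (complex regime), matching the conventional F1/F0 = 1 dividing line.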