The fruit fly's natural visual environment is often characterized by light intensities spanning several orders of magnitude and by rapidly varying contrast across space and time. Fruit fly photoreceptors robustly transduce and, in conjunction with amacrine cells, process visual scenes and provide the resulting signal to downstream targets. Here we model the first step of visual processing in the photoreceptor-amacrine cell layer. We propose a novel divisive normalization processor (DNP) for modeling the computation taking place in the photoreceptor-amacrine cell layer. The DNP explicitly models the photoreceptor feedforward and temporal feedback processing paths and the spatio-temporal feedback path of the amacrine cells. We then formally characterize the contrast gain control of the DNP and provide sparse identification algorithms that can efficiently identify each of the feedforward and feedback DNP components. The algorithms presented here are the first demonstration of tractable and robust identification of the components of a divisive normalization processor. The sparse identification algorithms can be readily employed in experimental settings, and their effectiveness is demonstrated with several examples.
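To make the divisive-normalization idea concrete, the sketch below shows its core operation: each unit's feedforward drive is divided by pooled activity from its neighborhood, so high-contrast inputs are compressed more strongly than low-contrast ones. This is a minimal illustration of divisive normalization in general, not the paper's DNP; the semi-saturation constant `sigma` and the moving-average pool are assumptions chosen for clarity.

```python
import numpy as np

def divisive_normalization(stimulus, sigma=0.1, pool_size=3):
    """Minimal divisive-normalization sketch (not the paper's DNP).

    Each unit's feedforward drive is divided by a local pool of
    neighboring activity plus a semi-saturation constant sigma,
    implementing a simple form of contrast gain control.
    """
    drive = np.asarray(stimulus, dtype=float)
    # Pool neighboring activity with a simple moving average
    kernel = np.ones(pool_size) / pool_size
    pooled = np.convolve(np.abs(drive), kernel, mode="same")
    return drive / (sigma + pooled)

# A high-contrast input is compressed far more than a low-contrast one:
weak = divisive_normalization(np.array([0.1, 0.1, 0.1]))
strong = divisive_normalization(np.array([10.0, 10.0, 10.0]))
```

Although the inputs differ by a factor of 100, the normalized outputs differ by less than a factor of 2, which is the gain-control behavior the abstract characterizes formally for the DNP.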
The Fruit Fly Brain Observatory (FFBO) is a collaborative effort between experimentalists, theorists and computational neuroscientists at Columbia University, National Tsing Hua University and Sheffield University with the goals of (i) creating an open platform for the emulation and biological validation of fruit fly brain models in health and disease, (ii) standardizing tools and methods for graphical rendering, representation and manipulation of brain circuits, (iii) standardizing tools for the representation of fruit fly brain data and its abstractions, with support for natural language queries, and (iv) creating a focus for the neuroscience community with interests in the fruit fly brain and encouraging the sharing of fruit fly brain structural data and executable code worldwide. NeuroNLP and NeuroGFX, two key FFBO applications, address two major challenges, respectively: (i) seamlessly integrating structural and genetic data from multiple sources so that they can be intuitively queried, effectively visualized and extensively manipulated, and (ii) devising executable brain circuit models anchored in structural data for understanding and developing novel hypotheses about brain function. NeuroNLP enables researchers to use plain English (or other languages) to probe biological data that are integrated into a novel database system, called NeuroArch, which we developed for integrating biological and abstract data models of the fruit fly brain. With powerful 3D graphical visualization, NeuroNLP presents a highly accessible portal for fruit fly brain data. NeuroGFX provides users with highly intuitive tools to execute neural circuit models with Neurokernel, an open-source platform for emulating the fruit fly brain, with full data support from the NeuroArch database and visualization support from an interactive graphical interface. Brain circuits can be configured with high flexibility and investigated on multiple levels, e.g., whole brain, neuropil, and local circuit levels.
The FFBO is publicly available and accessible at http://fruitflybrain.org from any modern web browser, including those running on smartphones.
Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm that uses only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second-order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for computing the change of the local phase. The second building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm to several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes, and we compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm.
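The first building block above, measuring the temporal change of local phase, can be illustrated with a toy 1D computation: extract the phase of a windowed FFT in each local block, then take the wrapped phase difference between two frames. This is a simplified stand-in for the paper's FFT-based algorithm, not a reimplementation of it; the window size, the use of a single non-DC frequency bin, and the 1D setting are all assumptions made for brevity.

```python
import numpy as np

def local_phase(frame, window=8):
    """Phase of the lowest non-DC FFT bin in each non-overlapping window.

    A toy stand-in for the windowed (local) phase used by
    phase-based motion detectors; the window size is an assumption.
    """
    frame = np.asarray(frame, dtype=float)
    n_windows = frame.size // window
    phases = np.empty(n_windows)
    for i in range(n_windows):
        block = frame[i * window:(i + 1) * window]
        phases[i] = np.angle(np.fft.fft(block)[1])
    return phases

def phase_change(frame_t0, frame_t1, window=8):
    """Wrapped temporal difference of local phase between two frames."""
    d = local_phase(frame_t1, window) - local_phase(frame_t0, window)
    return np.angle(np.exp(1j * d))  # wrap to (-pi, pi]

# A translating sinusoid produces a constant local phase shift in every
# window, while a static frame produces none.
x = np.arange(64)
f0 = np.sin(2 * np.pi * x / 8)
f1 = np.sin(2 * np.pi * (x - 1) / 8)  # same pattern shifted by one pixel
moving = phase_change(f0, f1)
static = phase_change(f0, f0)
```

A one-pixel shift of a period-8 sinusoid shows up as a uniform phase change of -π/4 across all windows, whereas the static comparison yields zero everywhere; thresholding such a phase-change map is the essence of the detector stage described above.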