Visual neuroscientists require accurate control of visual stimulation. However, few stimulator solutions simultaneously offer high spatio-temporal resolution and free control over the spectra of the light sources, because they rely on off-the-shelf technology developed for human trichromatic vision. Importantly, consumer displays fail to drive UV-shifted short wavelength-sensitive photoreceptors, which strongly contribute to visual behaviour in many animals, including mice, zebrafish and fruit flies. Moreover, many non-mammalian species feature more than three spectral photoreceptor types. Here, we present a flexible, spatial visual stimulator with up to six arbitrary spectrum chromatic channels. It combines a standard digital light processing engine with open source hard- and software that can be easily adapted to the experimentalist’s needs. We demonstrate the capability of this general visual stimulator experimentally in the in vitro mouse retinal whole-mount and the in vivo zebrafish. With this work, we intend to start a community effort of sharing and developing a common stimulator design for vision research.
Summary
Pressures for survival drive the adaptation of sensory circuits to a species' habitat, making it essential to characterise natural scenes statistically. Mice, a prominent visual system model, are dichromats with enhanced sensitivity to green and UV. Their visual environment, however, is rarely considered. Here, we built a UV-green camera to record footage from mouse habitats. We found that chromatic contrast diverges greatly in the upper but not the lower visual field, an environmental difference that may underlie the species' superior colour discrimination in the upper visual field. Moreover, training an autoencoder on upper but not lower visual field scenes was sufficient for colour-opponent filters to emerge. Furthermore, the upper visual field was biased towards dark UV contrasts, paralleled by more light-offset-sensitive cells in the ventral retina. Finally, footage recorded at twilight suggests that UV promotes aerial predator detection. Our findings support the idea that natural scene statistics shaped early visual processing in evolution.

Lead contact
Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, Thomas Euler (thomas.euler@cin.uni-tuebingen.de).
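The emergence of colour opponency from correlated chromatic channels can be illustrated with a toy linear model. The sketch below uses synthetic UV/green values (an illustrative assumption, not the recorded habitat footage) and PCA, which is equivalent to the optimal linear autoencoder: one learned component weights both channels with the same sign (achromatic), the other with opposite signs (colour-opponent).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for UV/green pixel values: the two chromatic
# channels share most of their variance, as in natural scenes.
# (Illustrative data, not the recorded habitat footage.)
n = 5000
lum = rng.standard_normal(n)                  # shared achromatic signal
uv = lum + 0.3 * rng.standard_normal(n)
green = lum + 0.3 * rng.standard_normal(n)
X = np.column_stack([uv, green])
X -= X.mean(axis=0)

# PCA is the optimal linear autoencoder; the eigenvectors of the
# channel covariance are the learned "filters".
cov = X.T @ X / n
eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues ascending

pc_achromatic = eigvecs[:, -1]  # largest variance: UV and green, same sign
pc_opponent = eigvecs[:, 0]     # smallest variance: UV minus green
```

Because the two channels are strongly correlated, most variance lies along their sum, so the low-variance residual direction is necessarily opponent; a nonlinear autoencoder trained on real footage can show the same structure in its learned filters.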
The retina decomposes visual stimuli into parallel channels that encode different features of the visual environment. Central to this computation is the synaptic processing in a dense layer of neuropil, the so-called inner plexiform layer (IPL). Here, different types of bipolar cells stratifying at distinct depths relay the excitatory feedforward drive from photoreceptors to amacrine and ganglion cells. Current experimental techniques for studying processing in the IPL do not allow imaging the entire IPL simultaneously in the intact tissue. Here, we extend a two-photon microscope with an electrically tunable lens, allowing us to obtain optical vertical slices of the IPL, which provide a complete picture of the response diversity of bipolar cells at a "single glance". The nature of these axial recordings additionally allowed us to isolate and investigate batch effects, i.e. inter-experimental variations resulting in systematic differences in response speed. As a proof of principle, we developed a simple model that disentangles biological from experimental causes of variability, allowing us to recover the characteristic gradient of response speeds across the IPL with higher precision than before. Our new framework will make it possible to study the computations performed in the central synaptic layer of the retina more efficiently.

The primary excitatory pathway of the mouse retina consists of photoreceptors, bipolar cells (BCs) and retinal ganglion cells (RGCs) (reviewed in refs. 1,2). At the core of this pathway is the inner plexiform layer (IPL), a dense synaptic plexus composed of the axon terminals of BCs, the neurites of amacrine cells, and the dendrites of RGCs. Specifically, the photoreceptor signal is relayed by the BCs to the RGCs via glutamatergic synapses (reviewed in ref. 3).
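The idea of disentangling biological from experimental (batch) variability can be sketched with a simple additive fixed-effects model; the data and the least-squares fit below are illustrative assumptions, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each cell's response speed = depth effect (biological)
# + batch offset (experimental) + noise. All values are synthetic.
n_depths, n_batches, n_obs = 6, 5, 600
depth_true = np.linspace(-1.0, 1.0, n_depths)   # "biological" IPL-depth gradient
batch_true = rng.normal(0.0, 0.5, n_batches)    # systematic per-experiment offsets

depth_id = rng.integers(0, n_depths, n_obs)
batch_id = rng.integers(0, n_batches, n_obs)
speed = depth_true[depth_id] + batch_true[batch_id] + rng.normal(0, 0.1, n_obs)

# Two-way fixed-effects design matrix (one-hot depth and batch columns).
X = np.zeros((n_obs, n_depths + n_batches))
X[np.arange(n_obs), depth_id] = 1.0
X[np.arange(n_obs), n_depths + batch_id] = 1.0

# Least-squares fit; recentre the batch offsets to zero mean to fix the
# additive ambiguity between the two effect groups.
coef, *_ = np.linalg.lstsq(X, speed, rcond=None)
shift = coef[n_depths:].mean()
coef[:n_depths] += shift
coef[n_depths:] -= shift
depth_hat = coef[:n_depths]   # recovered depth gradient, up to a constant
```

After removing the estimated batch offsets, the recovered depth profile tracks the true gradient far more closely than naively averaging across experiments would.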
This "vertical" transmission is shaped by mostly inhibitory interactions with amacrine cells, which integrate signals laterally along and/or vertically across the IPL (reviewed in ref. 4). Amacrine cells modulate, for instance, the sensitivity of BCs to certain spatio-temporal features 5-7. Within the IPL, the axon terminals of each of the 14 BC types 8-12 project to a distinct depth, with axonal profiles of different BC types partially overlapping and jointly covering the whole depth of the IPL 10,11,13. Functionally, each BC type constitutes a particular feature channel with certain temporal dynamics 7, including On and Off BC types sensitive to light increments or decrements, respectively 14, different kinetics 15,16, and chromatic signals 17,18. Some of these features are systematically mapped across the IPL: For example, On BCs project to the inner and Off BCs to the outer portion of the IPL 14,19. Also kinetic response properties appear to be mapped, with the axonal profiles of more transient BCs localised in the IPL centre 7,15,20,21. To study BC function, early studies mostly used single-cell electrical recordings in vertical slices, where many lateral connections (e.g. large-scale amacrine cells) are severed, o...
Although 2-D canonical correlation analysis (2DCCA) has been proposed to reduce computational complexity while preserving the local structure of image data, the canonical variables learned by 2DCCA are linear combinations of all the original variables, which makes the solutions hard to interpret and may reduce generality. In this paper, we propose a sparse 2-D canonical correlation analysis (S2DCCA) to overcome these drawbacks of 2DCCA and apply it to image feature extraction. The basic idea of S2DCCA is to impose two lasso penalties on the objective function of 2DCCA, obtaining two sets of sparse projection directions via low-rank matrix approximation. We conduct extensive experiments on both the FERET and AR databases to evaluate the performance of the proposed method.

Index Terms: canonical correlation analysis, low-rank matrix approximation, sparse 2-D canonical correlation analysis.
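The core mechanism, lasso penalties yielding sparse canonical directions via a low-rank (here rank-1) matrix approximation, can be sketched for the simpler vector-valued CCA case. This is a minimal illustration in the spirit of penalized matrix decomposition, not the paper's 2-D algorithm; function names and parameters are assumptions.

```python
import numpy as np

def soft_threshold(v, lam):
    """Lasso proximal step: shrink each entry of v toward zero by lam."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_cca(X, Y, lam_x=0.1, lam_y=0.1, n_iter=100):
    """One pair of sparse canonical directions via penalized power
    iteration on the cross-covariance (a rank-1 sparse matrix
    approximation)."""
    C = X.T @ Y / len(X)                            # cross-covariance estimate
    u = np.ones(C.shape[0]) / np.sqrt(C.shape[0])   # deterministic init
    v = np.zeros(C.shape[1])
    for _ in range(n_iter):
        v = soft_threshold(C.T @ u, lam_y)          # update right direction
        if np.linalg.norm(v) > 0:
            v /= np.linalg.norm(v)
        u = soft_threshold(C @ v, lam_x)            # update left direction
        if np.linalg.norm(u) > 0:
            u /= np.linalg.norm(u)
    return u, v
```

With the penalties set to zero this reduces to plain power iteration on the cross-covariance (ordinary CCA up to whitening); increasing `lam_x`/`lam_y` zeroes out the uninformative variables, which is what makes the solutions interpretable.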
Variability, stochastic or otherwise, is a central feature of neural circuits. Yet the means by which variation and uncertainty are derived from noisy observations of neural activity is often unprincipled, with too much weight placed on numerical convenience at the cost of statistical rigour. For two-photon imaging data, composed of fundamentally probabilistic streams of photon detections, the problem is particularly acute. Here, we present a complete statistical pipeline for the inference and analysis of neural activity using Gaussian Process Regression, applied to two-photon recordings of light-driven activity in ex vivo mouse retina. We demonstrate the flexibility and extensibility of these models, considering cases with non-stationary statistics, driven by complex parametric stimuli, in signal discrimination, hierarchical clustering and inference tasks. Sparse approximation methods allow these models to be fitted rapidly, permitting them to actively guide the design of light stimulation in the midst of ongoing two-photon experiments.
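The Gaussian Process Regression at the heart of such a pipeline fits in a few lines. Below is a minimal numpy sketch of exact GP posterior inference with an RBF kernel; the kernel choice, hyperparameters, and data are illustrative assumptions, not the models or sparse approximations used in the study.

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise_var=0.1, **kernel_kw):
    """Posterior mean and variance of a zero-mean GP given noisy observations."""
    K = rbf_kernel(x_train, x_train, **kernel_kw) + noise_var * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test, **kernel_kw)
    K_ss = rbf_kernel(x_test, x_test, **kernel_kw)
    # Numerically stable solve via the Cholesky factor of K.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)
    return mean, var
```

The posterior variance is what makes the approach useful for guiding experiments: sampling new stimuli where the variance is largest is the simplest form of the active stimulus design the abstract mentions. Sparse (inducing-point) approximations replace the O(n^3) Cholesky step to make this fast enough to run mid-experiment.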
The collected dataset comprises 3,360 images, including 2,460 images for training and 900 images for testing. Specifically, we manually annotated persons with points in each video frame. Fourteen algorithms from 15 institutes were submitted to the VisDrone-CC2020 Challenge. We provide a detailed analysis of the evaluation results and conclude the challenge. More information can be found at the website: http://www.aiskyeye.com/.
To transfer such results to other species, it is critical to keep in mind that each species is adapted to a different environment and employs different strategies to survive and procreate (reviewed in Baden and Osorio, 2018). In vision research, classical studies often used monkeys and cats as model organisms, which, with respect to visual stimuli, e.g. in terms of spatial resolution and spectral sensitivity range, have requirements similar to those of humans.
Today, frequently used animal models, such as Drosophila, zebrafish or rodents, feature adaptations of their visual systems "outside the specifications" of human vision: for instance, all of the aforementioned species possess UV-sensitive photoreceptors, zebrafish have tetrachromatic vision, and both zebrafish and Drosophila display higher flicker fusion frequencies than most mammals (reviewed in Marshall and Arikawa, 2014; Boström et al., 2016). Still, many studies in these species use visual stimulation devices