Abstract: Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.
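Inverse problems of this form (recover a volume g from a measurement f = Hg, where H is the light field forward model) are commonly solved with a Richardson-Lucy-style multiplicative update, which is well suited to GPU acceleration and Poisson-distributed photon counts. A minimal 1-D sketch, assuming a known measurement matrix H (the abstract does not specify the exact update rule, so this is illustrative only):

```python
import numpy as np

def richardson_lucy(measurement, H, n_iter=1000, eps=1e-12):
    """Iteratively reconstruct a nonnegative volume g from f = H @ g
    using the multiplicative Richardson-Lucy update for Poisson noise."""
    g = np.ones(H.shape[1])                  # flat initial volume estimate
    Ht_ones = H.T @ np.ones(H.shape[0])      # normalization (column sums of H)
    for _ in range(n_iter):
        forward = H @ g                      # simulate the light field measurement
        ratio = measurement / (forward + eps)
        g *= (H.T @ ratio) / (Ht_ones + eps)  # multiplicative, nonnegativity-preserving
    return g
```

Because the update is purely multiplicative, the estimate stays nonnegative, which matches the physical constraint on fluorescence intensity.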
Figure 1: The DeepView architecture. (a) The network takes a sparse set of input images shot from different viewpoints. (b, c) The scene is reconstructed using learned gradient descent, producing a multi-plane image (a series of fronto-parallel, RGBA textured planes). (d) The multi-plane image is suitable for real-time, high-quality rendering of novel viewpoints. The result above uses four input views in a 30cm × 20cm rectangular layout. The novel view was rendered with a virtual camera positioned at the centroid of the four input views. More results, including video and an interactive viewer, at: https://augmentedperception.github.io/deepview/

Abstract: We present a novel approach to view synthesis using multiplane images (MPIs). Building on recent advances in learned gradient descent, our algorithm generates an MPI from a set of sparse camera viewpoints. The resulting method incorporates occlusion reasoning, improving performance on challenging scene features such as object boundaries, lighting reflections, thin structures, and scenes with high depth complexity. We show that our method achieves high-quality, state-of-the-art results on two datasets: the Kalantari light field dataset, and a new camera array dataset, Spaces, which we make publicly available.
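Rendering a multi-plane image reduces to compositing its RGBA planes back to front with the standard "over" operator (for a novel viewpoint, each plane would first be warped by a homography, which this sketch omits). A minimal sketch:

```python
import numpy as np

def composite_mpi(planes):
    """Composite a list of fronto-parallel RGBA planes, ordered back to
    front, into an RGB image using the 'over' operator. Each plane is an
    (H, W, 4) float array with straight (non-premultiplied) RGBA in [0, 1]."""
    h, w, _ = planes[0].shape
    out = np.zeros((h, w, 3))
    for plane in planes:                          # back to front
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)   # 'over' compositing
    return out
```

This is why the representation supports real-time rendering: the per-view work is a fixed number of warps and alpha blends, both cheap on graphics hardware.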
Prolonged behavioral challenges can cause animals to switch from active to passive coping strategies to manage effort expenditure during stress; such normally adaptive behavioral state transitions can become maladaptive in psychiatric disorders such as depression. The underlying neuronal dynamics and brainwide interactions important for passive coping have remained unclear. Here, we develop a paradigm to study these behavioral state transitions at cellular resolution across the entire vertebrate brain. Using brainwide imaging in zebrafish, we observed that the transition to passive coping is manifested by progressive activation of neurons in the ventral (lateral) habenula. Activation of these ventral-habenula neurons suppressed downstream neurons in the serotonergic raphe nucleus and caused behavioral passivity, whereas inhibition of these neurons prevented passivity. Data-driven recurrent neural network modeling pointed to altered intra-habenula interactions as a contributory mechanism. These results demonstrate ongoing encoding of experience features in the habenula, which guides recruitment of downstream networks and imposes a passive coping behavioral strategy.
The goal of understanding living nervous systems has driven interest in high-speed and large field-of-view volumetric imaging at cellular resolution. Light sheet microscopy approaches have emerged for cellular-resolution functional brain imaging in small organisms such as larval zebrafish, but remain fundamentally limited in speed. Here, we have developed SPED light sheet microscopy, which combines a large volumetric field of view via an extended depth of field with the optical sectioning of light sheet microscopy, thereby eliminating the need to physically scan detection objectives for volumetric imaging. SPED enables scanning of thousands of volumes per second, limited only by camera acquisition rate, by harnessing optical mechanisms that normally result in unwanted spherical aberrations. We demonstrate the capabilities of SPED microscopy by performing fast sub-cellular resolution imaging of CLARITY mouse brains and cellular-resolution volumetric Ca²⁺ imaging of entire zebrafish nervous systems. Together, SPED light sheet methods enable high-speed cellular-resolution volumetric mapping of biological system structure and function.
We present a system for capturing, reconstructing, compressing, and rendering high quality immersive light field video. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlying multi-plane image (MPI) scene representation with a collection of spherical shells that are better suited for representing panoramic light field content. We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final compressed representation is lightweight and can be rendered on mobile VR/AR platforms or in a web browser. We demonstrate light field video results using data from the 16-camera rig of [Pozo et al. 2019] as well as a new low-cost hemispherical array made from 46 synchronized action sports cameras. From this data we produce six-degree-of-freedom volumetric videos with a wide 70 cm viewing baseline, 10 pixels per degree angular resolution, and a wide field of view, at a video frame rate of 30 frames per second. Advancing over previous work, we show that our system is able to reproduce challenging content such as view-dependent reflections, semi-transparent surfaces, and near-field objects as close as 34 cm to the surface of the camera rig.
Background: The determination and regulation of cell morphology are critical components of cell-cycle control, fitness, and development in both single-cell and multicellular organisms. Understanding how environmental factors, chemical perturbations, and genetic differences affect cell morphology requires precise, unbiased, and validated measurements of cell-shape features. Results: Here we introduce two software packages, Morphometrics and BlurLab, that together enable automated, computationally efficient, unbiased identification of cells and morphological features. We applied these tools to bacterial cells because the small size of these cells and the subtlety of certain morphological changes have thus far obscured correlations between bacterial morphology and genotype. We used an online resource of images of the Keio knockout library of nonessential genes in the Gram-negative bacterium Escherichia coli to demonstrate that cell width, width variability, and length significantly correlate with each other and with drug treatments, nutrient changes, and environmental conditions. Further, we combined morphological classification of genetic variants with genetic meta-analysis to reveal novel connections among gene function, fitness, and cell morphology, thus suggesting potential functions for unknown genes and differences in modes of action of antibiotics. Conclusions: Morphometrics and BlurLab set the stage for future quantitative studies of bacterial cell shape and intracellular localization. The previously unappreciated connections between morphological parameters measured with these software packages and the cellular environment point toward novel mechanistic connections among physiological perturbations, cell fitness, and growth. Electronic supplementary material: The online version of this article (doi:10.1186/s12915-017-0348-8) contains supplementary material, which is available to authorized users.
Whole-brain recordings give us a global perspective of the brain in action. In this study, we describe a method using light field microscopy to record near-whole brain calcium and voltage activity at high speed in behaving adult flies. We first obtained global activity maps for various stimuli and behaviors. Notably, we found that brain activity increased on a global scale when the fly walked but not when it groomed. This global increase with walking was particularly strong in dopamine neurons. Second, we extracted maps of spatially distinct sources of activity as well as their time series using principal component analysis and independent component analysis. The characteristic shapes in the maps matched the anatomy of subneuropil regions and, in some cases, a specific neuron type. Brain structures that responded to light and odor were consistent with previous reports, confirming the new technique’s validity. We also observed previously uncharacterized behavior-related activity as well as patterns of spontaneous voltage activity.
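The PCA step of such a pipeline can be computed directly from the singular value decomposition of the centered (time × pixels) activity movie: the right singular vectors give spatial maps and the scaled left singular vectors give the corresponding time series (the subsequent ICA unmixing is omitted here). A minimal sketch, assuming the movie has already been motion-corrected and flattened to 2-D:

```python
import numpy as np

def pca_maps(movie, n_components=3):
    """Extract spatial maps and time series from a (time, pixels) activity
    movie using PCA via SVD. Returns (maps, traces), where maps is
    (n_components, pixels) and traces is (time, n_components)."""
    X = movie - movie.mean(axis=0)                     # center each pixel over time
    U, S, Vt = np.linalg.svd(X, full_matrices=False)   # X = U @ diag(S) @ Vt
    traces = U[:, :n_components] * S[:n_components]    # temporal components
    maps = Vt[:n_components]                           # spatial maps
    return maps, traces
```

The product of the returned traces and maps is the best rank-n reconstruction of the centered movie, which is what makes PCA a useful dimensionality-reduction front end before ICA.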
Light field microscopy has been proposed as a new high-speed volumetric computational imaging method that enables reconstruction of 3-D volumes from captured projections of the 4-D light field. Recently, a detailed physical optics model of the light field microscope has been derived, which led to the development of a deconvolution algorithm that reconstructs 3-D volumes with high spatial resolution. However, the spatial resolution of the reconstructions has been shown to be non-uniform across depth, with some z planes showing high resolution and others, particularly at the center of the imaged volume, showing very low resolution. In this paper, we enhance the performance of the light field microscope using wavefront coding techniques. By including phase masks in the optical path of the microscope we are able to address this non-uniform resolution limitation. We have also found that superior control over the performance of the light field microscope can be achieved by using two phase masks rather than one, placed at the objective's back focal plane and at the microscope's native image plane. We present an extended optical model for our wavefront coded light field microscope and develop a performance metric based on Fisher information, which we use to choose appropriate phase mask parameters. We validate our approach using both simulated data and experimental resolution measurements of a USAF 1951 resolution target, and demonstrate the utility for biological applications with in vivo volumetric calcium imaging of the larval zebrafish brain.
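For Poisson-distributed photon counts with per-pixel means μ_i(z), the Fisher information about an axial position z is I(z) = Σ_i (∂μ_i/∂z)² / μ_i; a design metric of this kind rewards optical configurations whose measurements change rapidly with depth. A toy numerical sketch with a hypothetical Gaussian defocus model (not the paper's actual PSF, which depends on the chosen phase masks):

```python
import numpy as np

def fisher_information_z(mu, z, dz=1e-4):
    """Numerical Fisher information for axial position z under Poisson
    noise: I(z) = sum_i (d mu_i / dz)^2 / mu_i, where mu(z) returns the
    expected photon count in each pixel."""
    m = mu(z)
    dm = (mu(z + dz) - mu(z - dz)) / (2 * dz)   # central difference in z
    return np.sum(dm**2 / m)

# Hypothetical defocus model: a 1-D Gaussian spot whose width grows with |z|.
x = np.linspace(-5.0, 5.0, 201)
def mu(z, photons=1000.0, bg=1.0):
    sigma = 1.0 + 0.5 * abs(z)                  # blur increases away from focus
    psf = np.exp(-x**2 / (2 * sigma**2))
    psf /= psf.sum()
    return photons * psf + bg                   # counts plus constant background
```

In this toy model the information drops as the spot blurs out at large |z|, illustrating the depth-dependent resolution non-uniformity that the phase mask optimization is meant to flatten.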