Calcium imaging records large-scale neuronal activity with cellular resolution in vivo. Automated, fast, and reliable segmentation of active neurons is a critical step when neuronal signals are used in real-time behavioral studies to discover neuronal coding properties. Here, to exploit the full spatiotemporal information in two-photon calcium imaging movies, we propose a 3D convolutional neural network to identify and segment active neurons. Using a variety of two-photon microscopy datasets, we show that our method outperforms state-of-the-art techniques and is on a par with manual segmentation. Furthermore, we demonstrate that a network trained on data recorded at one cortical layer can accurately segment active neurons from another layer with a different neuron density. Finally, our work documents significant tabulation flaws in one of the most cited and active online scientific challenges in neuron segmentation. As our computationally fast method is an invaluable tool for a large spectrum of real-time optogenetic experiments, we have made our open-source software and carefully annotated dataset freely available online.
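The key idea above is that a 3D kernel pools information jointly across time and space, rather than filtering each frame independently. As a minimal illustration (not the paper's network, just the underlying operation), the sketch below applies a single hand-made 3D averaging filter to a toy movie of shape (time, height, width) and shows that a brief localized transient produces a strong spatiotemporal response:

```python
import numpy as np

def conv3d_valid(movie, kernel):
    """Valid-mode 3D correlation of a (T, H, W) movie with a (t, h, w) kernel.

    A 3D filter responds to activity that is coherent in both space and
    time, unlike a 2D filter applied frame by frame.
    """
    T, H, W = movie.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(movie[i:i + t, j:j + h, k:k + w] * kernel)
    return out

# Toy movie: background noise plus a brief calcium-like transient
rng = np.random.default_rng(0)
movie = rng.normal(0.0, 0.1, size=(8, 16, 16))
movie[2:5, 6:9, 6:9] += 1.0            # 3-frame transient at one location

kernel = np.ones((3, 3, 3)) / 27.0     # simple spatiotemporal averaging filter
response = conv3d_valid(movie, kernel)
print(response.shape)                  # (6, 14, 14)
```

A learned 3D CNN replaces this single fixed kernel with stacks of trainable kernels and nonlinearities, but the response map above already shows why the joint filtering suppresses noise that is not correlated across frames.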
Large-scale simultaneous recording of fast patterns of neural activity remains challenging. Volumetric imaging modalities such as scanning-beam light-sheet microscopy (LSM) and wide-field light-field microscopy (WFLFM) fall short of this goal because of complex calibration procedures, low spatial resolution, or high photobleaching. Here, we demonstrate a hybrid light-sheet light-field microscopy (LSLFM) modality that yields high spatial resolution with simplified alignment of the imaging and excitation planes. By combining the selective excitation of light-sheet illumination with volumetric light-field detection, LSLFM overcomes the current limitations of scanning-beam LSM and WFLFM implementations. Compared with LSM, LSLFM captures the same volumetric data at a camera frame rate 50× lower than that of LSM and requires no dynamic calibration. Compared with WFLFM, LSLFM delivers moderate improvements in spatial resolution, a 10× improvement in contrast when imaging fluorescent beads, and a 3.2× higher signal-to-noise ratio in the detection of neural activity when imaging live zebrafish expressing a genetically encoded calcium sensor.
Optical coherence tomography (OCT) is used for the diagnosis of esophageal diseases such as Barrett's esophagus. Given the large volume of OCT data acquired, automated analysis is needed. Here we propose a bilateral connectivity-based neural network for in vivo human esophageal OCT layer segmentation. Our method, connectivity-based CE-Net (Bicon-CE), formulates layer segmentation as a combination of pixel connectivity modeling and pixel-wise tissue classification. Bicon-CE outperformed other widely used neural networks and reduced common topological prediction errors in tissue from healthy patients and from patients with Barrett's esophagus. This is the first end-to-end learning method developed for automatic segmentation of the epithelium in in vivo human esophageal OCT images.
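The connectivity formulation mentioned above replaces a single per-pixel label with that pixel's relationship to its 8 neighbors. As a hedged sketch of the general idea (not the Bicon-CE architecture itself), the code below encodes a binary mask as an 8-channel connectivity map and decodes it back, showing that the representation is label-equivalent for connected regions while naturally suppressing isolated one-pixel speckle:

```python
import numpy as np

# 8-neighbor offsets (dy, dx), one output channel per neighbor
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def mask_to_connectivity(mask):
    """Encode a binary (H, W) mask as an (8, H, W) connectivity map.

    Channel c is 1 where the pixel and its c-th neighbor are both
    foreground; out-of-bounds neighbors count as background.
    """
    H, W = mask.shape
    conn = np.zeros((8, H, W), dtype=np.uint8)
    for c, (dy, dx) in enumerate(OFFSETS):
        # shifted[y, x] = mask[y + dy, x + dx], zero-padded at the borders
        shifted = np.zeros_like(mask)
        dst_y = slice(max(0, -dy), H - max(0, dy))
        dst_x = slice(max(0, -dx), W - max(0, dx))
        src_y = slice(max(0, dy), H + min(0, dy))
        src_x = slice(max(0, dx), W + min(0, dx))
        shifted[dst_y, dst_x] = mask[src_y, src_x]
        conn[c] = mask & shifted
    return conn

def connectivity_to_mask(conn):
    """Decode: a pixel is foreground if connected to at least one neighbor."""
    return (conn.max(axis=0) > 0).astype(np.uint8)

# Demo: a 2x2 foreground block survives the round trip
m = np.zeros((5, 5), dtype=np.uint8)
m[1:3, 1:3] = 1
conn = mask_to_connectivity(m)
print(conn.shape)  # (8, 5, 5)
```

In a connectivity-based network, the 8 channels become the prediction targets of the final layer; because each target couples neighboring pixels, the decoded segmentation tends to avoid the topological artifacts (holes, stray pixels) that plague independent pixel-wise classification.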
Cell-level quantitative features of retinal ganglion cells (GCs) are potentially important biomarkers for improved diagnosis and treatment monitoring of neurodegenerative diseases such as glaucoma, Parkinson's disease, and Alzheimer's disease. Yet, due to limited resolution, individual GCs cannot be visualized by commonly used ophthalmic imaging systems, including optical coherence tomography (OCT), so assessment is limited to gross layer-thickness analysis. Adaptive optics OCT (AO-OCT) enables in vivo imaging of individual retinal GCs. We present an automated method, WeakGCSeg, for segmenting GC layer (GCL) somas from AO-OCT volumes based on weakly supervised deep learning, which effectively exploits weak annotations during training. Experimental results show that WeakGCSeg is on par with or superior to human experts and is superior to other state-of-the-art networks. The automated quantitative features of individual GCL somas show a stronger structure–function correlation in glaucoma subjects than thickness measures from OCT images. Our results suggest that, by automatically quantifying GC morphology, WeakGCSeg can alleviate a major bottleneck in using AO-OCT for vision research.