The task of blood vessel segmentation in microscopy images is crucial for many diagnostic and research applications. However, vessels can look vastly different depending on the transient imaging conditions, and collecting data for supervised training is laborious. We present a novel deep learning method for unsupervised segmentation of blood vessels. The method is inspired by the field of active contours, and we introduce a new loss term based on the morphological Active Contours Without Edges (ACWE) optimization method. The role of the morphological operators is played by novel pooling layers that are incorporated into the network's architecture. We demonstrate the challenges faced by previous supervised learning solutions when the imaging conditions shift. Our unsupervised method outperforms such previous methods both on the labeled dataset and when applied to similar but different datasets. Our code, as well as efficient PyTorch reimplementations of the baseline methods VesselNN and DeepVess, is available on GitHub at https://github.com/shirgur/UMIS.
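The morphological operators underlying ACWE can be expressed with standard pooling layers: grayscale dilation is a stride-1 max-pool, and erosion is a min-pool, obtainable as a negated max-pool of the negated input. The sketch below illustrates this correspondence in PyTorch; the function names, kernel size, and the simplified opening/closing smoothing step (a stand-in for the full sup-inf/inf-sup curvature operators of morphological ACWE) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: morphological operators as pooling layers.
# Not the paper's actual code; names and kernel sizes are assumptions.
import torch
import torch.nn.functional as F

def dilation2d(x, kernel_size=3):
    """Grayscale dilation: stride-1 max-pool with 'same' padding."""
    pad = kernel_size // 2
    return F.max_pool2d(x, kernel_size, stride=1, padding=pad)

def erosion2d(x, kernel_size=3):
    """Grayscale erosion: min-pool, implemented as -maxpool(-x)."""
    return -dilation2d(-x, kernel_size)

def smoothing_step(u, iters=1):
    """Simplified ACWE-style smoothing: alternate morphological
    closing (dilation then erosion) and opening (erosion then dilation)."""
    for _ in range(iters):
        u = erosion2d(dilation2d(u))  # closing
        u = dilation2d(erosion2d(u))  # opening
    return u

x = torch.rand(1, 1, 64, 64)
y = smoothing_step(x)
print(y.shape)  # torch.Size([1, 1, 64, 64])
```

Because both operators reduce to pooling, they are differentiable almost everywhere and can sit directly inside a loss term evaluated on the network's soft segmentation map.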
Significance:
rPySight provides a flexible and highly customizable open-source software platform built around a powerful multichannel digitizer, together enabling complex photon-counting-based experiments. We exploited advanced programming techniques to share the photon-counting stream with the graphics processing unit (GPU), making real-time display of two-dimensional (2D) and three-dimensional (3D) experiments possible and paving the road for other real-time applications.
Aim:
Photon counting improves multiphoton imaging by providing a better signal-to-noise ratio in photon-deprived applications and is becoming more widely implemented, as indicated by its increasing presence in microscopy vendors' portfolios. Despite the relatively easy access to this technology in commercial systems, these remain limited to one or two data channels and may not enable highly tailored experiments, forcing many researchers to develop their own electronics and code. We set out to develop a flexible, open-source interface to a cutting-edge multichannel fast digitizer that can be easily integrated into existing imaging systems.
Approach:
We selected an advanced multichannel digitizer capable of generating 70M tags/s and wrote an open-source software application, based on the Rust and Python languages, to share the stream of detected events with the GPU, enabling real-time data processing.
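The core step in turning a stream of photon time tags into an image is histogramming: each tag is assigned to a scan line via the most recent line-sync event and to a pixel via its offset within that line. The sketch below illustrates this binning on the CPU with NumPy; the function name, clock units, and flyback handling are illustrative assumptions, not rPySight's actual GPU streaming implementation.

```python
# Illustrative sketch of photon-tag histogramming; not rPySight's code.
import numpy as np

def bin_photons(photon_tags, line_starts, line_period, n_pixels):
    """Accumulate photon time tags into a (lines x pixels) image.

    photon_tags: sorted photon arrival times (clock ticks)
    line_starts: sorted start times of each scan line (clock ticks)
    line_period: duration of one line's pixel region (ticks)
    n_pixels:    number of pixels per line
    """
    # Index of the line each photon belongs to (last line start <= tag).
    line_idx = np.searchsorted(line_starts, photon_tags, side="right") - 1
    valid = line_idx >= 0
    line_idx = line_idx[valid]
    # Offset within the line, scaled to a pixel index.
    offset = photon_tags[valid] - line_starts[line_idx]
    pix_idx = (offset * n_pixels // line_period).astype(int)
    in_line = pix_idx < n_pixels  # drop photons arriving during flyback
    image = np.zeros((len(line_starts), n_pixels), dtype=np.int64)
    np.add.at(image, (line_idx[in_line], pix_idx[in_line]), 1)
    return image
```

In a real-time system this per-event scatter-add is exactly the operation that parallelizes well on a GPU, since each photon can be binned independently.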
Results:
rPySight functionality was showcased in real-time monitoring of 2D imaging, improved calcium imaging, multiplexing, and 3D imaging through a varifocal lens. We provide a detailed protocol for out-of-the-box implementation of rPySight and its related hardware.
Conclusions:
Photon counting is becoming a fundamental component of recent technical developments that push well beyond the acquisition-speed limitations of classical multiphoton approaches. Given the performance of rPySight, we foresee its use to capture, among others, the joint dynamics of hundreds (if not thousands) of neuronal and vascular elements across volumes, as is likely required to uncover the hemodynamic transfer function in a much broader sense.
Imaging increasingly large neuronal populations at high rates has pushed multi-photon microscopy into the photon-deprived regime. We present PySight, an add-on hardware and software solution tailored for photon-deprived imaging conditions. PySight more than triples the median amplitude of neuronal calcium transients in awake mice, and facilitates single-trial intravital voltage imaging in fruit flies. Its unique data streaming architecture allowed us to image a fruit fly's olfactory response over 234×600×330 µm³ at 73 volumes per second, outperforming top-tier imaging setups while retaining over 200 times lower data rates. PySight requires no electronics expertise or custom synchronization boards, and its open-source software is extensible to any imaging method based on single-pixel (bucket) detectors. PySight offers an optimal data acquisition scheme for ever-increasing imaging volumes of turbid living tissue.
Recently developed methods for rapid continuous volumetric two-photon microscopy facilitate the observation of neuronal activity in hundreds of individual neurons and changes in blood flow in adjacent blood vessels across a large volume of living brain at unprecedented spatio-temporal resolution. However, the high imaging rate necessitates fully automated image analysis, whereas tissue turbidity and phototoxicity limitations lead to extremely sparse and noisy imagery. In this work, we extend a recently proposed deep learning volumetric blood vessel segmentation network so that it supports temporal analysis. With this technology, we are able to track changes in cerebral blood volume over time and identify spontaneous arterial dilations that propagate towards the pial surface. This new capability is a promising step towards characterizing the hemodynamic response function upon which functional magnetic resonance imaging (fMRI) is based.