We present a new acquisition method that enables high-resolution, fine-detail reconstruction of the three-dimensional movement and structure of individual human sperm cells swimming freely. We achieve both retrieval of the three-dimensional refractive-index profile of the sperm head, revealing its fine internal organelles and time-varying orientation, and detailed four-dimensional localization of the thin, highly dynamic flagellum of the sperm cell. Live human sperm cells were acquired during free swim using a high-speed off-axis holographic system that requires neither moving elements nor cell staining. The reconstruction is based solely on the natural movement of the sperm cell and a novel set of algorithms, enabling the detailed four-dimensional recovery. Using this refractive-index imaging approach, we believe we have detected an area in the cell that is attributable to the centriole. This method has great potential for both biological assays and clinical use of intact sperm cells.
Many medical and biological protocols for analyzing individual biological cells involve morphological evaluation based on cell staining, designed to enhance imaging contrast and enable clinicians and biologists to differentiate between various cell organelles. However, cell staining is not allowed in certain medical procedures. In other cases, staining may be time-consuming or expensive to implement. Furthermore, staining protocols may be operator-sensitive, and hence lead to varying analytical results across users, as well as introduce imaging artifacts or false heterogeneity. Here, we present a new deep-learning approach, called HoloStain, which converts images of isolated biological cells acquired without staining by holographic microscopy into virtually stained images. We demonstrate this approach for human sperm cells, as there is a well-established protocol and global standardization for characterizing the morphology of stained human sperm cells for fertility evaluation, but, on the other hand, staining might be cytotoxic and thus is not allowed during human in vitro fertilization (IVF). We use deep convolutional Generative Adversarial Networks (DCGANs), trained on both the quantitative phase images and two gradient phase images, all extracted from the digital holograms of the stain-free cells, with bright-field images of the same cells after subsequent chemical staining serving as the ground truth. After the training stage, the deep neural network can take images of unseen sperm cells, retrieved from the coinciding holograms acquired without staining, and convert them to their stain-like images. To validate the quality of our virtual staining approach, an experienced embryologist analyzed the unstained cells, the virtually stained cells, and the chemically stained sperm cells several times in a blinded and randomized manner.
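The network input described above is the quantitative phase map together with its two spatial gradient images. A minimal sketch of assembling that 3-channel input, assuming the phase map has already been reconstructed from the hologram (the paper's exact preprocessing, normalization, and gradient operator may differ):

```python
import numpy as np

def make_holostain_input(phase_map):
    """Stack a quantitative phase map with its two spatial gradients
    into a 3-channel image, the kind of input the abstract describes.
    This is an illustrative sketch, not the authors' exact pipeline."""
    gy, gx = np.gradient(phase_map)            # vertical / horizontal phase gradients
    return np.stack([phase_map, gx, gy], axis=-1)

# Hypothetical 64x64 phase map standing in for one extracted from a hologram
phase = np.random.default_rng(1).random((64, 64))
x = make_holostain_input(phase)
assert x.shape == (64, 64, 3)                  # three channels: phase, d/dx, d/dy
```

Feeding the gradients alongside the raw phase gives the generator explicit edge information, which is plausibly why the authors include them rather than the phase map alone.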
We obtained a 5-fold recall (sensitivity) improvement in the analysis results, demonstrating the advantage of using virtual staining for sperm cell analysis. With the introduction of simple holographic imaging methods in clinical settings, the proposed method has great potential to become common practice in human IVF procedures, as well as to significantly simplify and facilitate other cell analyses and techniques such as imaging flow cytometry.
The use of photo-activated fluorescent molecules to create long sequences of low emitter-density diffraction-limited images enables high-precision emitter localization, but at the cost of low temporal resolution. We suggest combining SPARCOM, a recent high-performing classical method, with model-based deep learning, using the algorithm unfolding approach, to design a compact neural network incorporating domain knowledge. Our results show that we can obtain super-resolution imaging from a small number of high emitter density frames without knowledge of the optical system and across different test sets using the proposed learned SPARCOM (LSPARCOM) network. We believe LSPARCOM can pave the way to interpretable, efficient live-cell imaging in many settings, and find broad use in single molecule localization microscopy of biological structures.
We present a deep-learning approach for solving the problem of 2π phase ambiguities in two-dimensional quantitative phase maps of biological cells, using a multi-layer encoder-decoder residual convolutional neural network. We test the trained network, PhUn-Net, on various types of biological cells, captured with various interferometric setups, as well as on simulated phantoms. These tests demonstrate the robustness and generality of the network, even for cells with morphologies or illumination conditions different from those PhUn-Net was trained on. In this paper, for the first time, we make the trained network publicly available in a global format, such that it can be easily deployed on every platform, to yield fast and robust phase unwrapping that requires no prior knowledge or complex implementation. We therefore expect our phase unwrapping approach to be widely used, substituting for conventional and more time-consuming phase unwrapping algorithms.
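The 2π ambiguity arises because interferometry measures phase only modulo 2π; unwrapping must restore the missing integer multiples of 2π. A minimal 1D sketch of the problem and of the classical unwrapping step that PhUn-Net replaces (in 2D, where simple path-following breaks down under noise):

```python
import numpy as np

# True (unwrapped) phase ramp exceeding 2*pi, as in optically thick cells
true_phase = np.linspace(0.0, 6.0 * np.pi, 200)

# Interferometric measurement yields only the wrapped phase in (-pi, pi]
wrapped = np.angle(np.exp(1j * true_phase))

# Classical unwrapping: add 2*pi*k whenever a sample-to-sample jump
# exceeds pi. PhUn-Net instead learns to predict the unwrapped 2D map.
unwrapped = np.unwrap(wrapped)

assert np.allclose(unwrapped, true_phase, atol=1e-8)
```

In 2D and with noisy cell data the jump-detection heuristic can propagate errors along the integration path, which is the failure mode a learned unwrapper is meant to avoid.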
We present a multidisciplinary approach for predicting how sperm cells with various morphologies swim in three-dimensions (3D), from milliseconds to much longer time scales at spatial resolutions of less than half a micron. We created the sperm 3D geometry and built a numerical mechanical model using the experimentally acquired dynamic 3D refractive-index profiles of sperm cells swimming in vitro as imaged by high-resolution optical diffraction tomography. By controlling parameters in the model, such as the size and shape of the sperm head and tail, we can then predict how different sperm cells, normal or abnormal, would swim in 3D, in the short or long term. We quantified various 3D structural factor effects on the sperm long-term motility. We found that some abnormal sperm cells swim faster than normal sperm cells, in contrast to the commonly used sperm selection assumption during in vitro fertilization (IVF), according to which sperm cells should mainly be chosen based on their progressive motion. We thus establish a new tool for sperm analysis and male-infertility diagnosis, as well as sperm selection criteria for fertility treatments.
We present a novel high-density single molecule localization microscopy technique, which combines a classical compressed sensing method with deep learning through an algorithm unfolding procedure, yielding a compact and robust neural network that incorporates domain knowledge.
The use of photo-activated fluorescent molecules to create long sequences of low emitter-density diffraction-limited images enables high-precision emitter localization. However, this is achieved at the cost of lengthy imaging times, limiting temporal resolution. In recent years, a variety of approaches have been suggested to reduce imaging times, ranging from classical optimization and statistical algorithms to deep learning methods. Classical methods often rely on prior knowledge of the optical system, require heuristic adjustment of parameters, or fail to reach sufficient performance. Deep learning methods proposed to date tend to suffer from poor generalization outside the specific distribution they were trained on, require learning many parameters, and tend to yield black-box solutions that are hard to interpret. In this paper, we suggest combining a recent high-performing classical method, SPARCOM, with model-based deep learning, using the algorithm unfolding approach, which unrolls an iterative algorithm into a compact neural network that incorporates domain knowledge. We show that the resulting network, Learned SPARCOM (LSPARCOM), requires far fewer layers and parameters, and can be trained on a single field of view. Nonetheless, it yields results comparable or superior to those obtained by SPARCOM, with no heuristic parameter determination or explicit knowledge of the point spread function, and generalizes better than standard deep learning techniques. It even allows producing a high-quality reconstruction from as few as 25 frames. This is due to a significantly smaller network, which also contributes to fast performance: a 5× improvement in execution time relative to SPARCOM, and a full order-of-magnitude improvement relative to a leading competing deep learning method (Deep-STORM) when implemented serially.
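Algorithm unfolding treats each iteration of a sparse-recovery solver as one network layer whose weights can then be learned. A toy sketch of the idea behind LSPARCOM, using unrolled ISTA on a generic sparse problem y = Ax (in LSPARCOM the operator encodes the point spread function, and the weights We, Wt and threshold theta are learned rather than fixed as below; names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(v, theta):
    """Proximal operator of the l1 norm; the per-layer nonlinearity."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Toy sparse-recovery problem standing in for emitter localization:
# x_true is a sparse emitter vector, A a random measurement operator.
n, m, k_sparse = 100, 60, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k_sparse, replace=False)] = 1.0
y = A @ x_true

# Unfolded ISTA: each loop iteration corresponds to one network layer.
# Here the weights take their classical ISTA values; unfolding makes
# them learnable, which is what shrinks the network in LSPARCOM.
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
We = A.T / L                            # "encoder" weights
Wt = np.eye(n) - (A.T @ A) / L          # "lateral" weights
theta = 0.01 / L                        # shrinkage threshold

x = np.zeros(n)
for _ in range(200):                    # 200 unrolled layers / iterations
    x = soft_threshold(We @ y + Wt @ x, theta)

# The largest entries of the recovered x sit on the true emitter support
```

Because the layer structure mirrors the iterative algorithm, the network needs only the few parameters the algorithm itself has, which is what enables training on a single field of view and the interpretability claimed in the abstract.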
Our results show that we can obtain super-resolution imaging from a small number of high emitter density frames without knowledge of the optical system and across different test sets. Thus, we believe LSPARCOM will find broad use in single molecule localization microscopy of biological structures, and pave the way to interpretable, efficient live-cell imaging in a broad range of settings.