Abstract: This article provides an overview of efforts to advance computational microscopy and optical sensing systems for microscopy using deep neural networks. It first reviews the basics of inverse problems in optical microscopy and outlines how deep learning, typically in a supervised setting, provides a framework for solving them. It then discusses the use of deep learning for single-image super-resolution and image enhancement in such data sets.
“…Minimizing (7) with the new expression for f(x) can be performed as shown in Algorithm 1, with only the inputs changed. According to the gradient of (14), v becomes:…”
Section: Algorithm 2 LSPARCOM At Inference Time
mentioning
confidence: 99%
“…The past decade has seen an explosion in the use of deep learning algorithms [13][14][15][16] across all areas of science. It is thus natural to consider whether temporal resolution can be improved using deep learning techniques.…”
The use of photo-activated fluorescent molecules to create long sequences of low-emitter-density, diffraction-limited images enables high-precision emitter localization. However, this is achieved at the cost of lengthy imaging times, limiting temporal resolution. In recent years, a variety of approaches have been suggested to reduce imaging times, ranging from classical optimization and statistical algorithms to deep learning methods. Classical methods often rely on prior knowledge of the optical system and require heuristic adjustment of parameters, or do not achieve sufficient performance. Deep learning methods proposed to date tend to suffer from poor generalization outside the specific distribution they were trained on and require learning many parameters; they also tend to produce black-box solutions that are hard to interpret. In this paper, we suggest combining a recent high-performing classical method, SPARCOM, with model-based deep learning via the algorithm-unfolding approach, in which an iterative algorithm is used to design a compact neural network that incorporates domain knowledge. We show that the resulting network, Learned SPARCOM (LSPARCOM), requires far fewer layers and parameters and can be trained on a single field of view. Nonetheless, it yields results comparable or superior to those of SPARCOM, with no heuristic parameter determination or explicit knowledge of the point spread function, and generalizes better than standard deep learning techniques. It even produces a high-quality reconstruction from as few as 25 frames. This is due to a significantly smaller network, which also contributes to fast performance: a 5× improvement in execution time relative to SPARCOM, and a full order-of-magnitude improvement relative to a leading competing deep learning method (Deep-STORM) when implemented serially. Our results show that we can obtain super-resolution imaging from a small number of high-emitter-density frames without knowledge of the optical system and across different test sets. Thus, we believe LSPARCOM will find broad use in single-molecule localization microscopy of biological structures and pave the way to interpretable, efficient live-cell imaging in a broad range of settings.
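The algorithm-unfolding idea described in this abstract can be illustrated with a short sketch. The snippet below is a generic LISTA-style unfolded network in PyTorch, not the LSPARCOM architecture itself: each layer applies one learned soft-thresholded gradient step, so the iteration count of the classical algorithm becomes the (small) layer count of the network. The class and parameter names (UnfoldedISTA, W_e, W_s, theta) are illustrative assumptions.

```python
# A minimal sketch of algorithm unfolding: a classical ISTA-like iteration
# x_{k+1} = soft(W_e @ y + W_s @ x_k, theta) is unrolled into a fixed number
# of layers whose matrices and thresholds are learned from data.
# This is NOT the LSPARCOM architecture; names and sizes are illustrative.
import torch
import torch.nn as nn


def soft_threshold(x, theta):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return torch.sign(x) * torch.clamp(torch.abs(x) - theta, min=0.0)


class UnfoldedISTA(nn.Module):
    def __init__(self, meas_dim, signal_dim, num_layers=10):
        super().__init__()
        self.num_layers = num_layers
        # Learned analogues of the matrices a classical iteration would
        # derive from the measurement model (e.g. the PSF).
        self.W_e = nn.Linear(meas_dim, signal_dim, bias=False)    # input transform
        self.W_s = nn.Linear(signal_dim, signal_dim, bias=False)  # state transform
        # One learned threshold per layer.
        self.theta = nn.Parameter(torch.full((num_layers,), 0.1))

    def forward(self, y):
        b = self.W_e(y)
        x = torch.zeros(y.shape[0], self.W_e.out_features, device=y.device)
        for k in range(self.num_layers):
            x = soft_threshold(b + self.W_s(x), self.theta[k])
        return x
```

Training such a network end-to-end on pairs of low-resolution inputs and localization maps is what replaces the heuristic regularization parameter and explicit PSF knowledge required by the classical iterative method, which is the trade-off the abstract highlights.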
“…Convolutional neural network (CNN) and deep learning approaches have been proposed for several optical applications. Examples include virtual staining of non-stained samples [33], increasing spatial resolution in a large field of view in optical microscopy [34,35], color holographic microscopy with CNN [36], autofocusing and enhancing the depth-of-field in inline holography [37], lens-less computational imaging by deep learning [38], single-cell-based reconstruction distance estimation by a regression CNN model [39], super-resolution fringe patterns by deep learning holography [40], virtual refocusing in fluorescence microscopy to map 2D images to a 3D surface [41], and several other studies [42][43][44]. Deep-learning based phase recovery by a residual CNN model was also suggested [45], but the application is limited because the reference noise-free phase images for the deep-learning model are generated by the multi-height phase retrieval approach (8 holograms are recorded at different sample-to-sensor distances).…”
Section: Proposed Deep Learning Model For Phase Recovery
mentioning
This paper shows that deep learning can eliminate the superimposed twin-image noise in phase images from a Gabor holographic setup. This is achieved by a conditional generative adversarial model (C-GAN), trained on input-output pairs of noisy phase images obtained from synthetic Gabor holography and the corresponding quantitative, noise-free contrast-phase images obtained by off-axis digital holography. To train the model, Gabor holograms are generated from digital off-axis holograms by spatially shifting the real image and twin image in the frequency domain and then adding them to the DC term in the spatial domain. Finally, digital propagation of the Gabor hologram with the Fresnel approximation generates a superimposed phase image for the C-GAN model input. Two models were trained: a human red blood cell model and an elliptical cancer cell model. Following training, several quantitative analyses were conducted on the biochemical properties and the similarity between actual noise-free phase images and the model output. Surprisingly, it was discovered that the model can recover other elliptical cell lines that were not observed during training. Additionally, some misalignments can be compensated for by the trained model; in particular, if the reconstruction distance is somewhat incorrect, the model can still retrieve in-focus images.
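As a rough illustration of the digital propagation step described above, the sketch below performs numerical Fresnel (transfer-function) propagation of a Gabor hologram with NumPy FFTs and takes the phase of the result as the twin-image-contaminated input a C-GAN of this kind would receive. The function name and the wavelength, distance, and pixel-size values are illustrative assumptions, not the authors' code.

```python
# A minimal sketch of Fresnel propagation in the spatial-frequency domain,
# used here to turn a (placeholder) Gabor hologram into a superimposed
# phase image. Parameter values are illustrative assumptions.
import numpy as np


def fresnel_propagate(field, wavelength, dz, pixel_size):
    """Propagate a complex field by distance dz using the paraxial
    (Fresnel) transfer function applied via FFT."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel transfer function H(fx, fy; dz).
    H = np.exp(1j * 2 * np.pi * dz / wavelength) * \
        np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)


# Placeholder intensity hologram; treat sqrt(intensity) as the field.
hologram = np.random.rand(512, 512)
rec = fresnel_propagate(np.sqrt(hologram),
                        wavelength=532e-9, dz=-5e-3, pixel_size=3.45e-6)
phase = np.angle(rec)  # superimposed (twin-image-corrupted) phase image
```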
“…Key among these is the difficulty in relating the abstract accuracy metrics used to score FRM to the practical value of FRM data for actual, quotidian biological analyses such as cell counting or morphological characterization. To better appreciate this, consider first that the quality of FRM is typically assessed using a single numerical metric (P) such as the Mean-Squared-Error or Pearson's Correlation Coefficient that typically range from (0,1) or (-1,1), and second that it is practically impossible to actually reach perfection (P = 1). P can be increased closer to 1 either by training with more images, or by using higher resolution magnification (e.g.…”
Section: Introduction
mentioning
confidence: 99%
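For concreteness, the single-number benchmarks (P) discussed in the quoted passage, mean-squared error and Pearson's correlation coefficient, can be computed as in the short sketch below; the arrays and their contents are placeholders, not data from the paper.

```python
# A minimal sketch of the single-score FRM benchmarks the passage refers to.
# Inputs are placeholder arrays; this is not the authors' evaluation code.
import numpy as np


def mse(pred, target):
    """Mean-squared error between prediction and ground truth."""
    return float(np.mean((pred - target) ** 2))


def pearson_r(pred, target):
    """Pearson's correlation coefficient over all pixels."""
    p = pred.ravel() - pred.mean()
    t = target.ravel() - target.mean()
    return float(np.sum(p * t) / (np.linalg.norm(p) * np.linalg.norm(t)))


pred = np.random.rand(256, 256)    # placeholder network prediction
target = np.random.rand(256, 256)  # placeholder ground-truth fluorescence image
print(mse(pred, target), pearson_r(pred, target))  # in practice P never reaches 1
```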
“…The U-Net itself is commonly used in machine learning approaches because it is a lightweight convolutional neural network (CNN) which readily captures information at multiple spatial scales within an image, thereby preserving reconstruction accuracy while reducing the required number of training samples and training time. U-Nets, and related deep learning approaches, have found broad application to live-cell imaging tasks such as cell phenotype classification, feature segmentation [10], [14][15][16][17][18][19], and histological stain analysis [20][21][22][23].…”
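The multi-scale behaviour attributed to the U-Net in the quote above comes from its encoder-decoder structure with skip connections. The sketch below is a deliberately tiny U-Net-style network in PyTorch; channel counts, depth, and input size are illustrative assumptions and do not reproduce the architecture used in the cited work.

```python
# A minimal U-Net-style sketch: features are pooled to coarser scales and
# then upsampled, with same-scale feature maps concatenated (skip
# connections) so the output keeps both fine detail and context.
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, out_ch, 1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)


# Transmitted-light input -> predicted fluorescence output (shape check only).
y = TinyUNet()(torch.zeros(1, 1, 128, 128))
print(y.shape)  # torch.Size([1, 1, 128, 128])
```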
Fluorescence reconstruction microscopy (FRM) is an approach in which transmitted-light images are passed into a convolutional neural network that outputs predicted epifluorescence images. This approach offers many benefits, including reduced phototoxicity, freeing up of fluorescence channels, simplified sample preparation, and the ability to re-process legacy data for new insights. However, current FRM benchmarks are single scores that are difficult to relate to how useful or trustworthy an FRM prediction is. Here, we relate the conventional benchmark to practical and familiar cell-biology analyses to demonstrate that FRM should be judged in context. We further demonstrate that it performs remarkably well even with lower-magnification microscopy data, as are often collected in high-content imaging. Specifically, we present promising results for nuclei, cell-cell junctions, and fine-feature reconstruction; provide experimental design guidelines; and provide the code, sample data, and user manual to enable more widespread adoption of FRM.
Scaling factor: S = Scaling_B / Scaling_A, e.g. S(Zeiss→Nikon) = 1.4.