2021
DOI: 10.1038/s41377-021-00506-9

Recurrent neural network-based volumetric fluorescence microscopy

Abstract: Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields including physical, medical and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images that are sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within t…
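The abstract describes a recurrent convolutional network that fuses a small number of 2D wide-field images, acquired at arbitrary axial positions, into a volumetric reconstruction. The paper's actual Recurrent-MZ architecture is not reproduced here; the following is only a minimal PyTorch sketch of that general idea (a per-plane convolutional encoder, a convolutional GRU that accumulates information across the sparse plane sequence, and a decoder queried at requested depths). All module names, channel counts, and the axial-position encoding are illustrative assumptions.

```python
# Minimal sketch (not the authors' Recurrent-MZ implementation): encode each
# sparse 2D plane, aggregate the sequence with a convolutional GRU, and decode
# an output slice for every requested target depth.
import torch
import torch.nn as nn


class ConvGRUCell(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Update and reset gates computed from the concatenated input and hidden state.
        self.gates = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)
        self.cand = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde


class RecurrentVolumeNet(nn.Module):
    def __init__(self, feat: int = 32):
        super().__init__()
        self.feat = feat
        # Input: one grayscale plane plus one channel encoding its axial position.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.gru = ConvGRUCell(feat)
        # Decoder maps the aggregated state plus a target-depth channel to one slice.
        self.decoder = nn.Sequential(
            nn.Conv2d(feat + 1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 1, 3, padding=1),
        )

    def forward(self, planes, plane_z, target_z):
        # planes: (B, N, 1, H, W) sparse input planes; plane_z: (B, N) their depths;
        # target_z: (B, M) depths at which output slices are requested.
        b, n, _, h_px, w_px = planes.shape
        h = planes.new_zeros(b, self.feat, h_px, w_px)
        for i in range(n):
            z_map = plane_z[:, i].view(b, 1, 1, 1).expand(b, 1, h_px, w_px)
            h = self.gru(self.encoder(torch.cat([planes[:, i], z_map], dim=1)), h)
        slices = []
        for j in range(target_z.shape[1]):
            z_map = target_z[:, j].view(b, 1, 1, 1).expand(b, 1, h_px, w_px)
            slices.append(self.decoder(torch.cat([h, z_map], dim=1)))
        return torch.stack(slices, dim=1)  # (B, M, 1, H, W) reconstructed volume


# Example: infer a 16-slice volume from 3 sparsely sampled planes (dummy data).
net = RecurrentVolumeNet()
vol = net(torch.rand(1, 3, 1, 64, 64), torch.rand(1, 3),
          torch.linspace(0, 1, 16).unsqueeze(0))
print(vol.shape)  # torch.Size([1, 16, 1, 64, 64])
```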

Cited by 38 publications (29 citation statements: 1 supporting, 28 mentioning, 0 contrasting)
References 82 publications

“…We can see that the model trained on the original data (without downsampling) works well for 2x downsampling, and is slightly degraded for 4x downsampled z-stacks. The spacing of the 4x downsampled images along the z-axis is comparable to the spacing Huang et al. demonstrated [33], but we show good results on samples with much more complicated structures and unwanted backgrounds. The degradation is mainly caused by the gap between the captured input layers and the interpolated layers.…”
Section: PSF-based Registrations (supporting)
Confidence: 81%
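The statement above compares a model evaluated on 2x and 4x axially downsampled z-stacks. As a purely illustrative sketch (not code from the cited work), axial downsampling by a factor k amounts to keeping every k-th plane along z:

```python
# Illustrative sketch: axially downsample a z-stack by keeping every k-th plane,
# as in the 2x / 4x evaluation described above. Dummy data, assumed (z, y, x) layout.
import numpy as np

stack = np.random.rand(64, 512, 512)   # full-resolution z-stack

down_2x = stack[::2]   # every 2nd plane -> 32 planes, doubled z-spacing
down_4x = stack[::4]   # every 4th plane -> 16 planes, quadrupled z-spacing

print(down_2x.shape, down_4x.shape)    # (32, 512, 512) (16, 512, 512)
```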
“…Over the last few years, researchers have developed early examples of deep convolutional neural networks to enhance the axial resolution and imaging contrast of wide-field images [29-33]. Specifically, Zhang et al. [29] first successfully transformed wide-field images into background-reduced Structured Illumination Microscopy (SIM) images using a 2D neural network.…”
Section: Introduction (mentioning)
Confidence: 99%
“…However, current microscopy techniques usually image a single plane at a time, with three-dimensional (3D) imaging obtained by moving the focal plane relative to the specimen, as in confocal [1,2], structured illumination [3-6], and light-sheet microscopy [7-9]. To address this problem, a few emerging imaging techniques [10-15] aimed at simultaneous 3D imaging have been developed recently. Among them, light-field microscopy (LFM) [16-18] provides an elegant, compact solution by capturing the excited volume simultaneously in a tomographic manner.…”
Section: Introduction (mentioning)
Confidence: 99%
“…However, the previous studies focused on structural imaging, leaving its utility in functional and quantitative imaging unexplored. In addition to Bessel-beam excitation, deep learning has also been used to address the out-of-focus issue associated with the Gaussian beam [15], [16]. In particular, conditional generative adversarial networks (cGANs) have been successfully applied to biomedical image analysis [17]-[19].…”
Section: Introduction (mentioning)
Confidence: 99%