We present an ultra-fast, precise, parameter-free method, which we term Deep-STORM, for obtaining super-resolution images from stochastically blinking emitters, such as fluorescent molecules used for localization microscopy. Deep-STORM uses a deep convolutional neural network that can be trained on simulated data or experimental measurements, both of which are demonstrated. The method achieves state-of-the-art resolution under challenging signal-to-noise conditions and high emitter densities, and is significantly faster than existing approaches. Additionally, no prior information on the shape of the underlying structure is required, making the method applicable to any blinking dataset. We validate our approach by super-resolution image reconstruction of simulated and experimentally obtained data.

arXiv:1801.09631v3 [physics.optics]
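The abstract notes that the network can be trained on simulated data. A minimal sketch of how such training frames might be generated — randomly placed point emitters convolved with a Gaussian approximation of the PSF, plus Poisson shot noise — is shown below. The function name, parameter values, and the Gaussian PSF model are illustrative assumptions, not the paper's actual simulation pipeline.

```python
import numpy as np

def simulate_frame(img_size=64, n_emitters=8, psf_sigma=1.2,
                   photons=1000.0, bg=10.0, seed=None):
    """Render one noisy frame of randomly placed blinking emitters.

    Hypothetical sketch: the returned emitter positions serve as ground
    truth, and the frame is those positions blurred by a Gaussian PSF
    with background and Poisson (shot) noise added.
    """
    rng = np.random.default_rng(seed)
    # Ground-truth continuous emitter positions (row, col), kept away
    # from the image border so each spot fits in the frame.
    pos = rng.uniform(2, img_size - 2, size=(n_emitters, 2))
    yy, xx = np.mgrid[0:img_size, 0:img_size]
    frame = np.full((img_size, img_size), bg)
    for r, c in pos:
        # Gaussian approximation of the microscope PSF.
        frame += photons * np.exp(-((yy - r) ** 2 + (xx - c) ** 2)
                                  / (2 * psf_sigma ** 2))
    # Photon counting is Poisson-distributed around the noiseless frame.
    noisy = rng.poisson(frame).astype(float)
    return noisy, pos
```

Pairs of `(noisy, pos)` like these, at varying emitter densities and photon counts, are the kind of input/label data a localization network could be trained on.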
Deep Learning (DL) methods are powerful analytical tools for microscopy and can outperform conventional image processing pipelines. Despite the enthusiasm and innovations fuelled by DL technology, the need to access powerful and compatible computational resources to train DL networks creates an accessibility barrier that novice users often find difficult to overcome. Here, we present ZeroCostDL4Mic, an entry-level platform simplifying DL access by leveraging the free, cloud-based computational resources of Google Colab. ZeroCostDL4Mic allows researchers with no coding expertise to train and apply key DL networks to perform tasks including segmentation (using U-Net and StarDist), object detection (using YOLOv2), denoising (using CARE and Noise2Void), super-resolution microscopy (using Deep-STORM), and image-to-image translation (using Label-free prediction (fnet), pix2pix, and CycleGAN). Importantly, we provide suitable quantitative tools for each network to evaluate model performance, allowing model optimisation. We demonstrate the application of the platform to study multiple biological processes.
Localization microscopy is an imaging technique in which the positions of individual point emitters (e.g. fluorescent molecules) are precisely determined from their images. This is a key ingredient in single- and multiple-particle tracking and in super-resolution microscopy. Localization in three dimensions (3D) can be performed by modifying the image that a point source creates on the camera, namely, the point-spread function (PSF). Using additional optical elements, the PSF is engineered to vary distinctively with emitter depth. However, localizing multiple adjacent emitters in 3D poses a significant algorithmic challenge, due to the lateral overlap of their PSFs. Here, we train a neural network to localize multiple emitters with densely overlapping PSFs over a large axial range. Furthermore, we then use the network to design the optimal PSF for the multi-emitter case. We demonstrate our approach experimentally with super-resolution reconstructions of mitochondria and volumetric imaging of fluorescently labeled telomeres in cells.
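To illustrate what a depth-varying PSF means in practice, the sketch below uses simple astigmatism, a classic engineered PSF in which the spot's ellipticity encodes depth. This is only an illustrative stand-in: the paper's optimized multi-emitter PSF is learned by the network and is far more elaborate, and all function names and parameter values here are assumptions.

```python
import numpy as np

def astigmatic_psf(z, size=15, sigma0=1.0, c=0.5, d=1.0):
    """Toy astigmatic PSF: the spot is elongated along y above focus
    and along x below it, so the shape encodes the axial position z.
    (Illustrative model only; parameters are arbitrary units.)"""
    sx = sigma0 * np.sqrt(1 + ((z - c) / d) ** 2)
    sy = sigma0 * np.sqrt(1 + ((z + c) / d) ** 2)
    r = (size - 1) // 2
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    psf = np.exp(-xx ** 2 / (2 * sx ** 2) - yy ** 2 / (2 * sy ** 2))
    return psf / psf.sum()  # normalize to unit total intensity

def ellipticity(psf):
    """Ratio of x-variance to y-variance of the spot; its monotonic
    dependence on z is what lets an algorithm read out depth."""
    yy, xx = np.mgrid[0:psf.shape[0], 0:psf.shape[1]]
    mx = (psf * xx).sum()
    my = (psf * yy).sum()
    vx = (psf * (xx - mx) ** 2).sum()
    vy = (psf * (yy - my) ** 2).sum()
    return vx / vy
```

When two such spots from emitters at different depths overlap laterally, their intensity patterns mix, which is the multi-emitter ambiguity the network in the abstract is trained to resolve.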