Abstract: For more than a century, the wavelength of light was considered a fundamental limit on the spatial resolution of optical imaging. In light microscopy in particular, this limit, known as Abbe's diffraction limit, constrains the ability to image sub-cellular organelles at high resolution. However, modern microscopy techniques such as STED, PALM, and STORM manage to recover sub-wavelength information by relying on fluorescence imaging. Specifically, PALM/STORM acquire large sequen…
“…To circumvent the long acquisition periods required for SMLM methods, a variety of techniques have emerged, which enable the use of a smaller number of frames for reconstructing the 2-D super-resolved image [3][4][5][6][7][8][9]. These techniques take advantage of prior information regarding either the optical setup, the geometry of the sample, or the statistics of the emitters.…”
The use of photo-activated fluorescent molecules to create long sequences of low-emitter-density diffraction-limited images enables high-precision emitter localization. However, this is achieved at the cost of lengthy imaging times, limiting temporal resolution. In recent years, a variety of approaches have been suggested to reduce imaging times, ranging from classical optimization and statistical algorithms to deep learning methods. Classical methods often rely on prior knowledge of the optical system, require heuristic tuning of parameters, or do not achieve sufficient performance. Deep learning methods proposed to date tend to generalize poorly outside the specific distribution they were trained on, and require learning many parameters. They also tend to produce black-box solutions that are hard to interpret. In this paper, we suggest combining a recent high-performing classical method, SPARCOM, with model-based deep learning via the algorithm-unfolding approach, which uses the structure of an iterative algorithm to design a compact neural network that incorporates domain knowledge. We show that the resulting network, Learned SPARCOM (LSPARCOM), requires far fewer layers and parameters, and can be trained on a single field of view. Nonetheless, it yields results comparable or superior to those obtained by SPARCOM, with no heuristic parameter determination or explicit knowledge of the point spread function, and generalizes better than standard deep learning techniques. It even produces high-quality reconstructions from as few as 25 frames. This is due to a significantly smaller network, which also contributes to fast performance: a 5× improvement in execution time relative to SPARCOM, and a full order-of-magnitude improvement relative to a leading competing deep learning method (Deep-STORM) when implemented serially.
Our results show that we can obtain super-resolution imaging from a small number of high-emitter-density frames, without knowledge of the optical system and across different test sets. Thus, we believe LSPARCOM will find broad use in single-molecule localization microscopy of biological structures, and pave the way to interpretable, efficient live-cell imaging in a broad range of settings.
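The algorithm-unfolding idea in the abstract (turning the iterations of a sparse-recovery algorithm into the layers of a compact, trainable network) can be illustrated with a minimal LISTA-style sketch. This is a generic illustration, not the LSPARCOM architecture: the class name, the problem sizes, and the choice of ISTA as the base iteration are assumptions made for the example, and the weights are simply initialized to their classical ISTA values rather than learned.

```python
import numpy as np

def soft_threshold(v, theta):
    """Proximal operator of the L1 norm (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

class UnfoldedISTALayer:
    """One 'layer' of an unfolded ISTA network (LISTA-style).

    W1, W2 and theta play the roles of (1/L) A^T, (I - (1/L) A^T A)
    and the threshold lambda/L in classical ISTA, but in an unfolded
    network they would be free parameters learned from data.
    """
    def __init__(self, W1, W2, theta):
        self.W1, self.W2, self.theta = W1, W2, theta

    def forward(self, x, y):
        return soft_threshold(self.W1 @ y + self.W2 @ x, self.theta)

# Toy usage: a 3-sparse signal observed through a random sensing matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50); x_true[[3, 17, 41]] = [1.0, -0.5, 2.0]
y = A @ x_true

L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
W1, W2 = A.T / L, np.eye(50) - A.T @ A / L
layer = UnfoldedISTALayer(W1, W2, theta=0.1 / L)

x = np.zeros(50)
for _ in range(100):                      # K unfolded layers (shared weights)
    x = layer.forward(x, y)
```

In an actual unfolded network, `W1`, `W2`, and `theta` would be optimized by backpropagation over a fixed small number of layers, which is what lets such networks reach good reconstructions in far fewer steps than the classical iteration.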
“…For the same reason, we have access to the much higher density conditions in our characterization. PRIS also exhibits performance comparable to the 2D approaches [3,4,20], despite the fact that the 3D PSFs used in our characterizations impose an intrinsically more challenging recovery task than the 2D approaches, where the regular PSFs are much more compact. When the PSF spans a larger area (SPINDLE), we expect the overlapping region to increase, and the total photon budget to spread over a larger area, resulting in a lower SNR.…”
Section: Recovery With Astigmatic and Spindle PSFs With Single Plane (mentioning, confidence: 86%)
“…where x is a vector (which we dub the target vector) accounting for all possible signal sources, y is the observation vector corresponding to the observed image, A is the sensing matrix (observation matrix) describing the linear mapping from the signal sources to the observation, and the remaining term represents an additive noise component. The effect of omitting the Poisson noise is negligible, as demonstrated by existing compressive-sensing applications for SR microscopy [3,4,20,21]. The following L1-norm regularized sparse recovery solves for x when A and y are known, with the prior knowledge that x is sparse (has only a small fraction of non-zero entries):…”
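The role of the sensing matrix A in the quoted model can be made concrete: each column of A is the PSF placed at one candidate source position on the recovery grid, so that y = Ax sums the PSFs of all active emitters. The sketch below builds such a matrix for a toy Gaussian PSF; the function name and the choice of equal observation and recovery grids are assumptions for illustration (compressive-sensing SMLM typically recovers x on a finer grid than the observation).

```python
import numpy as np

def build_sensing_matrix(psf, grid_shape):
    """Build A whose columns are the (vectorised) PSF shifted to each
    candidate source position on the recovery grid.

    psf        -- 2-D array, the discretised point spread function
    grid_shape -- (H, W) of the grid; observation and recovery grids
                  coincide here for simplicity (no up-sampling).
    """
    H, W = grid_shape
    A = np.zeros((H * W, H * W))
    ph, pw = psf.shape
    ci, cj = ph // 2, pw // 2
    for i in range(H):
        for j in range(W):
            img = np.zeros((H, W))
            # place the PSF centred at (i, j), cropping at the borders
            for di in range(ph):
                for dj in range(pw):
                    r, c = i + di - ci, j + dj - cj
                    if 0 <= r < H and 0 <= c < W:
                        img[r, c] += psf[di, dj]
            A[:, i * W + j] = img.ravel()
    return A

# Toy Gaussian PSF on a small grid
g = np.exp(-0.5 * (np.arange(-2, 3)[:, None] ** 2
                   + np.arange(-2, 3)[None, :] ** 2))
psf = g / g.sum()
A = build_sensing_matrix(psf, (16, 16))

x = np.zeros(16 * 16); x[5 * 16 + 7] = 1.0   # one emitter at (5, 7)
y = A @ x                                     # noiseless observation
```

With A in hand, the L1-regularized problem in the quoted passage can be handed to any sparse solver.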
Section: L1-Norm Regularized Sparse Recovery With Progressive Refinement (mentioning, confidence: 99%)
“…For all the correct localizations, the fitting error is defined as the displacement of the localization result from the ground truth, and the fitting precision is calculated as the standard deviation of the fitting errors. Figure 5(b) compares PRIS with the 2D methods [3,4,20] via the standard deviation of the fitting errors in the XY-plane (dubbed σxy). We can see that PRIS demonstrates comparable or better lateral precision at high-density conditions (> 2.5 µm⁻²).…”
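The fitting-error and fitting-precision definitions quoted above reduce to a few lines of code once localizations have been matched to ground-truth positions. A minimal sketch, assuming the matching step has already been done; the function name, units, and noise level are illustrative, not taken from the paper:

```python
import numpy as np

def lateral_precision(localizations, ground_truth):
    """Fitting errors = displacement of each matched localization from its
    ground-truth position; fitting precision = standard deviation of those
    errors, reported per axis and as a combined XY-plane figure."""
    errs = localizations - ground_truth          # (N, 2) displacement vectors
    radial = np.linalg.norm(errs, axis=1)        # per-point XY displacement
    return errs.std(axis=0), radial.std()

# Toy example: noisy localizations scattered around known positions.
rng = np.random.default_rng(1)
gt = rng.uniform(0, 100, size=(500, 2))          # ground-truth positions
loc = gt + rng.normal(0, 0.02, size=gt.shape)    # localizations with jitter
per_axis, sigma_xy = lateral_precision(loc, gt)
```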
Section: Recovery With Astigmatic and Spindle PSFs With Single Plane (mentioning, confidence: 99%)
“…Higher time resolution requires faster accumulation of localized fluorophores, which can be achieved by localizing at higher emitter densities so that fewer camera frames are required. Such methods include multi-emitter fitting, demonstrated to work at moderate to high densities [18,19], and compressive-sensing methods such as CSSTORM [3], L1-homotopy [4], and SOFI-inspired sparse recovery [20,21].…”
Within the family of super-resolution (SR) fluorescence microscopy, single-molecule localization microscopies (PALM [1], STORM [2] and their derivatives) afford among the highest spatial resolutions (approximately 5 to 10 nm), but often with moderate temporal resolution. The high spatial resolution relies on the adequate accumulation of precise localizations of bright fluorophores, which requires the bright fluorophores to have a relatively low spatial density. Several methods have demonstrated localization at higher densities in both two dimensions (2D) [3,4] and three dimensions (3D) [5][6][7]. Additionally, with further advancements, such as functional super-resolution [8,9] and point spread function (PSF) engineering with [8][9][10][11] or without [12] multi-channel observations, extra information (spectra, dipole orientation) can be encoded and recovered at the single-molecule level. However, such advancements have not been fully extended to high-density localizations in 3D. In this work, we adopt sparse recovery using simple matrix/vector operations, and propose a systematic progressive refinement method (dubbed PRIS) for 3D high-density reconstruction. Our method allows for localization reconstruction using experimental PSFs that include the spatial aberrations and fingerprint patterns of the PSFs [13]. We generalized the method for PSF engineering, and for multi-channel and multi-species observations, using different forms of matrix concatenation. Reconstructions with both double-helix and astigmatic PSFs, for both single-plane and biplane settings, are demonstrated, together with the recovery capability for a mixture of two different color species.
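The matrix concatenations mentioned for multi-channel and multi-species observations follow directly from the linear model y = Ax. A hypothetical sketch with random stand-in sensing matrices (in practice each block would be built from a measured PSF, as described above):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 64, 256                      # pixels per channel, grid points

# Two channels observing the SAME emitter distribution x (e.g. biplane):
# stack the observations and vertically concatenate the per-channel
# sensing matrices, so one x explains both channels jointly.
A1, A2 = rng.standard_normal((m, n)), rng.standard_normal((m, n))
x = np.zeros(n); x[[10, 100]] = 1.0
y_biplane = np.concatenate([A1 @ x, A2 @ x])
A_biplane = np.vstack([A1, A2])     # shape (2m, n)
assert np.allclose(A_biplane @ x, y_biplane)

# Two species mixed in ONE observation: the target vector becomes the
# concatenation [x_a; x_b], and the sensing matrix is concatenated
# horizontally, with one block per species PSF.
Aa, Ab = rng.standard_normal((m, n)), rng.standard_normal((m, n))
xa = np.zeros(n); xa[10] = 1.0
xb = np.zeros(n); xb[200] = 1.0
y_mix = Aa @ xa + Ab @ xb
A_mix = np.hstack([Aa, Ab])         # shape (m, 2n)
assert np.allclose(A_mix @ np.concatenate([xa, xb]), y_mix)
```

Vertical concatenation pools measurements of a single target vector (biplane or multi-channel); horizontal concatenation grows the target vector, so one set of measurements can separate species by their distinct PSFs.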