Autofocusing is a critical step for high-quality microscopic imaging of specimens, especially for measurements that extend over time and cover large fields-of-view. Autofocusing is generally performed using two main approaches. Hardware-based optical autofocusing methods rely on additional distance sensors integrated into the microscopy system. Algorithmic autofocusing methods, on the other hand, typically require axial scanning through the sample volume, leading to longer imaging times and potentially introducing phototoxicity and photobleaching in the sample. Here, we demonstrate a deep learning-based offline autofocusing method, termed Deep-R, that is trained to rapidly and blindly autofocus a single-shot microscopy image of a specimen acquired at an arbitrary out-of-focus plane. We illustrate the efficacy of Deep-R using various tissue sections imaged with fluorescence and brightfield microscopy modalities, and demonstrate snapshot autofocusing under different scenarios, such as a uniform axial defocus as well as a sample tilt within the field-of-view. Our results reveal that Deep-R is significantly faster than standard online algorithmic autofocusing methods. This deep learning-based blind autofocusing framework opens up new opportunities for rapid microscopic imaging of large sample areas while also reducing the photon dose on the sample.
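The scan-based algorithmic autofocusing that Deep-R is compared against can be sketched as: acquire an axial (z) stack of images, score each plane with a focus criterion, and keep the sharpest plane. The following is an illustrative sketch only; the function names and the choice of the normalized-variance criterion are ours, not the paper's:

```python
import numpy as np

def focus_score(img: np.ndarray) -> float:
    """Normalized-variance focus criterion: image variance scaled by the
    mean intensity, which reduces sensitivity to overall brightness."""
    img = img.astype(np.float64)
    mu = img.mean()
    return float(((img - mu) ** 2).mean() / mu)

def scan_autofocus(z_stack: np.ndarray) -> int:
    """Pick the in-focus plane from a (num_planes, H, W) axial stack by
    exhaustively scoring every plane -- the costly step Deep-R avoids."""
    return int(np.argmax([focus_score(plane) for plane in z_stack]))
```

Because every candidate plane must be physically acquired and scored, the total acquisition time and photon dose grow with the number of axial steps, which is precisely the overhead a single-shot method removes.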
Focusing criterion       | Average time (sec/mm²) | Standard deviation (sec/mm²)
Vollath F4 [39]          | 42.91                  | 3.16
Vollath F5 [39]          | 39.57                  | 3.16
Standard deviation       | 37.22                  | 3.07
Normalized variance [10] | 36.50                  | 0.36
Deep-R (CPU)             | 20.04                  | 0.23
Deep-R (GPU)             |  2.98                  | 0.08

Table 1. Deep-R computation time per 1 mm² of sample FOV (captured using a 20×/0.75 NA objective lens) compared against other state-of-the-art autofocusing methods.
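The classical focusing criteria listed in Table 1 have standard closed-form definitions. Minimal NumPy sketches of three of them are given below; these are our own illustrative implementations (shifting along one image axis), not the code used in the paper:

```python
import numpy as np

def normalized_variance(img: np.ndarray) -> float:
    """Image variance normalized by the mean intensity."""
    img = img.astype(np.float64)
    mu = img.mean()
    return float(((img - mu) ** 2).mean() / mu)

def vollath_f4(img: np.ndarray) -> float:
    """Vollath F4: autocorrelation at lag 1 minus autocorrelation at lag 2."""
    img = img.astype(np.float64)
    return float((img[:-1] * img[1:]).sum() - (img[:-2] * img[2:]).sum())

def vollath_f5(img: np.ndarray) -> float:
    """Vollath F5: autocorrelation at lag 1 minus (pixel count) * mean^2."""
    img = img.astype(np.float64)
    return float((img[:-1] * img[1:]).sum() - img.size * img.mean() ** 2)
```

Each score increases with image sharpness, so a scan-based routine evaluates the chosen criterion at every axial position and keeps the plane with the maximum; Deep-R instead skips the scan entirely and refocuses a single defocused image.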