Image denoising or artefact removal using deep learning is possible when a supervised training dataset is available, either acquired in real experiments or synthesized using known noise models. Neither condition can be fulfilled for nanoscopy (super-resolution optical microscopy) images, which are generated from microscopy videos through statistical analysis techniques. Due to several physical constraints, a supervised dataset cannot be measured. Further, the data undergo non-linear spatio-temporal mixing, and the valuable statistics of fluctuations from fluorescent molecules compete with the noise statistics. Therefore, noise or artefact models in nanoscopy images cannot be explicitly learned. Here, we propose a robust and versatile simulation-supervised approach for training deep learning auto-encoder architectures on the highly challenging nanoscopy images of sub-cellular structures inside biological samples. We show a proof of concept for one nanoscopy method and investigate the generalizability across structures and nanoscopy algorithms not included during simulation-supervised training. We also investigate a variety of loss functions and learning models and discuss the limitations of existing performance metrics for nanoscopy images. We generate valuable insights for this highly challenging and unsolved problem in nanoscopy, and lay the foundation for the application of deep learning to problems in nanoscopy for the life sciences.
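To make the idea of simulation-supervised training concrete, the following is a minimal sketch of a convolutional denoising auto-encoder trained on simulated noisy/clean image pairs. The network layout, the toy noise model (Poisson plus Gaussian), and the MSE loss are illustrative assumptions only, not the architecture, simulator, or loss used in the paper.

```python
# Minimal sketch: simulation-supervised denoising auto-encoder (PyTorch).
# All names and the toy data generator are illustrative assumptions.
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def simulate_pair(batch=8, size=64):
    """Toy stand-in for a physics-based simulator: clean structures plus
    signal-dependent (Poisson-like) and Gaussian noise."""
    clean = torch.rand(batch, 1, size, size)
    noisy = torch.poisson(clean * 20.0) / 20.0 + 0.05 * torch.randn_like(clean)
    return noisy.clamp(0, 1), clean

model = DenoisingAutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # one of several candidate losses one could compare

for step in range(200):
    noisy, clean = simulate_pair()
    loss = loss_fn(model(noisy), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```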
High-resolution microscopy is heavily dependent on superb optical elements, and superresolution microscopy even more so. Correcting unavoidable optical aberrations during post-processing is an elegant way to reduce the optical system's complexity. A prime method that promises superresolution, aberration correction, and quantitative phase imaging is Fourier ptychography. This microscopy technique combines many images of the sample, recorded at differing illumination angles akin to computed tomography, and uses error minimisation between the recorded images and those generated by a forward model. The more precisely those illumination angles are known in the image-formation forward model, the better the result. Therefore, illumination estimation from the raw data is an important step that supports correct phase recovery and aberration correction. Here, we derive how illumination estimation can be cast as an object detection problem, which permits the use of a fast convolutional neural network (CNN) for this task. We find that Faster R-CNN delivers highly robust results and outperforms classical approaches by far, with up to a 3-fold reduction in estimation errors. Intriguingly, we find that the conventionally beneficial smoothing and filtering of raw data is counterproductive in this type of application. We present a detailed analysis of the network's performance and openly provide all the software we developed.
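As an illustration of casting illumination estimation as object detection, the sketch below runs a torchvision Faster R-CNN on the log-magnitude Fourier spectrum of one raw frame and converts the detected pupil-disc centre into an illumination angle. The pre-trained weights, the per-pixel frequency step `dk`, and the wavelength are placeholder assumptions; in practice the detector would be fine-tuned on suitable training data rather than used off the shelf.

```python
# Illustrative sketch: illumination-angle estimation as object detection.
# Numbers (dk, wavelength) and the pre-trained weights are assumptions.
import numpy as np
import torch
import torchvision

# A pre-trained detector as a stand-in; the head would normally be
# fine-tuned on Fourier-space intensity patterns of the pupil.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def estimate_illumination(raw_image, dk=1e5, wavelength=532e-9):
    """Detect the bright pupil disc in the Fourier magnitude of one raw
    frame and convert its centre to an illumination angle (radians)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(raw_image)))
    spectrum = np.log1p(spectrum)
    spectrum /= spectrum.max()
    x = torch.from_numpy(spectrum).float().unsqueeze(0).repeat(3, 1, 1)
    with torch.no_grad():
        det = model([x])[0]
    if len(det["boxes"]) == 0:          # no detection on this frame
        return None
    box = det["boxes"][det["scores"].argmax()].numpy()
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    centre = np.array(raw_image.shape[::-1]) / 2   # spectrum centre (pixels)
    kx, ky = (np.array([cx, cy]) - centre) * dk    # spatial frequency (1/m)
    sin_theta = np.clip(np.hypot(kx, ky) * wavelength, -1.0, 1.0)
    return np.arcsin(sin_theta)
```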
Multispectral quantitative phase imaging (MS-QPI) is a high-contrast label-free technique for morphological imaging of specimens. The aim of the present study is to extract spectrally dependent quantitative information in a single shot using a highly spatially sensitive digital holographic microscope assisted by a deep neural network. Three different wavelengths are used in our method: λ = 532, 633, and 808 nm. The first step is to acquire the interferometric data for each wavelength. The acquired datasets are used to train a generative adversarial network to generate multispectral (MS) quantitative phase maps from a single input interferogram. The network was trained and validated on two different samples: an optical waveguide and MG63 osteosarcoma cells. The present approach is validated by comparing the predicted MS phase maps with numerically reconstructed (FT+TIE) phase maps and quantifying the agreement with different image quality assessment metrics.
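A minimal pix2pix-style sketch of such a conditional GAN is given below: a generator maps one interferogram to three phase-map channels (one per wavelength) and is trained with an adversarial term plus an L1 term against FT+TIE-reconstructed targets. The network depths, loss weighting, and optimiser settings are assumptions for illustration, not the architecture used in the study.

```python
# Sketch: conditional GAN mapping one interferogram to three phase maps.
# Architecture and hyperparameters are assumed, pix2pix-like placeholders.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),  # 3 spectral channels
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, 1, 1),  # patch-level real/fake logits
        )
    def forward(self, interferogram, phase):
        return self.net(torch.cat([interferogram, phase], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(interferogram, target_phase):
    # discriminator update
    with torch.no_grad():
        fake = G(interferogram)
    real_logits = D(interferogram, target_phase)
    fake_logits = D(interferogram, fake)
    d_loss = (bce(real_logits, torch.ones_like(real_logits)) +
              bce(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator update: adversarial term plus L1 against FT+TIE phase maps
    fake = G(interferogram)
    fake_logits = D(interferogram, fake)
    g_loss = bce(fake_logits, torch.ones_like(fake_logits)) + 100.0 * l1(fake, target_phase)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```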
Multi-target tracking is a relatively recent approach used to find the same object across different camera views and to determine the locations and sizes of different objects at different places [7]. Tracking and detection of moving objects are challenging research topics in many computer vision applications. Nowadays, the demand for surveillance cameras is increasing rapidly, as they are useful for surveillance and monitoring purposes. Previous methods used for multi-target tracking include color histograms and the brightness transfer function (BTF) [11]. Often it is not possible to cover the complete area of interest with a single camera; in such cases a multi-target tracking system with non-overlapping fields of view (FOVs) is needed [2]. In this paper we use AdaBoost as the feature-extraction method. The paper proposes reference-set-based tracking in non-overlapping FOVs, since overlapping FOVs incur a high cost. In this work the widely used features HSV color histograms, Local Binary Patterns (LBP), and Histograms of Oriented Gradients (HOG) are used to extract the color, texture, and shape of the target. We use LBP, HOG, and HSV color histogram features to determine a person's characteristics.
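The appearance descriptor described above can be sketched as follows: an HSV color histogram, an LBP texture histogram, and HOG shape features are computed on a resized person patch and concatenated into one feature vector. Bin counts, the LBP radius, and the HOG cell size are assumed values chosen for illustration, not those of the original paper.

```python
# Illustrative sketch: HSV color, LBP texture, and HOG shape features
# concatenated into one appearance descriptor. Parameters are assumptions.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern, hog

def describe_target(bgr_patch):
    patch = cv2.resize(bgr_patch, (64, 128))

    # color: HSV histogram (8x8x8 bins), normalised
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    color = cv2.calcHist([hsv], [0, 1, 2], None, [8, 8, 8],
                         [0, 180, 0, 256, 0, 256]).flatten()
    color /= color.sum() + 1e-8

    # texture: uniform LBP histogram on the grey image
    grey = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(grey, P=8, R=1, method="uniform")
    texture, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # shape: HOG descriptor
    shape = hog(grey, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2))

    return np.concatenate([color, texture, shape])
```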