Background: Cell nuclei segmentation is a fundamental task in microscopy image analysis, on which many downstream biological analyses depend. Although deep learning (DL) based techniques have achieved state-of-the-art performance in image segmentation tasks, these methods are usually complex and require powerful computing resources. Moreover, given the cost of medical exams, it is impractical to attach advanced computing hardware to every dark- or bright-field microscope in the many clinical institutions that employ them. It is therefore essential to develop accurate DL-based segmentation algorithms that work under resource-constrained computing. Results: An enhanced, lightweight U-Net (called U-Net+) with a modified encoder branch is proposed to work with low-resource computing. In strictly controlled experiments, the average IoU and precision of U-Net+ predictions outperform those of other prevalent competing methods by 1.0% to 3.0% on the stage-one test set of the 2018 Kaggle Data Science Bowl cell nuclei segmentation contest, with shorter inference time. Conclusions: Our results preliminarily demonstrate the potential of the proposed U-Net+ for correctly spotting microscopy cell nuclei under resource-constrained computing.
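As a concrete illustration of the metrics reported above, IoU and precision for a binary segmentation mask can be computed as follows (a minimal sketch with illustrative function and argument names, not the contest's official scoring code):

```python
import numpy as np

def iou_and_precision(pred, truth):
    """Compute IoU and precision for binary segmentation masks.

    `pred` and `truth` are arrays of the same shape; nonzero entries
    mark nucleus pixels. (Illustrative helper, not the Kaggle scorer.)
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = intersection / union if union else 1.0          # overlap / combined area
    precision = intersection / pred.sum() if pred.sum() else 0.0  # TP / predicted positives
    return iou, precision
```

In practice the contest averaged such scores over IoU thresholds and over images; this sketch shows only the per-mask computation.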
Acute lymphoblastic leukemia (ALL) is a blood cancer that led to 111,000 deaths globally in 2015. Diagnosing ALL increasingly involves microscopic image analysis aided by deep learning (DL) techniques. However, as in most medical problems, the scarcity of training samples and the minor visual differences between ALL and normal cells make the image analysis task quite challenging. Herein, an image-augmentation-enhanced bagging ensemble with elaborately designed training subsets is proposed to tackle these challenges. Using our ensemble model's predictions, the weighted F1-scores on the preliminary and final test sets are 0.84 and 0.88, respectively, ranking within the top 10% of the ISBI-2019 Classification of Normal vs. Malignant White Blood Cancer Cells contest. Our results preliminarily show the efficacy and accuracy of DL-based techniques in ALL cell image analysis.
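The weighted F1-score cited above averages per-class F1 with class-frequency weights. A generic re-implementation (equivalent in spirit to scikit-learn's `f1_score(average='weighted')`, not the contest organizers' code) looks like this:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Weighted F1: per-class F1 averaged with class-frequency weights.

    `y_true` and `y_pred` are equal-length label sequences.
    (Generic definition; illustrative names.)
    """
    support = Counter(y_true)       # samples per true class
    total = len(y_true)
    score = 0.0
    for c in sorted(support):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += support[c] / total * f1  # weight each class by its support
    return score
```

Weighting by support matters here because ALL-vs-normal data is class-imbalanced, so a plain macro average would over-reward the rarer class.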
Optical tomography has a wide range of biomedical applications. Accurate prediction of photon transport in media is critical, as it directly affects the accuracy of the reconstructions. The radiative transfer equation (RTE) is the most accurate deterministic forward model, yet it has not been widely employed in practice due to the challenges of robust and efficient numerical implementation in high dimensions. Herein, we propose a method that combines the discrete ordinate method (DOM) with a streamline-diffusion modified continuous Galerkin method to numerically solve the RTE. Additionally, a phase function normalization technique is employed to dramatically reduce the instability of the DOM when fewer discrete angular points are used. To illustrate the accuracy and robustness of our method, the computed solutions to the RTE were compared with Monte Carlo (MC) simulations for two source types (ideal pencil beam and Gaussian beam) and multiple sets of optical properties. Results show that, with standard optical properties of human tissue, photon densities obtained using the RTE agree on average to within around 5% of those predicted by MC simulations over the entire/deeper region. These results suggest that this finite element implementation of the RTE is an accurate forward model for optical tomography in human tissues.
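One common form of the phase function normalization mentioned above rescales the discretized Henyey-Greenstein phase matrix so that, under the angular quadrature, scattered energy is conserved for every incoming ordinate. The sketch below shows that row-wise renormalization scheme under stated assumptions (random unit directions with uniform quadrature weights); the paper's exact normalization recipe may differ:

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function, normalized over the unit sphere."""
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * cos_theta) ** 1.5)

def normalize_phase_matrix(directions, weights, g):
    """Build the discrete phase matrix P[i, j] ~ p(s_i . s_j) and rescale
    each row so the quadrature sum_j P[i, j] * w_j equals 1, i.e. the
    discrete ordinates conserve scattered energy.

    `directions`: (n, 3) unit vectors; `weights`: (n,) quadrature weights
    summing to 4*pi. (Illustrative names; one common scheme only.)
    """
    cos = directions @ directions.T        # pairwise cosines of scattering angles
    P = henyey_greenstein(cos, g)
    row_sums = P @ weights                 # discrete integral over outgoing angles
    return P / row_sums[:, None]           # enforce exact conservation per row
```

Without this rescaling, a coarse angular quadrature under-integrates the sharply forward-peaked phase function (large g), which is a known source of DOM instability.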
Fluorescence molecular tomography (FMT), as well as its mesoscopic variant (MFMT), is widely employed to investigate molecular-level processes ex vivo or in vivo. However, acquiring depth-localized, less blurry reconstructions remains challenging, especially when the fluorophore (dye) is located within media with a large scattering coefficient. Herein, a two-stage deep learning-based three-dimensional (3-D) reconstruction algorithm is proposed. Its key element is a 3-D convolutional neural network that correctly predicts the boundary of the reconstruction, leading to refined results. Compared with a conventional algorithm, in silico experiments show that the relative volume and absolute centroid errors are reduced by over ∼50%, whereas intersection over union (IoU) increases by over 15% in most situations. These results preliminarily indicate the promising future of appropriately applying machine learning (deep learning)-based methods in MFMT.
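The three evaluation metrics named above (relative volume error, absolute centroid error, and IoU) have standard definitions on binary voxel volumes. A minimal sketch, with illustrative argument names rather than the paper's evaluation code:

```python
import numpy as np

def reconstruction_metrics(recon, truth, voxel_size=1.0):
    """Relative volume error, absolute centroid error, and IoU between a
    reconstructed and a ground-truth binary voxel volume.

    `recon` and `truth` are same-shape 3-D arrays; nonzero entries mark
    the fluorophore region. (Generic definitions; illustrative names.)
    """
    recon = recon.astype(bool)
    truth = truth.astype(bool)
    rel_volume_err = abs(int(recon.sum()) - int(truth.sum())) / truth.sum()
    c_recon = np.array(np.nonzero(recon)).mean(axis=1)   # voxel-index centroid
    c_truth = np.array(np.nonzero(truth)).mean(axis=1)
    centroid_err = np.linalg.norm(c_recon - c_truth) * voxel_size
    iou = (np.logical_and(recon, truth).sum()
           / np.logical_or(recon, truth).sum())
    return rel_volume_err, centroid_err, iou
```

Together the three numbers capture complementary failure modes: blur (volume), mislocalization (centroid), and shape mismatch (IoU).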
Some dental lesions are difficult to detect with traditional anatomical imaging methods such as visual observation, dental radiography, and X-ray computed tomography (CT). Therefore, we investigated the viability of an optical imaging technique, mesoscopic fluorescence molecular tomography (MFMT), for retrieving molecular contrast in dental samples. To establish the feasibility of obtaining 3-D images in teeth using MFMT, molecular contrast was simulated with a dye-filled capillary placed in the lower half of a human tooth ex vivo. The dye and excitation wavelength were chosen for excitation at 650-660 nm in order to simulate a carious lesion. The location of the capillary was varied by changing the depth from the surface at which the dye, at various concentrations, was introduced. MFMT reconstructions were benchmarked against micro-CT. Overall, MFMT exhibited a location accuracy of ~15% and a volume accuracy of ~15%, up to 2 mm in depth with moderate dye concentrations. These results demonstrate the potential of MFMT to retrieve molecular contrast in 3-D in highly scattering tissues, such as teeth.