Background: Obstructive sleep apnea (OSA) is a public health problem. Detailed analysis of the parapharyngeal fat pads can help us understand the pathogenesis of OSA and may guide interventions for this sleep disorder. A reliable, automatic parapharyngeal fat pad segmentation technique plays a vital role in investigating larger databases to identify anatomic risk factors for OSA.
Methods: Our research aims to develop a context-based automatic segmentation algorithm to delineate the fat pads from magnetic resonance images in a population-based study. Our segmentation pipeline involves texture analysis, connected component analysis, object-based image analysis, and supervised classification using an interactive visual analysis tool to segregate fat pads from other structures automatically.
Results: We developed a fully automatic segmentation technique that requires no user interaction to extract the fat pads. Our algorithm is fast enough to be applied to population-based epidemiological studies that provide large amounts of data. We evaluated our approach qualitatively on thirty datasets and quantitatively against ground truth for ten datasets, resulting in an average detected volume fraction of approximately 78% and a Dice coefficient of 79%, which is within the range of the inter-observer variation of manual segmentation results.
Conclusion: The suggested method produces sufficiently accurate results and has the potential to be applied in the study of large datasets to understand the pathogenesis of the OSA syndrome.
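The connected component analysis stage of such a pipeline can be sketched as follows (a minimal illustration, assuming a binary candidate mask and `scipy.ndimage`; the mask, size threshold, and variable names are hypothetical, not the paper's):

```python
import numpy as np
from scipy import ndimage

# Hypothetical illustration of a connected component analysis stage:
# label a binary candidate mask and keep only components above a size
# threshold, discarding small spurious regions before classification.
mask = np.zeros((8, 8), dtype=bool)
mask[1:3, 1:3] = True   # small component, 4 voxels
mask[4:8, 4:8] = True   # large component, 16 voxels

labels, n = ndimage.label(mask)                       # 4-connectivity by default
sizes = ndimage.sum(mask, labels, range(1, n + 1))    # voxels per component
keep = np.isin(labels, 1 + np.flatnonzero(sizes >= 10))
print(n, int(keep.sum()))  # 2 components found, 16 voxels kept
```

The same pattern extends to 3D volumes unchanged, since `ndimage.label` works on arrays of any dimension.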
Objectives: This study was conducted to evaluate the effect of geometric distortion (GD) on MRI lung volume quantification and to evaluate available manual, semi-automated, and fully automated methods for lung segmentation.
Methods: A phantom was scanned with MRI and CT. GD was quantified as the difference in the phantom's volume between MRI and CT, with CT as the gold standard. Dice scores were used to measure overlap in shapes. Furthermore, 11 subjects from a prospective population-based cohort study each underwent four chest MRI acquisitions. The resulting 44 MRI scans with 2D and 3D Gradwarp were used to test five segmentation methods. Intraclass correlation coefficients, Bland–Altman plots, Wilcoxon, Mann–Whitney U, and paired t tests were used for statistics.
Results: Using phantoms, volume differences between CT and MRI varied according to MRI position and 2D or 3D Gradwarp correction. With the phantom located at the isocenter, MRI overestimated the volume relative to CT by 5.56 ± 1.16% and 6.99 ± 0.22% with body and torso coils, respectively. Higher Dice scores and smaller intra-object differences were found for 3D Gradwarp MR images. In subjects, semi-automated and fully automated segmentation tools showed high agreement with manual segmentations (ICC = 0.971–0.993 for end-inspiratory scans; ICC = 0.992–0.995 for end-expiratory scans). Manual segmentation took approximately 3–4 h per scan versus 2–3 min for fully automated methods.
Conclusions: Volume overestimation of MRI due to GD can be quantified. Semi-automated and fully automated segmentation methods allow accurate, reproducible, and fast lung volume quantification. Chest MRI can be a valid radiation-free imaging modality for lung segmentation and volume quantification in large cohort studies.
Key Points:
• Geometric distortion varies according to MRI settings and patient positioning.
• Automated segmentation methods allow fast and accurate lung volume quantification.
• MRI is a valid radiation-free alternative to CT for quantitative data analysis.
Electronic supplementary material: The online version of this article (10.1007/s00330-018-5863-7) contains supplementary material, which is available to authorized users.
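The Dice score used above to measure overlap between shapes can be computed as follows (a minimal sketch; the function name and toy masks are our own, not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient, 2|A n B| / (|A| + |B|), for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: two overlapping 6x6 square masks on a 10x10 grid.
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[4:10, 4:10] = True
print(round(dice(a, b), 3))  # intersection is 4x4 = 16, so 32 / 72 = 0.444
```

A Dice score of 1 means perfect overlap; in the phantom study the metric would be applied slice-wise or volume-wise to MRI-versus-CT shapes.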
Background: Quantification of different types of cells is often needed for the analysis of histological images. In our project, we compute the relative number of proliferating hepatocytes to evaluate the regeneration process after partial hepatectomy in normal rat livers.
Results: Our automatic approach for hepatocyte (HC) quantification is suitable for the analysis of an entire digitized histological section given as a series of images. It is the main part of an automatic hepatocyte quantification tool that computes the ratio between the number of proliferating HC nuclei and the total number of HC nuclei for a series of images in one processing run. The processing pipeline yields the desired results for a wide range of images with different properties without additional parameter adjustment. Comparing the obtained segmentation results with a manually created segmentation mask, which is considered the ground truth, we achieve a sensitivity above 90% and a false positive fraction below 15%.
Conclusions: The proposed automatic procedure gives results with high sensitivity and a low false positive fraction and can be applied to process entire stained sections.
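The two reported metrics can be sketched as follows. Note that "false positive fraction" has several definitions in the literature; here we take it as FP / (TP + FP), the fraction of detected nuclei that are wrong, which is an assumption rather than the paper's stated definition:

```python
import numpy as np

def sensitivity_fpf(pred, truth):
    """Sensitivity = TP / (TP + FN). False positive fraction is taken
    here as FP / (TP + FP); definitions vary, so this is an assumption."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tp / (tp + fn), fp / (tp + fp)

# Toy per-candidate labels: 4 true nuclei, 4 background candidates.
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
pred  = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
sens, fpf = sensitivity_fpf(pred, truth)
print(sens, fpf)  # 0.75 0.25
```

The same arithmetic applies whether the comparison is per nucleus or per pixel against the manual ground-truth mask.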
Summary: Segmentation and surface extraction from 3D imaging data is an important task in medical applications. When dealing with scalar data such as CT or MRI scans, simple thresholding in the form of isosurface extraction is often a good choice. Isosurface extraction is a standard tool for visualizing scalar volume data. Its generalization to color data such as cryosections, however, is not straightforward. In particular, the user interaction in the form of isovalue selection needs to be replaced by the selection of a three-dimensional region in feature space. We present a user-friendly tool for segmentation and surface extraction from color volume data. Our approach consists of several automated steps and an intuitive mechanism for user-guided feature selection. Instead of overburdening the user with complicated operations in feature space, we perform an automated clustering of the occurring colors and suggest segmentations to the user. The suggestions are presented in a color table, from which the user can select the desired cluster. Simple and intuitive refinement methods are provided in case the automated clustering does not immediately generate exactly the desired solution. Finally, a marching technique is presented to extract the boundary surface of the desired cluster in object space.
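The automated clustering of occurring colors could, for instance, be realized with a simple k-means over RGB values, after which each cluster mean becomes one entry in the suggested color table. This is an illustrative NumPy-only sketch with synthetic colors; the actual tool's clustering algorithm and parameters may differ:

```python
import numpy as np

# Synthetic stand-in for a color volume: two color populations in RGB.
rng = np.random.default_rng(0)
voxels = np.vstack([
    rng.normal((200, 40, 40), 10, (500, 3)),   # reddish tissue
    rng.normal((40, 40, 200), 10, (500, 3)),   # bluish background
])

def kmeans(x, k, iters=20):
    """Plain k-means: alternate nearest-center assignment and mean update.
    Empty clusters keep their previous center."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = ((x[:, None] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([x[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(voxels, k=2)
print(np.round(centers).astype(int))  # two suggested cluster colors
```

Selecting one cluster from the table then corresponds to keeping the voxels whose label matches, which is the region handed to the surface-extraction step.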
GPU programmability opens a new perspective for algorithms that, due to their computational expense, have not been studied or used in real applications on commodity state-of-the-art hardware. In this paper, we present three implementations of a partitioning algorithm for multi-channel images, which extends an original algorithm for single-channel images presented in the early 1990s. The segmentation algorithm is based on the information-theoretic concept of minimum description length, which leads to the formulation of an energy functional. The optimal solution is obtained by minimizing this functional. The minimization follows a graduated non-convexity approach, which leads to a fully explicit scheme. As the scheme is applied to all pixels of the image simultaneously, it is naturally parallelizable. Besides the optimized sequential implementation in C++, we developed a GLSL version of the algorithm using vertex and fragment shaders, as well as a CUDA version using global, shared, and texture memory. We compare the performance of the implementations, discuss the implementation details, and show that the suitability of this algorithm for the GPU makes it a competitive alternative to modern partitioning algorithms such as multi-label graph cuts.
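The structural property being exploited, a fully explicit update applied to all pixels simultaneously, can be illustrated with a simple vectorized smoothing step. This is not the paper's MDL functional, only a stand-in showing the per-pixel parallel pattern that maps naturally to one GPU thread per pixel:

```python
import numpy as np

def explicit_step(u, lam=0.2):
    """One explicit update of a smoothing flow: each pixel moves toward
    the mean of its 4 neighbours (reflecting boundaries). Every pixel is
    computed independently from the previous iterate, so on a GPU this
    is one thread per pixel. Stable for lam <= 0.25."""
    up    = np.roll(u,  1, 0); up[0]        = u[0]
    down  = np.roll(u, -1, 0); down[-1]     = u[-1]
    left  = np.roll(u,  1, 1); left[:, 0]   = u[:, 0]
    right = np.roll(u, -1, 1); right[:, -1] = u[:, -1]
    return u + lam * (up + down + left + right - 4.0 * u)

u0 = np.random.default_rng(1).random((64, 64))
u = u0.copy()
for _ in range(50):
    u = explicit_step(u)
print(round(float(u0.std()), 3), round(float(u.std()), 3))  # variance shrinks
```

Because the new value of every pixel depends only on the previous iterate, no synchronization within a step is needed, which is what makes GLSL fragment-shader and CUDA implementations straightforward.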