Purpose: To develop and evaluate a fully automated algorithm for segmenting the abdomen from CT to quantify body composition. Materials and Methods: For this retrospective study, a convolutional neural network based on the U-Net architecture was trained to perform abdominal segmentation on a data set of 2430 two-dimensional CT examinations and was tested on 270 CT examinations. It was further tested on a separate data set of 2369 patients with hepatocellular carcinoma (HCC). CT examinations were performed between 1997 and 2015. The mean age of patients was 67 years; for male patients, it was 67 years (range, 29-94 years), and for female patients, it was 66 years (range, 31-97 years). Differences in segmentation performance were assessed by using two-way analysis of variance with Bonferroni correction. Results: Compared with reference segmentation, the model for this study achieved Dice scores (mean ± standard deviation) of 0.98 ± 0.03, 0.96 ± 0.02, and 0.97 ± 0.01 in the test set, and 0.94 ± 0.05, 0.92 ± 0.04, and 0.98 ± 0.02 in the HCC data set, for the subcutaneous, muscle, and visceral adipose tissue compartments, respectively. Performance met or exceeded that of expert manual segmentation. Conclusion: Model performance met or exceeded the accuracy of expert manual segmentation of CT examinations for both the test data set and the hepatocellular carcinoma data set. The model generalized well to multiple levels of the abdomen and may be capable of fully automated quantification of body composition metrics in three-dimensional CT examinations.
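Several of these abstracts report segmentation accuracy as a Dice score. As a minimal sketch (the toy masks below are illustrative, not data from any study), the Dice similarity coefficient between a predicted and a reference binary mask can be computed as:

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2-D masks standing in for one segmented tissue compartment.
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
ref = np.array([[1, 1, 0],
                [0, 0, 0]])
print(dice_score(pred, ref))  # 2*2 / (3+2) = 0.8
```

A score of 1.0 indicates perfect overlap with the reference segmentation; the per-compartment means quoted above (0.92-0.98) are averages of such scores over all test examinations.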
Chest computed tomography (CT) imaging has become indispensable for staging and managing coronavirus disease 2019 (COVID-19), and abnormalities associated with COVID-19 are currently evaluated mainly by visual scoring. The development of automated methods for quantifying COVID-19 abnormalities in these CT images would be invaluable to clinicians. The hallmark of COVID-19 in chest CT images is the presence of ground-glass opacities in the lung region, which are tedious to segment manually. We propose an anamorphic depth embedding-based lightweight CNN, called Anam-Net, to segment anomalies in COVID-19 chest CT images. The proposed Anam-Net has 7.8 times fewer parameters than the state-of-the-art UNet (or its variants), making it lightweight and capable of providing inferences on mobile or resource-constrained (point-of-care) platforms. The results from chest CT images (test cases) across different experiments showed that the proposed method could provide good Dice similarity scores for abnormal and normal regions in the lung. We have benchmarked Anam-Net against other state-of-the-art architectures, such as ENet, LEDNet, UNet++, SegNet, Attention UNet, and DeepLabV3+. The proposed Anam-Net was also deployed on embedded systems, such as Raspberry Pi 4 and NVIDIA Jetson Xavier, and in a mobile Android application (Cov-Seg) to demonstrate its suitability for point-of-care platforms. The generated codes, models, and the mobile application are available for enthusiastic users at https://github.com/NaveenPaluru/Segmentation-COVID-19.
Deep-learning algorithms typically fall within the domain of supervised artificial intelligence and are designed to “learn” from annotated data. Deep-learning models require large, diverse training datasets for optimal model convergence. The effort to curate these datasets is widely regarded as a barrier to the development of deep-learning systems. We developed RIL-Contour to accelerate medical image annotation for and with deep learning. A major goal driving the development of the software was to create an environment that enables clinically oriented users to utilize deep-learning models to rapidly annotate medical imaging. RIL-Contour supports fully automated deep-learning methods, semi-automated methods, and manual methods for annotating medical imaging with voxel and/or text annotations. To reduce annotation error, RIL-Contour promotes the standardization of image annotations across a dataset. RIL-Contour accelerates medical imaging annotation through the process of annotation by iterative deep learning (AID). The underlying concept of AID is to iteratively annotate, train, and utilize deep-learning models during the process of dataset annotation and model development. To enable this, RIL-Contour supports workflows in which multiple image analysts annotate medical images, radiologists approve the annotations, and data scientists utilize these annotations to train deep-learning models. To automate the feedback loop between data scientists and image analysts, RIL-Contour provides mechanisms that enable data scientists to push newly trained deep-learning models to other users of the software. RIL-Contour and the AID methodology accelerate dataset annotation and model development by facilitating rapid collaboration between analysts, radiologists, and engineers.
Background & aims: Excess adipose tissue may affect colorectal cancer (CRC) patients' disease progression and treatment. In contrast to the commonly used anthropometric measurements, Dual-Energy X-Ray Absorptiometry (DXA) and Computed Tomography (CT) can differentiate adipose tissues. However, these modalities are rarely used in the clinic despite providing high-quality estimates. This study aimed to compare DXA's measurement of abdominal visceral adipose tissue (VAT) and fat mass (FM) against a corresponding volume by CT in a CRC population. Secondly, we aimed to identify the best single lumbar CT slice for abdominal VAT. Lastly, we investigated the associations between anthropometric measurements and VAT estimated by DXA and CT. Methods: Non-metastatic CRC patients between 50 and 80 years from the ongoing randomized controlled trial CRC-NORDIET were included in this cross-sectional study. Corresponding abdominal volumes were acquired by Lunar iDXA and from clinically acquired CT examinations. Also, single CT slices at the L2, L3 and L4 levels were obtained. Agreement between the methods was investigated using univariate linear regression and Bland-Altman plots. Results: Sixty-six CRC patients were included. Abdominal volumetric VAT and FM measured by DXA explained up to 91% and 96% of the variance in VAT and FM by CT, respectively. Bland-Altman plots demonstrated an overestimation of VAT by DXA compared to CT (mean difference of 76 cm³) concurrent with an underestimation of FM (mean difference of −319 cm³). A higher overestimation of VAT (p = 0.015) and underestimation of FM (p = 0.036) were observed in obese relative to normal weight subjects. VAT in a single slice at the L3 level showed the highest explained variance against CT volume (R² = 0.97), but a combination of three slices (L2, L3, L4) explained a significantly higher variance than L3 alone (R² = 0.98, p < 0.006).
The anthropometric measurements explained between 31% and 65% of the variance of volumetric VAT measured by DXA and CT. Conclusions: DXA and the combined use of three CT slices (L2-L4) are valid to predict abdominal volumetric VAT and FM in CRC patients when using volumetric CT as a reference method. Due to the poor
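The agreement analysis above rests on Bland-Altman statistics: the bias is the mean of the paired differences, and the 95% limits of agreement are bias ± 1.96 standard deviations. A minimal sketch, using made-up paired VAT volumes rather than the study's data:

```python
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bias (mean difference) and 95% limits of agreement between two methods."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired VAT volumes (cm^3) by DXA and CT for five patients.
dxa = np.array([1500.0, 2200.0, 1800.0, 2600.0, 2000.0])
ct = np.array([1430.0, 2100.0, 1760.0, 2500.0, 1900.0])

bias, lo, hi = bland_altman(dxa, ct)
print(f"bias = {bias:.1f} cm^3, limits of agreement = ({lo:.1f}, {hi:.1f})")
```

A positive bias, as in this sketch, corresponds to the overestimation of VAT by DXA relative to CT reported in the abstract; the conventional Bland-Altman plot graphs each pair's difference against its mean alongside these three horizontal lines.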
Background Manual assessment of bone marrow signal is time-consuming and requires meticulous standardisation to secure adequate precision of findings. Objective We examined the feasibility of using deep learning for automated segmentation of bone marrow signal in children and adolescents. Materials and methods We selected knee images from 95 whole-body MRI examinations of healthy individuals and of children with chronic non-bacterial osteomyelitis, ages 6–18 years, in a longitudinal prospective multi-centre study cohort. Bone marrow signal on T2-weighted Dixon water-only images was divided into three colour-coded intensity levels: 1 = slightly increased; 2 = mildly increased; 3 = moderately to highly increased, up to fluid-like signal. We trained a convolutional neural network on 85 examinations to perform bone marrow segmentation. Four readers manually segmented a test set of 10 examinations and calculated ground truth using simultaneous truth and performance level estimation (STAPLE). We evaluated model and rater performance using the Dice similarity coefficient and by consensus scoring. Results The consensus score of model performance showed acceptable results for all but one examination. Model performance and reader agreement had the highest scores for level-1 signal (median Dice 0.68) and the lowest scores for level-3 signal (median Dice 0.40), particularly in examinations where this signal was sparse. Conclusion It is feasible to develop a deep-learning-based model for automated segmentation of bone marrow signal in children and adolescents. Our model performed poorest for the highest signal intensity in examinations where this signal was sparse. Further improvement requires training on larger and more balanced datasets and validation against ground truth, which should be established by radiologists from several institutions in consensus.
Purpose To investigate prediction of age older than 18 years in sub-adults using tooth tissue volumes from MRI segmentation of the entire 1st and 2nd molars, and to establish a model for combining information from two different molars. Materials and methods We acquired T2-weighted MRIs of 99 volunteers with a 1.5-T scanner. Segmentation was performed using SliceOmatic (Tomovision©). Linear regression was used to analyse the association between mathematical transformation outcomes of tissue volumes, age, and sex. Performance of different outcomes and tooth combinations was assessed based on the p-value of the age variable, common or separate for each sex, depending on the selected model. The predictive probability of being older than 18 years was obtained by a Bayesian approach using information from the 1st and 2nd molars both separately and combined. Results 1st molars from 87 participants and 2nd molars from 93 participants were included. The age range was 14-24 years with a median age of 18 years. The transformation outcome (high signal soft tissue + low signal soft tissue)/total had the strongest statistical association with age for the lower right 1st molar (p = 7.1 × 10⁻⁴ for males) and 2nd molar (p = 9.44 × 10⁻⁷ for males and p = 7.4 × 10⁻¹⁰ for females). Combining the lower right 1st and 2nd molar in males did not increase the prediction performance compared to using the best tooth alone. Conclusion MRI segmentation of the lower right 1st and 2nd molar might prove useful in the prediction of age older than 18 years in sub-adults. We provided a statistical framework to combine the information from two molars.
Purpose Our aim was to investigate tissue volumes measured by MRI segmentation of the entire 3rd molar for prediction of a sub-adult being older than 18 years. Material and method We used a 1.5-T MR scanner with a customized high-resolution single T2 sequence acquisition with 0.37 mm iso-voxels. Two dental cotton rolls drawn with water stabilized the bite and delineated teeth from oral air. Segmentation of the different tooth tissue volumes was performed using SliceOmatic (Tomovision©). Linear regression was used to analyze the association between mathematical transformation outcomes of the tissue volumes, age, and sex. Performance of different transformation outcomes and tooth combinations were assessed based on the p value of the age variable, combined or separated for each sex depending on the selected model. The predictive probability of being older than 18 years was obtained by a Bayesian approach. Results We included 67 volunteers (F/M: 45/22), range 14–24 years, median age 18 years. The transformation outcome (pulp + predentine)/total volume for upper 3rd molars had the strongest association with age (p = 3.4 × 10−9). Conclusion MRI segmentation of tooth tissue volumes might prove useful in the prediction of age older than 18 years in sub-adults.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.
hi@scite.ai
10624 S. Eastern Ave., Ste. A-614
Henderson, NV 89052, USA
Copyright © 2024 scite LLC. All rights reserved.
Made with 💙 for researchers
Part of the Research Solutions Family.