It is commonly believed that higher order smoothness should be modeled using higher order interactions. For example, 2nd order derivatives for deformable (active) contours are represented by triple cliques. Similarly, 2nd order regularization methods in stereo predominantly use MRF models with scalar (1D) disparity labels and triple clique interactions. In this paper we advocate a largely overlooked alternative approach to stereo where 2nd order surface smoothness is represented by pairwise interactions with 3D labels, e.g. tangent planes. This general paradigm has been criticized due to the perceived computational complexity of optimization in a higher-dimensional label space. Contrary to popular belief, we demonstrate that representing 2nd order surface smoothness with 3D labels leads to simpler optimization problems with (nearly) submodular pairwise interactions. Our theoretical and experimental results demonstrate advantages over state-of-the-art methods for 2nd order smoothness stereo.
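The key idea above, 2nd order smoothness via pairwise terms over 3D plane labels, can be illustrated with a small sketch. This is not the paper's exact energy; the function names and the specific penalty (disagreement of the two plane labels evaluated at both pixel locations) are illustrative assumptions. Note that the penalty vanishes for any two pixels assigned the same slanted plane, which is exactly what scalar disparity labels with pairwise terms cannot express.

```python
def plane_disp(plane, x, y):
    """Disparity predicted at pixel (x, y) by a tangent-plane label (a, b, c)."""
    a, b, c = plane
    return a * x + b * y + c

def pairwise_smoothness(fp, fq, p, q):
    """Hypothetical pairwise term between neighboring pixels p and q:
    how much the two plane labels disagree at both pixel locations."""
    return (abs(plane_disp(fp, *p) - plane_disp(fq, *p))
            + abs(plane_disp(fp, *q) - plane_disp(fq, *q)))

slanted = (0.5, 0.0, 10.0)        # a slanted surface
fronto = (0.0, 0.0, 11.0)         # a fronto-parallel surface
p, q = (3, 7), (4, 7)             # two horizontally adjacent pixels

print(pairwise_smoothness(slanted, slanted, p, q))  # 0.0: same plane, no cost
print(pairwise_smoothness(slanted, fronto, p, q))   # > 0: labels disagree
```

With scalar disparity labels, penalizing 2nd order smoothness requires triple cliques (three collinear pixels); with plane labels the same prior reduces to the pairwise term above, which is the source of the (near) submodularity the abstract claims.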
Background: Artificial intelligence (AI) is about to transform medical imaging. The Research Consortium for Medical Image Analysis (RECOMIA), a not-for-profit organisation, has developed an online platform to facilitate collaboration between medical researchers and AI researchers. The aim is to minimise the time and effort researchers need to spend on technical aspects, such as transfer, display, and annotation of images, as well as legal aspects, such as de-identification. The purpose of this article is to present the RECOMIA platform and its AI-based tools for organ segmentation in computed tomography (CT), which can be used for extraction of standardised uptake values from the corresponding positron emission tomography (PET) image. Results: The RECOMIA platform includes modules for (1) local de-identification of medical images, (2) secure transfer of images to the cloud-based platform, (3) display functions available using a standard web browser, (4) tools for manual annotation of organs or pathology in the images, (5) deep learning-based tools for organ segmentation or other customised analyses, (6) tools for quantification of segmented volumes, and (7) an export function for the quantitative results. The AI-based tool for organ segmentation in CT currently handles 100 organs (77 bones and 23 soft tissue organs). The segmentation is based on two convolutional neural networks (CNNs): one network to handle organs with multiple similar instances, such as vertebrae and ribs, and one network for all other organs. The CNNs have been trained using CT studies from 339 patients. Experienced radiologists annotated organs in the CT studies. The performance of the segmentation tool, measured as mean Dice index on a manually annotated test set with 10 representative organs, was 0.93 for all foreground voxels, and the mean Dice index over the organs was 0.86 (0.82 for the soft tissue organs and 0.90 for the bones).
Conclusion: The paper presents a platform that provides deep learning-based tools that can perform basic organ segmentations in CT, which can then be used to automatically obtain the corresponding measurements in the PET image. The RECOMIA platform is available on request at www.recomia.org for research purposes.
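The workflow described above, using a CT-derived organ mask to read out standardised uptake values (SUVs) from the co-registered PET image, can be sketched as follows. This is a simplified illustration, not the RECOMIA implementation: the function names are assumptions, and the SUV formula below is the standard body-weight normalisation with decay correction and unit handling omitted.

```python
def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight-normalised SUV for one voxel (simplified: no decay
    correction; assumes tissue density of 1 g/ml)."""
    return activity_bq_per_ml * body_weight_g / injected_dose_bq

def mean_suv(pet_suv_values, organ_mask):
    """Mean SUV over the voxels flagged by the CT-derived organ mask.
    Both arguments are flat, co-registered per-voxel sequences."""
    vals = [v for v, inside in zip(pet_suv_values, organ_mask) if inside]
    return sum(vals) / len(vals)

# a voxel with 5 kBq/ml, 200 MBq injected, 70 kg patient
print(suv(5000.0, 200e6, 70000.0))       # 1.75
print(mean_suv([2.0, 4.0, 6.0], [1, 0, 1]))  # 4.0: mean over masked voxels
```

In practice the mask is produced by the CNN on the CT volume and resampled to the PET grid before the readout; that resampling step is elided here.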
The aim of this study was to develop a deep learning-based method for segmentation of bones in CT scans and test its accuracy compared to manual delineation, as a first step in the creation of an automated PET/CT-based method for quantifying skeletal tumour burden. Methods: Convolutional neural networks (CNNs) were trained to segment 49 bones using manual segmentations from 100 CT scans. After training, the CNN-based segmentation method was tested on 46 patients with prostate cancer, who had undergone 18F-choline PET/CT and 18F-NaF PET/CT less than three weeks apart. Bone volumes were calculated from the segmentations. The network's performance was compared with manual segmentations of five bones made by an experienced physician. Accuracy of the spatial overlap between automated CNN-based and manual segmentations of these five bones was assessed using the Sørensen-Dice index (SDI). Reproducibility was evaluated applying the Bland-Altman method. Results: The median (SD) volumes of the five selected bones, by CNN-based and manual segmentation respectively, were: Th7 41 (3.8) and 36 (5.1), L3 76 (13) and 75 (9.2), sacrum 284 (40) and 283 (26), 7th rib 33 (3.9) and 31 (4.8), sternum 80 (11) and 72 (9.2). Median SDIs were 0.86 (Th7), 0.85 (L3), 0.88 (sacrum), 0.84 (7th rib) and 0.83 (sternum). The intraobserver volume difference was smaller with the CNN-based approach than with the manual approach: Th7 2% and 14%, L3 7% and 8%, sacrum 1% and 3%, 7th rib 1% and 6%, sternum 3% and 5%, respectively. The average volume difference, measured as the ratio of volume difference to mean volume between the two CNN-based segmentations, was 5-6% for the vertebral column and ribs and ≤3% for other bones.
Conclusion: The new deep learning-based method for automated segmentation of bones in CT scans provided highly accurate bone volumes in a fast and automated way and thus appears to be a valuable first step in the development of a clinically useful processing procedure providing reliable skeletal segmentation as a key part of quantification of skeletal metastases.
Abstract-We introduce a multi-region model for simultaneous segmentation of medical images. In contrast to many other models, geometric constraints such as inclusion and exclusion between the regions are enforced, which makes it possible to correctly segment different regions even if the intensity distributions are identical. We efficiently optimize the model using a combination of graph cuts and Lagrangian duality, which is faster and more memory-efficient than the current state of the art. As the method is based on global optimization techniques, the resulting segmentations are independent of initialization. We apply our framework to the segmentation of the left and right ventricles, myocardium, and the left ventricular papillary muscles in MRI, and to lung segmentation in full-body X-ray CT. We evaluate our approach on a publicly available benchmark with competitive results.
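The geometric constraints mentioned above can be encoded in the energy itself: inclusion of region B inside region A becomes a pairwise term that assigns infinite cost to any voxel labelled B but not A, so no finite-energy labelling can violate nesting even when the data term cannot tell the regions apart. The following is a minimal brute-force illustration of that encoding, not the paper's graph-cut/Lagrangian-duality optimizer; the names and the toy 3-pixel strip are ours.

```python
import itertools

INF = float('inf')

def inclusion_penalty(inner, outer):
    """Pairwise term between the two layer labels at one pixel:
    infinite cost if the inner region is present where the outer is not."""
    return INF if inner == 1 and outer == 0 else 0.0

# Enumerate all joint labelings of a 3-pixel strip with two layers
# (outer region A, inner region B) and keep the finite-energy ones.
feasible = []
for outer in itertools.product([0, 1], repeat=3):
    for inner in itertools.product([0, 1], repeat=3):
        energy = sum(inclusion_penalty(i, o) for i, o in zip(inner, outer))
        if energy < INF:
            feasible.append((inner, outer))

# Every surviving labeling nests B inside A, regardless of any data term.
assert all(o >= i for inn, out in feasible for i, o in zip(inn, out))
print(len(feasible))  # 27: three allowed (inner, outer) pairs per pixel
```

In the paper's setting the same idea is realised with graph-cut-representable terms so that a global optimum can be found efficiently; the brute-force enumeration here only demonstrates what the constraint rules out.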