Liver segmentation remains a challenging task in medical image processing due to the complexity of the liver's anatomy, its low contrast with adjacent organs, and the presence of pathologies. This study developed and validated an automated method for segmenting the liver in CT images. The proposed framework consists of three steps: 1) preprocessing; 2) initialization; and 3) segmentation. In the first step, a statistical shape model is constructed based on principal component analysis, and the input image is smoothed using curvature anisotropic diffusion filtering. In the second step, the mean shape model is moved, using thresholding and the Euclidean distance transform, to a coarse position in the test image; the initial mesh is then locally and iteratively deformed toward the coarse boundary while being constrained to stay close to a subspace of shapes describing the anatomical variability. Finally, to detect the liver surface accurately, a deformable graph cut is proposed that effectively integrates the properties and interrelationship of the input image and the initialized surface. The proposed method was evaluated on 50 CT scans from two publicly available databases, Sliver07 and 3Dircadb. The experimental results showed that the proposed method detects the liver surface effectively and accurately.
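As a minimal illustration of the preprocessing and coarse-initialization steps described above, the sketch below smooths a CT volume with curvature anisotropic diffusion, thresholds it, and locates the deepest point of the Euclidean distance map as a coarse liver position. It assumes SimpleITK and NumPy; the HU thresholds and filter parameters are illustrative placeholders, not the values used in the study.

```python
# Sketch of preprocessing + coarse localization, assuming SimpleITK.
# Thresholds and filter parameters are illustrative placeholders.
import numpy as np
import SimpleITK as sitk

def preprocess_and_localize(ct_path, low_hu=40, high_hu=200):
    image = sitk.ReadImage(ct_path, sitk.sitkFloat32)

    # Edge-preserving smoothing (curvature anisotropic diffusion).
    smoothed = sitk.CurvatureAnisotropicDiffusion(
        image, timeStep=0.0625, conductanceParameter=3.0,
        numberOfIterations=5)

    # Threshold to a rough soft-tissue range, then take the Euclidean
    # distance transform; the voxel deepest inside the mask serves as
    # a coarse position for placing the mean shape model.
    mask = sitk.BinaryThreshold(smoothed, low_hu, high_hu, 1, 0)
    dist = sitk.SignedMaurerDistanceMap(
        mask, insideIsPositive=True, squaredDistance=False,
        useImageSpacing=True)
    arr = sitk.GetArrayFromImage(dist)
    seed_zyx = np.unravel_index(int(np.argmax(arr)), arr.shape)
    return smoothed, seed_zyx
```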
Accurate lung tumor delineation plays an important role in radiotherapy treatment planning. Because the lung tumor has poor boundaries in PET images and low contrast in CT images, tumor segmentation in PET and CT is a challenging task. In this study, we effectively integrate the two modalities by making full use of the superior contrast of PET images and the superior spatial resolution of CT images. Random walk and graph cut methods are integrated to solve the segmentation problem: the random walk is used as an initialization tool to provide object seeds for graph cut segmentation on the PET and CT images. The co-segmentation problem is formulated as an energy minimization problem solved by the max-flow/min-cut method. A graph comprising two sub-graphs and a special link is constructed, in which one sub-graph is for PET and the other for CT, and the special link encodes a context term that penalizes differences between the tumor segmentations on the two modalities. To fully exploit the characteristics of PET and CT images, a novel energy representation is devised: for PET, a downhill cost and a 3D derivative cost are proposed; for CT, a shape penalty cost is integrated into the region and boundary terms to help constrain the tumor location during segmentation. We validate our algorithm on a dataset of 18 PET-CT images. The experimental results indicate that the proposed method is superior to graph cut using PET or CT alone, and is more accurate than the random walk method, the random walk co-segmentation method, and the graph cut method without the proposed improvements.
Index Terms: Image segmentation, interactive segmentation, graph cut, random walks, prior information, lung tumor, positron emission tomography (PET), computed tomography (CT).
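The graph construction described above can be sketched with the PyMaxflow library: two sub-graphs (one per modality) joined by inter-modality links carrying the context term, solved with max-flow/min-cut. The unary costs below are simple intensity-based placeholders rather than the paper's downhill, 3D derivative, and shape penalty costs, and a 1D neighborhood stands in for the full 3D grid.

```python
# Minimal co-segmentation sketch with PyMaxflow (pip install PyMaxflow).
# Unary costs are intensity-based placeholders, not the paper's costs.
import numpy as np
import maxflow

def cosegment(pet, ct, lam=1.0, mu=0.5, eps=1e-6):
    """pet, ct: flattened, co-registered intensity arrays in [0, 1]."""
    n = pet.size
    g = maxflow.Graph[float]()
    pet_nodes = g.add_nodes(n)   # sub-graph for PET
    ct_nodes = g.add_nodes(n)    # sub-graph for CT

    for i in range(n):
        # Region (t-link) terms as -log likelihoods: a bright PET voxel
        # makes the tumor (sink) label cheap (illustrative model only).
        g.add_tedge(pet_nodes[i], -np.log(pet[i] + eps),
                    -np.log(1 - pet[i] + eps))
        g.add_tedge(ct_nodes[i], -np.log(ct[i] + eps),
                    -np.log(1 - ct[i] + eps))
        # Context term: the special inter-modality link penalizing
        # disagreement between the PET and CT segmentations.
        g.add_edge(pet_nodes[i], ct_nodes[i], mu, mu)

    # Boundary (n-link) terms within each modality (1D neighbors here;
    # a real implementation would use the 3D grid neighborhood).
    for i in range(n - 1):
        g.add_edge(pet_nodes[i], pet_nodes[i + 1], lam, lam)
        g.add_edge(ct_nodes[i], ct_nodes[i + 1], lam, lam)

    g.maxflow()
    # get_segment == 1 (sink side) corresponds to the tumor label here.
    return np.array([g.get_segment(v) for v in pet_nodes])
```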
Automated retinal layer segmentation of optical coherence tomography (OCT) images has been successful for normal eyes but becomes challenging for eyes with retinal diseases when the retinal morphology undergoes critical changes. We propose a method to automatically segment the retinal layers in 3-D OCT data with serous retinal pigment epithelial detachment (PED), a prominent feature of many chorioretinal disease processes. The proposed framework consists of the following steps: fast denoising and B-scan alignment, multi-resolution graph-search-based surface detection, PED region detection, and surface correction above the PED region. The proposed technique was evaluated on a dataset of OCT images from 20 subjects diagnosed with PED. The experimental results showed the following. 1) The overall mean unsigned border positioning error for layer segmentation is 7.87 ± 3.36 μm, comparable to the mean inter-observer variability (7.81 ± 2.56 μm). 2) The true positive volume fraction (TPVF), false positive volume fraction (FPVF), and positive predictive value (PPV) for PED volume segmentation are 87.1%, 0.37%, and 81.2%, respectively. 3) The average running time is 220 s for OCT data of 512 × 64 × 480 voxels.
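The surface-detection step can be illustrated by a much-simplified, single-B-scan analogue of graph search: dynamic programming that finds a minimum-cost path across columns under a smoothness constraint. The actual method is a multi-resolution 3-D graph search; this NumPy sketch only conveys the core idea, and the gradient cost and `max_jump` constraint are illustrative.

```python
# Simplified single-B-scan analogue of graph-search surface detection:
# dynamic programming over columns with a smoothness constraint.
import numpy as np

def detect_surface(bscan, max_jump=2):
    """bscan: 2-D array (rows = depth, cols = A-scans).
    Returns one row index per column (the detected surface)."""
    grad = -np.diff(bscan.astype(float), axis=0)  # dark-to-bright edges
    cost = grad - grad.min()                      # non-negative costs
    rows, cols = cost.shape
    acc = cost.copy()
    back = np.zeros_like(acc, dtype=int)

    for c in range(1, cols):
        for r in range(rows):
            # Only allow small row jumps between adjacent columns.
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k

    # Trace back the minimum-cost path across columns.
    surface = np.empty(cols, dtype=int)
    surface[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        surface[c - 1] = back[surface[c], c]
    return surface
```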
Optical coherence tomography (OCT) provides not only morphological information but also layer-specific optical intensities, which may reflect the underlying tissue properties. The purpose of this study was to quantitatively investigate the optical intensity of each retinal layer in central retinal artery occlusion (CRAO). Twenty-nine CRAO cases in the acute phase and 33 normal controls were included. Macula-centered 3D OCT images were segmented into 10 layers with the fully automated Iowa Reference Algorithm. Layer-specific mean intensities were determined and compared between the patient and control groups using multiple regression analysis, adjusting for age and the optical intensity of the entire region. Optical intensities were higher in CRAO than in controls in the layers spanning from the retinal ganglion cell layer to the outer plexiform layer (standardized beta = 0.657 to 0.777, all p < 0.001), possibly due to ischemia, and lower in the photoreceptor, retinal pigment epithelium (RPE), and choroid layers (standardized beta = −0.412 to −0.611, all p < 0.01), possibly due to shadowing effects. Among the intraretinal layers, the inner nuclear layer was identified as the best indicator of CRAO. Our study provides in vivo information on the optical intensity changes of each retinal layer in CRAO patients.
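A minimal sketch of the per-layer statistical analysis, assuming statsmodels and hypothetical column names ('group', 'age', 'overall_intensity'): each layer's mean intensity is z-scored and regressed on group membership while adjusting for age and the intensity of the entire region, so the group coefficient plays the role of the reported standardized beta.

```python
# Sketch of the per-layer regression described above; column names are
# hypothetical stand-ins. Z-scoring all variables makes the group
# coefficient a standardized beta.
import statsmodels.api as sm
from scipy.stats import zscore

def layer_group_effect(df, layer_col):
    """df columns (assumed): layer_col, 'group' (1 = CRAO, 0 = control),
    'age', 'overall_intensity'. Returns (standardized beta, p-value)."""
    cols = [layer_col, 'group', 'age', 'overall_intensity']
    data = df[cols].apply(zscore)
    X = sm.add_constant(data[['group', 'age', 'overall_intensity']])
    fit = sm.OLS(data[layer_col], X).fit()
    return fit.params['group'], fit.pvalues['group']
```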
Optical coherence tomography (OCT) is becoming one of the most important modalities for the noninvasive assessment of retinal eye diseases. As the number of acquired OCT volumes increases, automating OCT image analysis is becoming increasingly relevant. In this paper, we propose a surrogate-assisted classification method, based on convolutional neural networks (CNNs), to classify retinal OCT images automatically. Image denoising is first performed to reduce noise. Thresholding and morphological dilation are then applied to extract masks. The denoised images and the masks are used to generate a large number of surrogate images, which are used to train the CNN model. Finally, the prediction for a test image is determined by averaging the outputs of the trained CNN model on its surrogate images. The proposed method was evaluated on different databases. The results (AUC of 0.9783 on the local database and 0.9856 on the Duke database) show that the proposed method is a very promising tool for automatically classifying retinal OCT images.
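Two of the steps above can be sketched in Python: mask extraction by thresholding and morphological dilation (scikit-image), and test-time averaging of CNN outputs over surrogate images (PyTorch). The exact surrogate-generation procedure is not specified here, so the prediction function simply takes pre-built surrogate tensors; the Otsu threshold and dilation radius are illustrative choices.

```python
# Sketch of mask extraction and surrogate-averaged prediction.
# Threshold/dilation parameters are illustrative, and surrogate
# generation itself is assumed to happen elsewhere.
import torch
from skimage.filters import threshold_otsu
from skimage.morphology import binary_dilation, disk

def extract_mask(denoised):
    """Threshold the denoised B-scan, then dilate to cover the retina."""
    mask = denoised > threshold_otsu(denoised)
    return binary_dilation(mask, disk(5))

def predict(model, surrogates):
    """Average the CNN's softmax outputs over the surrogate images
    (each surrogate: a C x H x W tensor)."""
    model.eval()
    with torch.no_grad():
        probs = [torch.softmax(model(s.unsqueeze(0)), dim=1)
                 for s in surrogates]
    return torch.stack(probs).mean(dim=0)
```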
Pigment epithelial detachment (PED) is an important clinical manifestation of multiple chorioretinal diseases and can cause loss of central vision. In this paper, an automated framework is proposed to segment serous PED in SD-OCT images. The framework consists of four main steps: first, a multi-scale graph search method segments the abnormal retinal layers; second, an AdaBoost method refines the initially segmented regions based on 62 extracted features; third, a shape-constrained graph cut method segments the serous PED, with the foreground and background seeds obtained automatically; finally, an adaptive structuring-element-based morphology method removes false-positive regions. The proposed framework was tested on 25 SD-OCT volumes from 25 patients diagnosed with serous PED. The average true positive volume fraction (TPVF), false positive volume fraction (FPVF), Dice similarity coefficient (DSC), and positive predictive value (PPV) are 90.08%, 0.22%, 91.20%, and 92.62%, respectively. The framework can provide clinicians with accurate quantitative information, including the shape, size, and position of the PED region, which can assist clinical diagnosis and treatment.
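The AdaBoost refinement step might look like the following scikit-learn sketch, assuming the 62 region features have already been extracted into arrays (the feature definitions are not reproduced here) and that training regions are labeled as true PED versus false positive.

```python
# Sketch of the AdaBoost region-refinement step; features are assumed
# to be pre-extracted (62 per region in the paper), and hyperparameters
# are illustrative.
from sklearn.ensemble import AdaBoostClassifier

def refine_regions(train_features, train_labels, candidate_features):
    """train_features: (n_regions, n_features) array;
    train_labels: 1 = true PED region, 0 = false positive.
    Returns a boolean keep/discard decision per candidate region."""
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    clf.fit(train_features, train_labels)
    return clf.predict(candidate_features).astype(bool)
```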