Fully automated volumetric segmentation of critical tumors may play a crucial role in diagnosis and surgical planning. One of the most challenging tumor segmentation tasks is localization of pancreatic ductal adenocarcinoma (PDAC), and conventional methods alone do not appear promising. Deep learning approaches have achieved great success in computer-aided diagnosis, especially in biomedical image segmentation. This paper introduces a convolutional neural network (CNN) framework for segmentation of the PDAC mass and surrounding vessels in CT images that also incorporates powerful classic features. First, a 3D CNN localizes the pancreas region in the whole CT volume using a 3D Local Binary Pattern (LBP) map of the original image. The PDAC mass is then segmented using a 2D attention U-Net and a Texture Attention U-Net (TAU-Net); TAU-Net is introduced by fusing dense Scale-Invariant Feature Transform (SIFT) and LBP descriptors into the attention U-Net. An ensemble 3D CNN then combines the advantages of both networks. In addition, to reduce the effect of imbalanced data, a multi-objective loss function is proposed as a weighted combination of three classic losses: Generalized Dice Loss (GDL), Weighted Pixel-wise Cross-Entropy (WPCE) loss and boundary loss. Because the sample size was insufficient for vessel segmentation, the above pre-trained networks were fine-tuned for that task. Experimental results show that the proposed method improves the Dice score for PDAC mass segmentation in the portal-venous phase by 7.52% compared to state-of-the-art methods. Moreover, three-dimensional visualization of the tumor and surrounding vessels can facilitate the evaluation of PDAC treatment response.
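The weighted combination of GDL, WPCE and boundary loss described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the weighting factors `lambdas`, the class weights and the boundary-loss formulation (signed-distance integration in the style of Kervadec et al.) are assumptions chosen for readability.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def generalized_dice_loss(pred, target, eps=1e-7):
    """pred, target: (C, H, W) softmax probabilities and one-hot labels.
    Class weights are inversely proportional to squared class volume."""
    w = 1.0 / (target.sum(axis=(1, 2)) ** 2 + eps)
    inter = (w * (pred * target).sum(axis=(1, 2))).sum()
    union = (w * (pred + target).sum(axis=(1, 2))).sum()
    return float(1.0 - 2.0 * inter / (union + eps))

def weighted_pixelwise_ce(pred, target, class_weights, eps=1e-7):
    """Cross-entropy with per-class weights to counter class imbalance."""
    w = np.asarray(class_weights)[:, None, None]
    return float(-(w * target * np.log(pred + eps)).mean())

def boundary_loss(pred, target):
    """Boundary term: integrate the foreground softmax output against the
    signed distance map of the ground-truth foreground (class 1)."""
    fg = target[1]
    dist = distance_transform_edt(1 - fg) - distance_transform_edt(fg)
    return float((dist * pred[1]).mean())

def combined_loss(pred, target, lambdas=(1.0, 1.0, 0.01),
                  class_weights=(1.0, 5.0)):
    # lambdas and class_weights are illustrative values, not from the paper
    l1, l2, l3 = lambdas
    return (l1 * generalized_dice_loss(pred, target)
            + l2 * weighted_pixelwise_ce(pred, target, class_weights)
            + l3 * boundary_loss(pred, target))
```

In practice these terms would be computed on GPU tensors inside the training loop; the sketch only shows how the three objectives are blended into one scalar.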
Quantifying the smoothness of different retinal layers can potentially be an important and practical biomarker in various pathologic conditions such as diabetic retinopathy. The purpose of this study is to develop an automated machine-learning algorithm that uses support vector regression with a wavelet kernel to automatically segment two hyperreflective retinal layers, the inner plexiform layer (IPL) and the outer plexiform layer (OPL), in 50 optical coherence tomography (OCT) slabs and to calculate the smoothness index (SI). Bland–Altman plots, mean absolute error, root mean square error and signed error calculations revealed only a modest discrepancy between the manual approach, used as the ground truth, and the corresponding automated segmentation of the IPL/OPL, as well as the SI measurements, in the OCT slabs. It was concluded that the constructed algorithm may be employed as a reliable, rapid and convenient approach for segmenting the IPL/OPL and calculating the SI in the appropriate layers.
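Support vector regression with a wavelet kernel, as used above, can be sketched with scikit-learn's callable-kernel interface. This is a hedged toy example, not the study's code: the Morlet-type wavelet kernel follows the form K(x, x') = Π h((xᵢ − x'ᵢ)/a) with h(u) = cos(1.75u)·exp(−u²/2), and the scale `a`, regularization `C` and the synthetic "boundary height" data are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def wavelet_kernel(X, Y, a=1.0):
    """Gram matrix for a Morlet-type wavelet kernel (Zhang et al. form)."""
    diff = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod(np.cos(1.75 * diff) * np.exp(-diff ** 2 / 2.0), axis=2)

# Toy regression: recover a smooth curve (stand-in for a layer boundary)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(40, 1))
y = np.sin(3.0 * X[:, 0])

model = SVR(kernel=wavelet_kernel, C=10.0)
model.fit(X, y)
pred = model.predict(X)
```

A smoothness index could then be derived from the fitted curve (e.g., from its local curvature), though the study's exact SI definition is not reproduced here.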
Given the capacity of Optical Coherence Tomography (OCT) imaging to display structural changes in a wide variety of eye diseases and neurological disorders, the need for OCT image segmentation and the corresponding data interpretation is now felt more than ever before. In this paper, we address this need by designing a semi-automatic software program for reliable segmentation of 8 different macular layers as well as outlining of retinal pathologies such as diabetic macular edema. The software implements a novel graph-based semi-automatic method, called “Livelayer”, designed for straightforward segmentation of retinal layers and fluids. This method is chiefly based on Dijkstra’s Shortest Path First (SPF) algorithm and the Live-wire function, together with some preprocessing operations on the to-be-segmented images. The software is suitable for obtaining detailed segmentation of layers, exact localization of clear or unclear fluid objects and ground-truth generation, demanding far less effort than a common manual segmentation method. It is also valuable as a tool for calculating the irregularity index in deformed OCT images. The time (in seconds) that Livelayer required for segmentation of the Inner Limiting Membrane (ILM), Inner Plexiform Layer–Inner Nuclear Layer (IPL–INL) and Outer Plexiform Layer–Outer Nuclear Layer (OPL–ONL) was much shorter than for manual segmentation: 5 s for the ILM (minimum) and 15.57 s for the OPL–ONL (maximum). The unsigned errors (in pixels) between the semi-automatically labeled and gold-standard data were on average 2.7, 1.9 and 2.1 for the ILM, IPL–INL and OPL–ONL, respectively. The Bland–Altman plots indicated perfect concordance between Livelayer and the manual algorithm, such that they could be used interchangeably. The repeatability error was around one pixel for the OPL–ONL and below one pixel for the other two.
The unsigned errors between Livelayer and the manual algorithm were 1.33 pixels for the ILM and 1.53 pixels for the Nerve Fiber Layer–Ganglion Cell Layer boundary in peripapillary B-scans. The Dice scores for comparing the two algorithms and for assessing repeatability on segmentation of fluid objects were at acceptable levels.
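The core graph idea behind a Dijkstra-based layer segmentation can be sketched as a minimum-cost left-to-right path through a cost image (low cost where a boundary is likely, e.g., an inverted vertical-gradient map). This is a minimal sketch of the general technique, not Livelayer's exact formulation: the cost definition, the move set (right, up-right, down-right) and the free start/end columns are illustrative assumptions.

```python
import heapq
import numpy as np

def boundary_by_dijkstra(cost):
    """Trace the minimum-cost path across a 2-D cost image, one row index
    per column. cost[r, c] should be low along the layer boundary."""
    rows, cols = cost.shape
    dist = np.full((rows, cols), np.inf)
    prev = np.full((rows, cols), -1, dtype=int)
    pq = []
    for r in range(rows):                      # free start anywhere on the left edge
        dist[r, 0] = cost[r, 0]
        heapq.heappush(pq, (dist[r, 0], r, 0))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c] or c == cols - 1:    # stale entry or right edge reached
            continue
        for dr in (-1, 0, 1):                  # step one column to the right
            nr = r + dr
            if 0 <= nr < rows:
                nd = d + cost[nr, c + 1]
                if nd < dist[nr, c + 1]:
                    dist[nr, c + 1] = nd
                    prev[nr, c + 1] = r
                    heapq.heappush(pq, (nd, nr, c + 1))
    r = int(np.argmin(dist[:, -1]))            # cheapest endpoint on the right edge
    path = [r]
    for c in range(cols - 1, 0, -1):           # backtrack through predecessors
        r = prev[r, c]
        path.append(int(r))
    return path[::-1]
```

In an interactive Live-wire setting, the same search would be re-run between user-placed seed points rather than across the full B-scan width.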