An algorithm for delineating complex head and neck cancers in positron emission tomography (PET) images is presented in this article. An enhanced random walk (RW) algorithm with automatic seed detection is proposed, making the segmentation process feasible for inhomogeneous lesions with bifurcations. In addition, an adaptive probability threshold and a k-means based clustering technique are integrated into the proposed enhanced RW algorithm. The new threshold follows the intensity changes between adjacent slices along the whole cancer volume, yielding an operator-independent algorithm. Validation experiments were first conducted on phantom studies: high Dice similarity coefficients, high true positive volume fractions, and low Hausdorff distances confirm the accuracy of the proposed method. Subsequently, forty head and neck lesions were segmented to evaluate the clinical feasibility of the proposed approach against the most common segmentation algorithms. Experimental results show that the proposed algorithm is more accurate and robust than the most common algorithms in the literature. Finally, the proposed method also achieves real-time performance, meeting physicians' requirements in a radiotherapy environment.
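The abstract describes a k-means based clustering step that supplies automatic seeds for the random walk. A minimal sketch of that idea is given below; the function names (`kmeans_1d`, `detect_seeds`) and the choice of k = 2 clusters are illustrative assumptions, not the authors' implementation — the sketch only shows how high-uptake voxels could be clustered and marked as foreground seeds.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    # Tiny 1-D k-means on voxel intensities (illustrative only).
    # Centers are initialized evenly across the intensity range so the
    # result is deterministic for this sketch.
    centers = np.linspace(values.min(), values.max(), k).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            members = values[labels == j]
            if members.size:
                centers[j] = members.mean()
    return labels, centers

def detect_seeds(pet_slice, k=2):
    # Mark voxels belonging to the highest-intensity cluster as
    # foreground seeds for a subsequent random-walk segmentation.
    flat = pet_slice.ravel().astype(float)
    labels, centers = kmeans_1d(flat, k=k)
    return (labels == int(np.argmax(centers))).reshape(pet_slice.shape)
```

In practice the seeds produced this way would be passed to an RW solver (e.g. a graph-Laplacian formulation) along with background seeds from the lowest-intensity cluster.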
Background: Prostate volume, as determined by magnetic resonance imaging (MRI), is a useful biomarker for distinguishing between benign and malignant pathology and can be used either alone or combined with other parameters such as prostate-specific antigen. Purpose: This study compared different deep learning methods for whole-gland and zonal prostate segmentation. Study Type: Retrospective. Population: A total of 204 patients (train/test = 99/105) from the PROSTATEx public dataset. Field Strength/Sequence: 3 T, TSE T2-weighted. Assessment: Four operators performed manual segmentation of the whole gland, central zone + anterior stroma + transition zone (TZ), and peripheral zone (PZ). U-net, efficient neural network (ENet), and efficient residual factorized ConvNet (ERFNet) were trained and tuned on the training data through 5-fold cross-validation to segment the whole gland and TZ separately, while PZ automated masks were obtained by subtracting the TZ mask from the whole-gland mask. Statistical Tests: Networks were evaluated on the test set using various accuracy metrics, including the Dice similarity coefficient (DSC). Model DSC was compared in both the training and test sets using the analysis of variance test (ANOVA) and post hoc tests. Parameter number, disk size, training, and inference times determined network computational complexity and were also used to assess the model performance differences. P < 0.05 was considered statistically significant. Results: The best DSC (P < 0.05) in the test set was achieved by ENet: 91% ± 4% for the whole gland, 87% ± 5% for the TZ, and 71% ± 8% for the PZ. U-net and ERFNet obtained, respectively, 88% ± 6% and 87% ± 6% for the whole gland, 86% ± 7% and 84% ± 7% for the TZ, and 70% ± 8% and 65% ± 8% for the PZ. Training and inference time were lowest for ENet. Data Conclusion: Deep learning networks can accurately segment the prostate using T2-weighted images. Evidence Level: 4 Technical Efficacy: Stage 2
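The abstract's evaluation rests on two simple operations: the Dice similarity coefficient between binary masks, and deriving the PZ mask by subtracting the TZ from the whole gland. A minimal numpy sketch of both is shown below; the function names (`dice`, `pz_from_subtraction`) are illustrative assumptions, not code from the study.

```python
import numpy as np

def dice(pred, truth):
    # Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|)
    # for two binary segmentation masks.
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def pz_from_subtraction(whole_gland, tz):
    # Peripheral-zone mask obtained as whole gland minus TZ,
    # mirroring the subtraction described in the abstract.
    return np.logical_and(whole_gland.astype(bool),
                          np.logical_not(tz.astype(bool)))
```

Note that the subtraction approach means any whole-gland or TZ error propagates into the PZ mask, which is consistent with the lower PZ DSC values reported.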