Purpose: Current clinical application of cone-beam CT (CBCT) is limited to patient setup. Imaging artifacts and Hounsfield unit (HU) inaccuracy make CBCT-based adaptive planning presently impractical. In this study, we developed a deep-learning-based approach to improve CBCT image quality and HU accuracy for potential extended clinical use in CBCT-guided pancreatic adaptive radiotherapy. Methods: Thirty patients previously treated with pancreas stereotactic body radiation therapy (SBRT) were included. The CBCT acquired prior to the first fraction of treatment was registered to the planning CT for training and generation of synthetic CT (sCT). A self-attention cycle generative adversarial network (cycleGAN) was used to generate the CBCT-based sCT. For the cohort of 30 patients, the CT-based contours and treatment plans were transferred to the first-fraction CBCTs and sCTs for dosimetric comparison. Results: In the abdomen, the mean absolute error (MAE) between CT and sCT was 56.89 ± 13.84 HU, compared with 81.06 ± 15.86 HU between CT and the raw CBCT. No significant differences (P > 0.05) were observed in the PTV and OAR dose-volume histogram (DVH) metrics between the CT- and sCT-based plans, whereas significant differences (P < 0.05) were found between the CT- and CBCT-based plans. Conclusions: The image similarity and dosimetric agreement between the CT- and sCT-based plans validate the dose calculation accuracy of the sCT. The CBCT-based sCT approach can potentially increase treatment precision and thus minimize gastrointestinal toxicity.
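The cycleGAN training described above hinges on a cycle-consistency term, and the reported image comparison uses mean absolute error in HU. A minimal sketch of both, with toy stand-in generators `G` and `F` (the paper's self-attention networks are not reproduced; all names and values here are illustrative assumptions):

```python
# Hedged sketch: the cycle-consistency loss at the heart of cycleGAN training,
# plus the HU mean-absolute-error (MAE) metric used to compare sCT against CT.

def l1_mean(a, b):
    """Mean absolute difference between two equal-length voxel lists (MAE in HU)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(G, F, cbct_voxels, ct_voxels, lam=10.0):
    """L_cyc = lam * (||F(G(cbct)) - cbct||_1 + ||G(F(ct)) - ct||_1).

    G maps CBCT -> sCT and F maps CT -> CBCT; translating through both
    should recover the original image, which this loss enforces.
    """
    recon_cbct = F(G(cbct_voxels))
    recon_ct = G(F(ct_voxels))
    return lam * (l1_mean(recon_cbct, cbct_voxels) + l1_mean(recon_ct, ct_voxels))

# Toy generators: identity plus a fixed HU shift (stand-ins for the CNNs).
G = lambda v: [x + 5.0 for x in v]   # CBCT -> sCT direction
F = lambda v: [x - 5.0 for x in v]   # CT -> CBCT direction

cbct = [100.0, -50.0, 30.0]
ct = [95.0, -45.0, 25.0]
print(cycle_consistency_loss(G, F, cbct, ct))  # exact inverses -> 0.0
```

Because the toy generators are exact inverses, the cycle loss is zero; real networks only approximate this, and the residual drives training.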
Purpose: Reliable automated segmentation of the prostate is indispensable for image-guided prostate interventions. However, the segmentation task is challenging due to inhomogeneous intensity distributions and variation in prostate anatomy, among other problems. Manual segmentation can be time-consuming and is subject to inter- and intraobserver variation. We developed an automated deep learning-based method to address this technical challenge. Methods: We propose a three-dimensional (3D) fully convolutional network (FCN) with deep supervision and group dilated convolution to segment the prostate on magnetic resonance imaging (MRI). In this method, a deeply supervised mechanism was introduced into the 3D FCN to effectively alleviate the common exploding or vanishing gradient problems in training deep models, forcing the updates of the hidden-layer filters to favor highly discriminative features. A group dilated convolution, which aggregates multiscale contextual information for dense prediction, was proposed to enlarge the effective receptive field of the network and thereby improve prediction accuracy at the prostate boundary. In addition, we introduced a combined loss function including cosine and cross-entropy terms, which measures similarity and dissimilarity between segmented and manual contours, to further improve segmentation accuracy. Prostate volumes manually segmented by experienced physicians were used as a gold standard against which our segmentation accuracy was measured. Results: The proposed method was evaluated on an internal dataset comprising 40 T2-weighted prostate MR volumes. Our method achieved a Dice similarity coefficient (DSC) of 0.86 ± 0.04, a mean surface distance (MSD) of 1.79 ± 0.46 mm, a 95% Hausdorff distance (95% HD) of 7.98 ± 2.91 mm, and an absolute relative volume difference (aRVD) of 15.65 ± 10.82. A public dataset (PROMISE12) including 50 T2-weighted prostate MR volumes was also employed to evaluate our approach.
Our method yielded a DSC of 0.88 ± 0.05, an MSD of 1.02 ± 0.35 mm, a 95% HD of 9.50 ± 5.11 mm, and an aRVD of 8.93 ± 7.56. Conclusion: We developed a novel deeply supervised deep learning-based approach with group dilated convolution to automatically segment the prostate on MRI, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for image-guided interventions in prostate cancer.
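The combined loss described in the Methods pairs a cross-entropy term (dissimilarity) with a cosine term (similarity). A minimal 1-D sketch under the assumption of flattened probability maps and an equal 0.5/0.5 weighting (the paper's exact weighting and implementation are not specified here):

```python
import math

def cross_entropy(p, g, eps=1e-7):
    """Voxel-wise binary cross-entropy between prediction p and ground truth g."""
    return -sum(gi * math.log(pi + eps) + (1 - gi) * math.log(1 - pi + eps)
                for pi, gi in zip(p, g)) / len(p)

def cosine_dissimilarity(p, g, eps=1e-7):
    """1 - cosine similarity; small when segmented and manual maps align."""
    dot = sum(pi * gi for pi, gi in zip(p, g))
    norm_p = math.sqrt(sum(pi * pi for pi in p))
    norm_g = math.sqrt(sum(gi * gi for gi in g))
    return 1.0 - dot / (norm_p * norm_g + eps)

def combined_loss(p, g, w=0.5):
    """Weighted sum of the cross-entropy and cosine terms (w is an assumption)."""
    return w * cross_entropy(p, g) + (1 - w) * cosine_dissimilarity(p, g)
```

A perfect prediction drives both terms to (nearly) zero, while any deviation from the manual contour raises the loss.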
BACKGROUND As systemic therapy has improved for locally advanced pancreatic cancer (LAPC), efforts to improve local control with optimal radiotherapy may be critical. Although conventionally fractionated radiation therapy (CFRT) has more recently shown a limited role in LAPC, stereotactic body radiation therapy (SBRT) is an emerging approach with promising results. With no studies to date comparing SBRT with CFRT for LAPC, this study used the National Cancer Data Base (NCDB) to evaluate these 2 modalities. METHODS With the NCDB, patients with American Joint Committee on Cancer cT2-4/N0-1/M0 adenocarcinoma of the pancreas diagnosed from 2004 to 2013 were analyzed. Radiation therapy delivered at ≤2 Gy per fraction was deemed CFRT, and radiation therapy delivered at ≥4 Gy per fraction was considered SBRT. Kaplan-Meier analysis, log-rank testing, and multivariate Cox proportional hazards regression were performed with overall survival (OS) as the primary outcome. Propensity score matching was used. RESULTS Among 8450 patients, 7819 (92.5%) were treated with CFRT, and 631 (7.5%) underwent SBRT. Receipt of SBRT was associated with superior OS in the multivariate analysis (hazard ratio, 0.84; 95% confidence interval, 0.75–0.93; P<.001). With propensity score matching, 988 patients in all were matched, with 494 patients in each cohort. Within the propensity-matched cohorts, the median OS (13.9 vs 11.6 months) and the 2-year OS rate (21.7% vs 16.5%) were significantly higher with SBRT versus CFRT (P=.0014). CONCLUSIONS In this retrospective review using a large national database, SBRT was associated with superior OS in comparison with CFRT for LAPC, and these findings remained significant in a propensity-matched analysis. Further prospective studies investigating these hypothesis-generating results are warranted.
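The Methods above define the two cohorts by dose per fraction and then balance them by propensity score. A minimal sketch of both steps, assuming a simple greedy 1:1 nearest-neighbor matcher with a caliper (the study does not specify its matching algorithm, and the caliper value here is illustrative):

```python
def classify_modality(gy_per_fraction):
    """Dose-per-fraction rule from the study: <=2 Gy -> CFRT, >=4 Gy -> SBRT.
    Intermediate fractionation falls into neither cohort."""
    if gy_per_fraction <= 2.0:
        return "CFRT"
    if gy_per_fraction >= 4.0:
        return "SBRT"
    return None

def greedy_match(sbrt_scores, cfrt_scores, caliper=0.05):
    """1:1 greedy nearest-neighbor propensity matching (a simplification).

    Each SBRT patient is paired with the closest still-unmatched CFRT patient,
    provided the score difference is within the caliper.
    """
    available = dict(enumerate(cfrt_scores))
    pairs = []
    for i, score in enumerate(sbrt_scores):
        if not available:
            break
        j, best = min(available.items(), key=lambda kv: abs(kv[1] - score))
        if abs(best - score) <= caliper:
            pairs.append((i, j))
            del available[j]
    return pairs
```

In the study this kind of pairing yields the 494-versus-494 matched cohorts on which the survival comparison was repeated.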
Purpose: Transrectal ultrasound (TRUS) is a versatile and real-time imaging modality that is commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method which integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation. Methods and materials: We developed a multidirectional deep-learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D supervision mechanism is integrated into the V-Net stages to deal with the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deep supervision training. During the segmentation stage, patches extracted from the newly acquired ultrasound image are fed to the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed using patch fusion and further refined through contour refinement processing. Results: Forty-four patients' TRUS images were used to test our segmentation method. Our segmentation results were compared with the manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 0.92 ± 0.03, 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively.
Conclusion: We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the TRUS prostate, demonstrated its clinical feasibility, and validated its accuracy compared to manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
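The stage-wise hybrid loss described in this abstract combines BCE with a soft Dice term and sums it over every supervised V-Net stage. A minimal flattened-map sketch, assuming an equal 0.5/0.5 weighting (the paper's stage weights are not specified here):

```python
import math

def bce(p, g, eps=1e-7):
    """Binary cross-entropy averaged over voxels."""
    return -sum(gi * math.log(pi + eps) + (1 - gi) * math.log(1 - pi + eps)
                for pi, gi in zip(p, g)) / len(p)

def dice_loss(p, g, smooth=1.0):
    """Soft Dice loss on a flattened probability map; 0 for a perfect match."""
    inter = sum(pi * gi for pi, gi in zip(p, g))
    return 1.0 - (2.0 * inter + smooth) / (sum(p) + sum(g) + smooth)

def deep_supervision_loss(stage_outputs, g, bce_w=0.5):
    """Sum the hybrid BCE + Dice loss over every supervised stage's output,
    so gradients reach the hidden stages directly."""
    return sum(bce_w * bce(p, g) + (1 - bce_w) * dice_loss(p, g)
               for p in stage_outputs)
```

Supervising intermediate stages this way is what lets a deep V-Net train stably on a small dataset, since early layers receive a direct error signal instead of one diluted through many layers.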
Purpose-To develop an automated cone-beam computed tomography (CBCT) multi-organ segmentation method for a potential CBCT-guided adaptive radiation therapy workflow. Methods and materials-The proposed method combines a deep learning-based image synthesis method, which generates magnetic resonance images (MRIs) with superior soft-tissue contrast from on-board setup CBCT images to aid CBCT segmentation, with a deep attention strategy, which focuses on learning discriminative features for differentiating organ margins. The whole segmentation method consists of 3 major steps. First, a cycle-consistent adversarial network (CycleGAN) was used to estimate a synthetic MRI (sMRI) from CBCT images. Second, a deep attention network was trained based on the sMRI and its corresponding manual contours. Third, the segmented contours for a query patient were obtained by feeding the patient's CBCT images into the trained sMRI estimation and segmentation models. In our retrospective study, we included 100 prostate cancer patients, each of whom had a CBCT acquired with prostate, bladder, and rectum contoured by physicians with MRI guidance as ground truth. We trained and tested our model with separate datasets among these patients. The resulting segmentations were compared with the physicians' manual contours. Results-The Dice similarity coefficient and mean surface distance between our segmented and the physicians' manual contours were 0.95 ± 0.02 and 0.44 ± 0.22 mm for the bladder, 0.86 ± 0.06 and 0.73 ± 0.37 mm for the prostate, and 0.91 ± 0.04 and 0.72 ± 0.65 mm for the rectum, respectively. Conclusion-We have proposed a novel CBCT-only pelvic multi-organ segmentation strategy using CBCT-based sMRI and validated its accuracy against manual contours. This technique could provide accurate organ volumes for treatment planning without requiring MR image acquisition, greatly facilitating the routine clinical workflow.
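The three-step inference described above reduces, at query time, to chaining the two trained models, and the reported accuracy metric is the Dice similarity coefficient over binary masks. A minimal sketch with placeholder models (the names and toy models are assumptions, not the trained networks):

```python
def segment_from_cbct(cbct_volume, smri_model, seg_model):
    """Query-time pipeline mirroring the abstract: CBCT -> sMRI -> contours.
    Both models are placeholders standing in for the trained networks."""
    return seg_model(smri_model(cbct_volume))

def dice_similarity(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    collections of voxel indices: 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy demonstration: a contrast "synthesis" step followed by thresholding.
smri = lambda v: [x + 1 for x in v]      # stand-in for the CycleGAN
seg = lambda v: [x > 0 for x in v]       # stand-in for the attention network
print(segment_from_cbct([0, -2], smri, seg))  # -> [True, False]
```

Keeping synthesis and segmentation as separate stages means the segmentation network always sees MRI-like contrast, which is the stated rationale for the sMRI detour.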
Thoracic radiation with protons is associated with better survival in this retrospective analysis; further validation in the randomized setting is needed to account for any imbalances in patient characteristics, including positron emission tomography-computed tomography staging.
Purpose Accurate segmentation of the prostate on computed tomography (CT) for treatment planning is challenging due to CT's poor soft tissue contrast. Magnetic resonance imaging (MRI) has been used to aid prostate delineation, but its final accuracy is limited by MRI-CT registration errors. We developed a deep attention-based segmentation strategy on CT-based synthetic MRI (sMRI) to deal with the CT prostate delineation challenge without MRI acquisition. Methods and materials We developed a prostate segmentation strategy which employs an sMRI-aided deep attention network to accurately segment the prostate on CT. Our method consists of three major steps. First, a cycle generative adversarial network was used to estimate an sMRI from CT images. Second, a deep attention fully convolutional network was trained based on the sMRI and the prostate contours deformed from MRIs. Attention models were introduced to pay more attention to the prostate boundary. The prostate contour for a query patient was obtained by feeding the patient's CT images into the trained sMRI generation and segmentation models. Results The segmentation technique was validated with a clinical study of 49 patients by leave-one-out experiments and validated with an additional 50 patients by hold-out test. The Dice similarity coefficient, Hausdorff distance, and mean surface distance indices between our segmented contours and the deformed MRI-defined manual contours were 0.92 ± 0.09, 4.38 ± 4.66 mm, and 0.62 ± 0.89 mm, respectively, in the leave-one-out experiments, and 0.91 ± 0.07, 4.57 ± 3.03 mm, and 0.62 ± 0.65 mm, respectively, in the hold-out test. Conclusions We have proposed a novel CT-only prostate segmentation strategy using CT-based sMRI, and validated its accuracy against the prostate contours that were manually drawn on MRI and deformed to CT images.
This technique could provide accurate prostate volume for treatment planning without requiring MRI acquisition, greatly facilitating the routine clinical workflow.
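Two mechanisms in this abstract lend themselves to short sketches: the attention weighting that emphasizes boundary-relevant features, and the leave-one-out protocol used on the 49-patient cohort. Both are simplified 1-D illustrations under assumed forms (additive sigmoid attention; the paper's exact gate architecture is not reproduced):

```python
import math

def attention_gate(feature, gating):
    """Simplified additive attention: alpha = sigmoid(feature + gating)
    rescales each feature response, emphasizing boundary-relevant activations."""
    alphas = [1.0 / (1.0 + math.exp(-(f + g))) for f, g in zip(feature, gating)]
    return [f * a for f, a in zip(feature, alphas)]

def leave_one_out(patient_ids):
    """Yield (training set, held-out patient) splits: each patient is tested
    once on a model trained on all the others."""
    for i, held_out in enumerate(patient_ids):
        yield patient_ids[:i] + patient_ids[i + 1:], held_out
```

With 49 patients, leave-one-out produces 49 train/test splits, so every patient contributes an independent test case despite the small cohort.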