Purpose: Accurate segmentation of the prostate on computed tomography (CT) for treatment planning is challenging because of CT's poor soft-tissue contrast. Magnetic resonance imaging (MRI) has been used to aid prostate delineation, but its final accuracy is limited by MRI–CT registration errors. We developed a deep attention-based segmentation strategy on CT-based synthetic MRI (sMRI) to address the CT prostate delineation challenge without requiring MRI acquisition.

Methods and materials: Our prostate segmentation strategy employs an sMRI-aided deep attention network to segment the prostate accurately on CT, and consists of three major steps. First, a cycle-consistent generative adversarial network (CycleGAN) was used to estimate an sMRI from CT images. Second, a deep attention fully convolutional network was trained on sMRI and the prostate contours deformed from MRIs; attention models were introduced to focus on the prostate boundary. Third, the prostate contour for a query patient was obtained by feeding the patient's CT images into the trained sMRI generation model and segmentation model.

Results: The segmentation technique was validated in a clinical study of 49 patients with leave-one-out experiments and with an additional 50 patients in a hold-out test. The Dice similarity coefficient, Hausdorff distance, and mean surface distance between our segmented contours and the deformed MRI-defined manual prostate contours were 0.92 ± 0.09, 4.38 ± 4.66 mm, and 0.62 ± 0.89 mm, respectively, in the leave-one-out experiments, and 0.91 ± 0.07, 4.57 ± 3.03 mm, and 0.62 ± 0.65 mm, respectively, in the hold-out test.

Conclusions: We have proposed a novel CT-only prostate segmentation strategy using CT-based sMRI, and validated its accuracy against prostate contours that were manually drawn on MRI images and deformed to CT images. This technique could provide accurate prostate volumes for treatment planning without requiring MRI acquisition, greatly facilitating the routine clinical workflow.
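The three evaluation metrics reported above can be computed directly from binary masks. The sketch below is a minimal pure-Python illustration on small 2D masks (the study evaluates 3D volumes, and the 4-neighbour boundary definition here is an assumption, not taken from the paper):

```python
import math

def surface(mask):
    """Boundary pixels: foreground pixels with at least one
    4-neighbour that is background or outside the mask."""
    h, w = len(mask), len(mask[0])
    pts = []
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < h and 0 <= nj < w) or not mask[ni][nj]:
                    pts.append((i, j))
                    break
    return pts

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(x and y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return 2.0 * inter / (sum(map(sum, a)) + sum(map(sum, b)))

def hausdorff_and_msd(a, b):
    """Symmetric Hausdorff distance (max of nearest-surface distances)
    and mean surface distance (their average) between two masks."""
    sa, sb = surface(a), surface(b)
    def nn(src, dst):
        return [min(math.dist(p, q) for q in dst) for p in src]
    d_ab, d_ba = nn(sa, sb), nn(sb, sa)
    hd = max(max(d_ab), max(d_ba))
    msd = (sum(d_ab) + sum(d_ba)) / (len(d_ab) + len(d_ba))
    return hd, msd
```

Identical masks yield DSC = 1 and zero distances; in practice a library implementation (e.g. SciPy's `directed_hausdorff` on surface point sets) would replace the brute-force nearest-neighbour search.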
Purpose: Segmentation of organs-at-risk (OARs) is a weak link in the radiotherapy treatment planning process because manual contouring is labor-intensive and time-consuming. Our goal was to develop a synthetic MR (sMR)-aided dual pyramid network (DPN) for rapid and accurate head-and-neck multi-organ segmentation in order to expedite the treatment planning process.

Methods: CT, MR, and manual-contour pairs from forty-five patients were included as our training dataset, with nineteen OARs as the target organs to be segmented. The proposed sMR-aided DPN features a deep attention strategy to segment multiple organs effectively. Its performance was evaluated using five metrics: Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and volume difference. The method was further validated using the 2015 head-and-neck challenge data.

Results: The contours generated by the proposed method closely resemble the ground-truth manual contours, as evidenced by encouraging quantitative DSC results on the 2015 head-and-neck challenge data. Mean DSC values of 0.
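Unlike the plain Hausdorff distance, the HD95 metric used here takes a percentile of the surface distances to suppress outliers. A minimal sketch, assuming boundary point sets have already been extracted and using the nearest-rank percentile convention (the exact convention used in the study is an assumption):

```python
import math

def hd95(surf_a, surf_b, pct=95.0):
    """HD95 sketch: the pct-th percentile (nearest-rank) of all
    symmetric nearest-surface distances between two boundary
    point sets; more outlier-robust than the maximum (HD)."""
    def nn(src, dst):
        return [min(math.dist(p, q) for q in dst) for p in src]
    d = sorted(nn(surf_a, surf_b) + nn(surf_b, surf_a))
    k = max(0, math.ceil(pct / 100.0 * len(d)) - 1)  # nearest-rank index
    return d[k]
```

For identical point sets the result is 0; taking `pct=100` recovers the ordinary symmetric Hausdorff distance.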
Purpose: Segmentation of organs-at-risk (OARs) is a weak link in the radiotherapy treatment planning process because manual contouring is labor-intensive and time-consuming. This work aimed to develop a deep learning-based method for rapid and accurate pancreatic multi-organ segmentation that can expedite the treatment planning process.

Methods: We retrospectively investigated one hundred patients who underwent computed tomography (CT) simulation and had contours delineated. Eight OARs (large bowel, small bowel, duodenum, left kidney, right kidney, liver, spinal cord, and stomach) were the target organs to be segmented. The proposed three-dimensional (3D) deep attention U-Net features a deep attention strategy to differentiate multiple organs effectively. Performance was evaluated using six metrics: Dice similarity coefficient (DSC), sensitivity, specificity, 95th-percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMSD).

Results: The contours generated by the proposed method closely resemble the ground-truth manual contours, as evidenced by encouraging quantitative results in terms of DSC, sensitivity, specificity, HD95, MSD, and RMSD. For DSC, mean values of 0.
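A common form of the "deep attention strategy" in attention U-Nets is an additive attention gate that re-weights skip-connection features by a learned coefficient in [0, 1]. The sketch below is a per-position scalar version for illustration only; the weights `w_x`, `w_g`, `psi` are hypothetical scalars, not the authors' actual (convolutional, multi-channel) parameters:

```python
import math

def attention_gate(x, g, w_x, w_g, psi, b=0.0):
    """Additive attention gate sketch: for each position i, compute
    alpha_i = sigmoid(psi * relu(w_x*x_i + w_g*g_i + b)) and scale
    the skip feature x_i by alpha_i, suppressing irrelevant regions."""
    gated = []
    for xi, gi in zip(x, g):
        q = max(0.0, w_x * xi + w_g * gi + b)          # ReLU
        alpha = 1.0 / (1.0 + math.exp(-(psi * q)))     # sigmoid gate
        gated.append(alpha * xi)
    return gated
```

Positions where the skip features `x` and the coarser gating signal `g` agree receive coefficients near 1 and pass through; others are attenuated, which is what lets the network focus on organ boundaries.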
Electrochromic materials hold great promise for soft displays and devices, but ideal electrochromic textiles still face application challenges owing to the inconvenience of a continuous power supply. Here, electrochromic color-memory microcapsules (ECM-Ms-red, -yellow, and -blue) are developed with a low drive voltage (2.0 V), a high coloration efficiency (921.6 cm² C⁻¹), a practical response rate (34.4 s⁻¹), multistage response discoloration, good color-memory performance (>72 h), and good reversibility (≥1000 cycles). The color-memory performance is controlled by the energy difference of the oxidation–reduction reactions. A multicolor, multistage-response electrochromic color-memory wearable smart textile and flexible display are developed that are convenient and energy-efficient in application. This design philosophy of color memory based on a controllable energy difference of reactions has great potential for application in sensors and smart textiles.
Purpose: Stereotactic radiosurgery (SRS) is widely used to obliterate arteriovenous malformations (AVMs). Its performance relies on accurate delineation of the target AVM. Manual segmentation during a framed SRS procedure is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we propose a deep learning-based method to automatically segment AVMs on CT simulation image sets.

Methods: We developed a deeply supervised three-dimensional (3D) V-Net with a compound loss function. A 3D supervision mechanism was integrated into the residual V-Net to overcome the optimization difficulties of training deep networks with limited training data. The compound loss function, combining logistic and Dice losses, simultaneously encouraged similarity and penalized discrepancy between prediction and training data, and was used to supervise the 3D V-Net at different stages. To evaluate segmentation accuracy, we retrospectively investigated 80 AVM patients who had CT simulation and digital subtraction angiography (DSA) acquired prior to treatment. The AVM target volumes segmented by our method were compared with physician-approved clinical contours in terms of Dice overlap, differences in volume and centroid, and dose-coverage changes on the original plan.

Results: Contours created by the proposed method showed very good visual agreement with the ground-truth contours. The mean Dice similarity coefficient (DSC), sensitivity, and specificity of the contours delineated by our method were >0.85 across all patients. The mean centroid distance between our results and the ground truth was 0.675 ± 0.401 mm and was not significantly different in any of the three orthogonal directions. The correlation coefficient between the ground-truth and predicted AVM volumes was 0.992, with statistical significance. The mean volume difference among all patients was 0.076 ± 0.728 cc, with no statistically significant difference. The average differences in dose metrics were all less than 0.2 Gy, with standard deviations less than 1 Gy; no statistically significant differences were observed in any dose metric.

Conclusion: We developed a novel, deeply supervised, deep learning-based approach to automatically segment the AVM volume on CT images, and demonstrated its clinical feasibility by validating the shape, positional accuracy, and dose coverage of the automatic volumes. These results demonstrate the potential of a learning-based segmentation method for delineating AVMs in the clinical setting.
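The compound loss described above pairs a voxel-wise logistic (cross-entropy) term, which penalizes per-voxel discrepancy, with a Dice term, which rewards overall overlap. A minimal sketch on flattened probability maps, assuming equal weighting of the two terms (the authors' weighting scheme is not stated here):

```python
import math

def dice_loss(probs, labels, eps=1e-6):
    """Soft Dice loss: 1 - 2|P·Y| / (|P| + |Y|), smoothed by eps."""
    inter = sum(p * y for p, y in zip(probs, labels))
    return 1.0 - (2.0 * inter + eps) / (sum(probs) + sum(labels) + eps)

def logistic_loss(probs, labels, eps=1e-12):
    """Mean binary cross-entropy over voxels (eps guards log(0))."""
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for p, y in zip(probs, labels)) / len(probs)

def compound_loss(probs, labels, w=1.0):
    """Compound loss: cross-entropy plus (weighted) Dice loss."""
    return logistic_loss(probs, labels) + w * dice_loss(probs, labels)
```

Perfect predictions drive both terms toward zero; in deep supervision, a loss of this form is attached to the outputs of several decoder stages and the per-stage losses are summed.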