Objective: To generate high-quality synthetic CT (sCT) images from CBCT and planning CT (pCT) for dose calculation using deep learning methods. Methods: 169 nasopharyngeal carcinoma (NPC) patients with a total of 20,926 slices of CBCT and pCT images were included. In this study, the CycleGAN, Pix2pix, and U-Net models were used to generate the sCT images. The mean absolute error (MAE), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were used to quantify the accuracy of the proposed models in a testing cohort of 34 patients. Radiation doses were calculated on pCT and sCT following the same protocol. Dose distributions were evaluated for 4 patients by comparing dose-volume histograms (DVHs) and 2D gamma index analysis. Results: Compared with the original CBCT, the average MAE and RMSE between the sCT from the three models and the pCT decreased by at least 15.4 HU and 26.8 HU, while the mean PSNR and SSIM increased by up to 10.6 and 0.05, respectively. There were only slight differences in the DVHs of selected contours between the different plans. The passing rates of the 2D gamma index analysis under the 3 mm/3%, 3 mm/2%, 2 mm/3%, and 2 mm/2% criteria were all higher than 95%. Conclusions: All sCT images achieved better evaluation metrics than the original CBCT, with the CycleGAN model performing best among the three methods. The dosimetric agreement confirmed the HU accuracy and anatomical consistency of the sCT generated by deep learning methods.
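For reference, the four image-quality metrics reported in this abstract can be computed directly from HU arrays. The following is a minimal sketch (not the authors' pipeline), assuming same-shape 2-D slices in HU; the data range of 4096 HU used for PSNR and SSIM is an assumption, not a value from the paper.

```python
# Minimal sketch of MAE, RMSE, PSNR, and SSIM between a synthetic CT (sct)
# and the planning CT (pct), both as same-shape 2-D HU arrays.
import numpy as np
from skimage.metrics import structural_similarity

def ct_metrics(sct, pct, data_range=4096.0):
    """Return (MAE, RMSE, PSNR, SSIM); data_range is an assumed HU span."""
    a = np.asarray(sct, dtype=np.float64)
    b = np.asarray(pct, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    mae = np.mean(np.abs(a - b))
    rmse = np.sqrt(mse)
    psnr = 10.0 * np.log10(data_range ** 2 / mse)  # undefined if mse == 0
    ssim = structural_similarity(a, b, data_range=data_range)
    return mae, rmse, psnr, ssim
```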
Background: Accurate segmentation of tumor targets is critical for maximizing tumor control and minimizing normal tissue toxicity. We proposed a sequential and iterative U-Net (SI-Net) deep learning method to auto-segment the high-risk primary tumor clinical target volume (CTVp1) for treatment planning of nasopharyngeal carcinoma (NPC) radiotherapy. Methods: The SI-Net is a variant of the U-Net architecture. The input of the SI-Net includes one CT image, the CTVp1 contour on this image, and the next CT image. The output is the predicted CTVp1 contour on the next CT image. We designed the SI-Net so that the left side learns the volumetric features and the right side localizes the contour on the next image. Two prediction directions, one from inferior to superior (forward direction) and the other from superior to inferior (backward direction), were tested. The performance was compared between the SI-Net and the U-Net using the Dice similarity coefficient (DSC), Jaccard index (JI), average surface distance (ASD), and Hausdorff distance (HD) metrics. Results: The DSC and JI values from the forward direction SI-Net model were 5% and 6% higher than those from the U-Net model (0.84 ± 0.04 vs. 0.80 ± 0.05 and 0.74 ± 0.05 vs. 0.69 ± 0.05, p < 0.001). The smaller ASD and HD values also indicated a better performance (2.8 ± 1.0 vs. 3.3 ± 1.0 mm and 8.7 ± 2.5 vs. 9.7 ± 2.7 mm, p < 0.01) for the SI-Net model. For the backward direction SI-Net model, the DSC and JI values were still better than those from the U-Net model (p < 0.01), although there were no significant differences in ASD and HD. Conclusions: The SI-Net model preserved the continuity between adjacent images and thus improved the segmentation accuracy compared with the conventional U-Net model. This model has the potential to improve the efficiency and consistency of CTVp1 contouring for NPC patients.
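A hedged sketch of the slice-propagation inference described above: the network takes the current slice, its CTVp1 mask, and the next slice as a 3-channel input and predicts the mask on the next slice. Here `si_net` is a placeholder for a trained model returning per-pixel probabilities, and the 0.5 threshold and channel ordering are our assumptions, not the authors' released code.

```python
# Propagate the CTVp1 contour slice by slice in the 'forward' direction
# (inferior to superior), feeding each prediction back as the next input.
import numpy as np

def propagate_contours(si_net, volume, first_mask):
    """volume: (Z, H, W) CT stack ordered inferior-to-superior;
    first_mask: (H, W) binary CTVp1 mask on the starting slice."""
    masks = [np.asarray(first_mask, dtype=np.float32)]
    for z in range(volume.shape[0] - 1):
        x = np.stack([volume[z], masks[-1], volume[z + 1]])  # shape (3, H, W)
        prob = si_net(x[None])[0]                            # (H, W) probabilities
        masks.append((prob > 0.5).astype(np.float32))
    return np.stack(masks)                                   # (Z, H, W) masks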
Purpose: Radiation therapy is an essential treatment modality for cervical cancer, and accurate, efficient segmentation methods are needed to improve the workflow. In this study, a three-dimensional V-net model is proposed to automatically segment the clinical target volume (CTV) and organs at risk (OARs), and to provide prospective guidance for the low-dose area. Material and methods: A total of 130 CT datasets were included. Ninety cases were randomly selected as training data, 10 cases as validation data, and the remaining 30 cases as testing data. The V-net model was implemented with the TensorFlow package to segment the CTV and OARs, as well as the regions covered by the 5 Gy, 10 Gy, 15 Gy, and 20 Gy isodose lines. Auto-segmentation by V-net was compared with auto-segmentation by U-net. Four representative parameters were calculated to evaluate the accuracy of the delineation: the Dice similarity coefficient (DSC), Jaccard index (JI), average surface distance (ASD), and Hausdorff distance (HD). Results: For the CTV, V-net and U-net achieved average DSC values of 0.85 and 0.83, average JI values of 0.77 and 0.75, average ASD values of 2.58 and 2.26 mm, and average HD values of 11.2 and 10.08 mm, respectively. As for the OARs, the V-net model performed significantly better than the U-net model in the colon (p = 0.046), and its performance in the kidney, bladder, femoral head, and pelvic bones was comparable to that of the U-net model. For prediction of low-dose areas, the average DSC of the 5 Gy dose area in the test set was 0.88 for V-net and 0.83 for U-net. Conclusions: It is feasible to use the V-net model to automatically segment cervical cancer CTV and OARs to achieve a more efficient radiotherapy workflow. In the delineation of most target areas and OARs, V-net performed better than U-net. It also offers the advantage of prospectively predicting the low-dose area before radiation therapy (RT).
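The two overlap metrics used in this and the previous abstract (DSC and JI) reduce to simple set operations on binary masks. Below is a minimal NumPy illustration of ours, not the study's code; the convention of scoring two empty masks as a perfect match is an assumption.

```python
# Overlap metrics for same-shape binary masks `pred` and `gt`.
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A n B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0  # both empty -> perfect match

def jaccard(pred, gt):
    """Jaccard index: |A n B| / |A u B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0
```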
Purpose: Accurate segmentation of the liver and liver tumors is critical for radiotherapy. Liver tumor segmentation, however, remains a difficult and relevant problem in medical image processing because of factors such as the complex and variable location, size, and shape of liver tumors; the low contrast between tumors and normal tissues; and blurred or difficult-to-define lesion boundaries. In this paper, we propose a neural network (S-Net) that incorporates attention mechanisms for end-to-end segmentation of liver tumors from CT images. Methods: First, this study adopted a classical encoding-decoding structure to realize end-to-end segmentation. Next, we introduced an attention mechanism between the contraction path and the expansion path so that the network could encode a longer range of semantic information in the local features and find the correspondence between different channels. Then, we introduced long skip connections between the layers of the contraction path and the expansion path, so that the semantic information extracted in both paths could be fused. Finally, a morphological closing operation was applied to bridge narrow interruptions and long, thin gaps; this eliminated small cavities and produced a noise-reduction effect. Results: We used the MICCAI 2017 Liver Tumor Segmentation (LiTS) challenge dataset, the 3DIRCADb dataset, and a Hubei Cancer Hospital dataset with physicians' manual contours to test the network architecture. We calculated the Dice global (DG) score, Dice per case (DC) score, volumetric overlap error (VOE), average symmetric surface distance (ASSD), and root mean square error (RMSE) to evaluate the accuracy of the architecture for liver tumor segmentation. The segmentation DG for tumors was 0.7555, DC was 0.613, VOE was 0.413, ASSD was 1.186, and RMSE was 1.804. For small tumors, DG was 0.3246 and DC was 0.3082. For large tumors, DG was 0.7819 and DC was 0.7632. Conclusion: S-Net obtained more semantic information through the attention mechanism and long skip connections. Experimental results showed that this method effectively improved tumor recognition in CT images and could be applied to assist doctors in clinical treatment.
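The closing-based post-processing named in the methods can be reproduced with standard tools. A minimal sketch using SciPy follows; the 3x3 structuring element size is an assumed choice, not a value reported in the paper.

```python
# Morphological closing (dilation then erosion) on a binary tumor mask:
# bridges narrow interruptions and fills small cavities, reducing noise.
import numpy as np
from scipy import ndimage

def close_mask(mask, size=3):
    """Apply binary closing to a 2-D binary mask; `size` is an assumption."""
    structure = np.ones((size, size), dtype=bool)
    return ndimage.binary_closing(np.asarray(mask, dtype=bool), structure=structure)
```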