Purpose: To propose a method for synthesizing pseudo-CT (CTCycleGAN) images based on an improved 3D cycle-consistent generative adversarial network (CycleGAN), addressing the limitation that cone-beam CT (CBCT) cannot be directly used for the correction of radiotherapy plans. Methods: An improved U-Net with residual connections and attention gates served as the generator, and a fully convolutional neural network (FCN) served as the discriminator. A 3D gradient loss function was added to improve the imaging quality of the pseudo-CT images. Fivefold cross-validation was performed to validate the model. Each generated pseudo-CT was compared against the real CT image of the same patient (ground-truth CT, CTgt) using the mean absolute error (MAE) and the structural similarity index (SSIM). The Dice similarity coefficient (DSC) was used to evaluate the segmentation results of pseudo-CT versus real CT. The 3D CycleGAN was compared with a 2D CycleGAN using normalized mutual information (NMI) and peak signal-to-noise ratio (PSNR) between the pseudo-CT and CTgt images. The dosimetric accuracy of the pseudo-CT images was evaluated by gamma analysis. Results: The MAE values between CTCycleGAN and the real CT in fivefold cross-validation were 52.03 ± 4.26 HU, 50.69 ± 5.25 HU, 52.48 ± 4.42 HU, 51.27 ± 4.56 HU, and 51.65 ± 3.97 HU, and the SSIM values were 0.87 ± 0.02, 0.86 ± 0.03, 0.85 ± 0.02, 0.85 ± 0.03, and 0.87 ± 0.03, respectively. The DSC values for the segmentation of the bladder, cervix, rectum, and bone between CTCycleGAN and real CT images were 91.58 ± 0.45, 88.14 ± 1.26, 87.23 ± 2.01, and 92.59 ± 0.33, respectively. Compared with the 2D CycleGAN, the pseudo-CT images from the 3D CycleGAN were closer to the real images, with an NMI of 0.90 ± 0.01 and a PSNR of 30.70 ± 0.78. The gamma pass rate of the dose distribution between CTCycleGAN and CTgt was 97.0% (2%/2 mm). Conclusion: Pseudo-CT images obtained with the improved 3D CycleGAN have more accurate electron density and anatomical structure.
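The abstract does not give the exact form of the 3D gradient loss; a minimal sketch of one common formulation (an L1 penalty on finite-difference gradients along the three spatial axes) is shown below, assuming 5D PyTorch tensors. Names and details are illustrative, not the paper's implementation.

```python
# Hypothetical sketch of a 3D gradient-difference loss; the paper's exact
# formulation is not specified in the abstract.
import torch

def gradient_loss_3d(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L1 difference between voxel-wise gradients of two volumes shaped
    (batch, channel, depth, height, width)."""
    loss = 0.0
    for dim in (2, 3, 4):  # spatial axes: depth, height, width
        grad_pred = torch.diff(pred, dim=dim)      # finite differences
        grad_target = torch.diff(target, dim=dim)
        loss = loss + torch.mean(torch.abs(grad_pred - grad_target))
    return loss / 3.0
```

Such a term is typically added to the CycleGAN adversarial and cycle-consistency losses with a weighting coefficient to encourage sharper anatomical boundaries.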
Objective: To develop a method for generating high-quality synthetic CT (sCT) images from low-dose cone-beam CT (CBCT) images using an attention-guided generative adversarial network (AGGAN), and to apply these images to dose calculation in radiotherapy. Methods: CBCT/planning CT images of 170 patients undergoing thoracic radiotherapy were used for training and testing. The CBCT images were acquired under a fast protocol with 50% fewer projection frames than the standard chest M20 protocol. Training with aligned paired images was performed using conditional adversarial networks (pix2pix), while training with unpaired images was carried out with cycle-consistent adversarial networks (cycleGAN) and the AGGAN, through which sCT images were generated. The image quality and Hounsfield unit (HU) values of the sCT images generated by the three networks were compared. The treatment plan was designed on the CT and copied to the sCT images to calculate the dose distribution. Results: The image quality of the sCT images from all three methods was significantly improved compared with the original CBCT images. The AGGAN achieved the best image quality in the test patients, with the smallest mean absolute error (MAE, 43.5 ± 6.69) and the largest structural similarity (SSIM, 93.7 ± 3.88) and peak signal-to-noise ratio (PSNR, 29.5 ± 2.36). The sCT images generated by all three methods showed superior dose calculation accuracy, with higher gamma passing rates than the original CBCT images. The AGGAN offered the highest gamma passing rate (91.4 ± 3.26) under the strictest criterion of 1 mm/1%. In the phantom study, the sCT images generated by the AGGAN demonstrated the best image quality and the highest dose calculation accuracy. Conclusions: High-quality sCT images were generated from low-dose thoracic CBCT images by the proposed AGGAN using unpaired CBCT and CT images. Dose distributions could be calculated accurately from sCT images in radiotherapy.
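The three image-quality metrics used to rank the networks (MAE, SSIM, PSNR) are standard and can be reproduced with reference implementations; the sketch below assumes NumPy arrays in HU and a fixed HU data range, both of which are assumptions rather than details taken from the paper.

```python
# A sketch of the reported image-quality metrics between an sCT volume and
# the planning CT, using scikit-image's reference implementations.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def image_quality_metrics(sct: np.ndarray, ct: np.ndarray,
                          hu_range=(-1000.0, 2000.0)):
    data_range = hu_range[1] - hu_range[0]  # assumed clinical HU window
    mae = float(np.mean(np.abs(sct - ct)))
    ssim = structural_similarity(ct, sct, data_range=data_range)
    psnr = peak_signal_noise_ratio(ct, sct, data_range=data_range)
    return mae, ssim, psnr
```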
Objective: To generate virtual non-contrast (VNC) computed tomography (CT) images from intravenously enhanced CT using convolutional neural networks (CNNs), and to compare the calculated dose among enhanced CT, VNC CT, and real non-contrast scans. Method: Fifty patients who underwent non-contrast and enhanced CT scanning before and after intravenous contrast agent injection were selected, and the two sets of CT images were registered. Forty and ten cases were used as the training and test datasets, respectively. A U-Net architecture was applied to learn the mapping between enhanced and non-contrast CT, and VNC images were generated from the test set by the trained U-Net. The CT values of the non-contrast, enhanced, and VNC CT images were compared. Radiotherapy treatment plans for esophageal cancer were designed, dose calculations were performed, and the dose distributions on the three image sets were compared. Results: The mean absolute error of CT values between enhanced and non-contrast CT reached 32.3 ± 2.6 HU, while that between VNC and non-contrast CT was 6.7 ± 1.3 HU. The average CT values of the great vessels, heart, lungs, liver, and spinal cord on enhanced CT were all significantly higher than those on non-contrast CT (p < 0.05), with differences of 97, 83, 42, 40, and 10 HU, respectively. The average CT values of these organs on VNC CT showed no significant differences from those on non-contrast CT. The relative dose differences between enhanced and non-contrast CT were −1.2, −1.3, −2.1, and −1.5% for the mean doses of the planning target volume, heart, great vessels, and lungs, respectively. The mean dose calculated on VNC CT showed no significant difference from that on non-contrast CT. The average γ passing rate (2%, 2 mm) of the VNC CT images was significantly higher than that of the enhanced CT images (0.996 vs. 0.973, p < 0.05). Conclusion: Designing a treatment plan on enhanced CT enlarges the dose calculation uncertainty in radiotherapy. This paper proposed generating VNC CT images from enhanced CT images with a U-Net architecture; the dose calculated on VNC CT images was consistent with that obtained on real non-contrast CT.
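The abstract names only "U-Net architecture"; a minimal sketch of such an encoder-decoder with skip connections, trained to regress non-contrast HU values from enhanced CT slices, is shown below. The depth, channel widths, and output head are assumptions, not the paper's exact configuration.

```python
# A minimal 2D U-Net sketch for mapping enhanced CT slices to VNC CT.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)  # regress one HU value per pixel

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```

With registered enhanced/non-contrast pairs, training reduces to a supervised voxel-wise regression, e.g. minimizing an L1 loss between the network output and the registered non-contrast slice.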
Purpose: Recent studies have illustrated that the peritumoral regions of medical images have value for clinical diagnosis. However, existing approaches using peritumoral regions mainly focus on the diagnostic capability of a single region and ignore the advantages of effectively fusing the intratumoral and peritumoral regions. In addition, these methods require accurate segmentation masks in the testing stage, which are tedious and inconvenient in clinical applications. To address these issues, we construct a deep convolutional neural network that adaptively fuses the information of multiple tumoral regions (FMRNet) for breast tumor classification using ultrasound (US) images, without segmentation masks in the testing stage. Methods: To sufficiently exploit the potential relationships among regions, we design a fused network and two independent modules to extract and fuse the features of multiple regions simultaneously. First, we introduce two enhanced combined-tumoral (EC) region modules, which gradually enhance the combined-tumoral features. We then design a three-branch module for extracting and fusing the features of the intratumoral, peritumoral, and combined-tumoral regions. In particular, we design a novel fusion module that introduces a channel attention module to adaptively fuse the features of the three regions. The model is evaluated on two public breast tumor ultrasound datasets, UDIAT and BUSI. Two independent groups of experiments are performed on the two datasets using a fivefold stratified cross-validation strategy. Finally, we conduct ablation experiments across the two datasets, with BUSI used as the training set and UDIAT as the testing set. Results: We conduct detailed ablation experiments on the two proposed modules and comparative experiments against existing representative methods. The experimental results show that the proposed method yields state-of-the-art performance on both datasets. On the UDIAT dataset, FMRNet achieves an accuracy of 0.945 and a specificity of 0.945. Moreover, the precision (PRE = 0.909) improves by 21.6% on the BUSI dataset over the best-performing existing method. Conclusion: The proposed FMRNet performs well in breast tumor classification with US images and demonstrates its capability to exploit and fuse the information of multiple tumoral regions. Furthermore, FMRNet has potential value for classifying other types of cancers using multiple tumoral regions in other kinds of medical images.
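A hedged sketch of what a channel-attention fusion of three region branches could look like is given below, in the spirit of the fusion module described (squeeze-and-excitation-style gating over concatenated features). Layer sizes and the projection step are illustrative assumptions, not FMRNet's published design.

```python
# Illustrative channel-attention fusion of intratumoral, peritumoral, and
# combined-tumoral feature maps; dimensions are assumptions.
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        fused = 3 * channels  # three region branches concatenated
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global average pooling per channel
            nn.Conv2d(fused, fused // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(fused // reduction, fused, 1), nn.Sigmoid(),
        )
        self.project = nn.Conv2d(fused, channels, 1)

    def forward(self, intra, peri, combined):
        x = torch.cat([intra, peri, combined], dim=1)
        return self.project(x * self.attn(x))  # reweight channels, then fuse
```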
In modern radiotherapy, reducing patients' daily setup errors is important for achieving accuracy. In this study, we propose a new approach to developing an assist system for radiotherapy patient setup using augmented reality (AR). We aimed to improve the accuracy of patient setup during radiotherapy and to evaluate the setup error for patients diagnosed with head and neck cancer and for patients diagnosed with chest and abdomen cancer. We acquired each patient's simulation CT data for three-dimensional (3D) reconstruction of the external surface and organs. The AR tracking software detected a calibration module and loaded the 3D virtual model. The calibration module was aligned with the Linac isocenter using the room lasers, and the virtual cube was then aligned with the calibration module to complete the calibration between the 3D virtual model and the Linac isocenter. The patient setup was then carried out, with point-cloud registration performed between the patient and the 3D virtual model so that the patient's posture was consistent with the virtual model. Twenty patients with head and neck cancer and 20 patients with chest and abdomen cancer set up in the supine position were analyzed for the residual errors of conventional laser and AR-guided setup. For patients with head and neck cancer, the difference between the two positioning methods was not statistically significant (P > 0.05). For patients with chest and abdomen cancer, the residual errors of the two positioning methods in the superior-inferior and anterior-posterior directions were statistically significant (t = −5.80, −4.98, P < 0.05), as were the residual errors in the three rotation directions (t = −2.29 to −3.22, P < 0.05). These results show that AR technology can effectively assist patient setup in radiotherapy, significantly reduce setup errors in patients with chest and abdomen cancer, and improve the accuracy of radiotherapy.
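The core numerical step in the described workflow is rigid point-cloud registration between the patient surface and the 3D virtual model. Below is a minimal sketch of the closed-form rigid alignment (Kabsch/SVD) that such a pipeline typically builds on, assuming the two point sets are already matched; the correspondence search (e.g., the ICP loop) used in practice is omitted.

```python
# Rigid alignment of matched 3D point sets via the Kabsch algorithm.
import numpy as np

def rigid_align(source: np.ndarray, target: np.ndarray):
    """Return rotation R and translation t minimizing ||R @ s_i + t - t_i||,
    for source/target arrays of shape (N, 3) with matched rows."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

The recovered rotation and translation directly yield the couch shifts and rotations needed to bring the patient into agreement with the planned position.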
Magnetic resonance imaging (MRI) plays an important role in clinical diagnosis but is susceptible to metal artifacts. The generative adversarial network GatedConv, with gated convolution (GC) and contextual attention (CA), was used to inpaint metal-artifact regions in MRI images. Methods: MRI images containing or near the teeth of 70 patients were collected; the scanning sequence was a T1-weighted high-resolution isotropic volume examination sequence. A total of 10,000 slices were obtained after data augmentation, of which 8,000 were used for training. The MRI images were normalized to [−1, 1]. Based on randomly generated masks, U-Net, pix2pix, PConv (partial convolution), and GatedConv were used to inpaint the artifact regions of the MRI images. The mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) over the masked region were used to compare these methods. The inpainting effect on the test dataset using dental masks was also evaluated, and the artifact areas of clinical MRI images were inpainted based on masks sketched by physicians. Finally, earring artifacts and artifacts caused by abnormal signal foci were inpainted to verify the generalization of the models. Results: GatedConv could directly and effectively inpaint the incomplete MRI images generated by masks in the image domain. For U-Net, pix2pix, PConv, and GatedConv, the masked MAEs were 0.1638, 0.1812, 0.1688, and 0.1596, respectively, and the masked PSNRs were 18.2136, 17.5692, 18.2258, and 18.3035 dB, respectively. With dental masks, the results of U-Net, pix2pix, and PConv differed more from the real images in alveolar shape and surrounding tissue than those of GatedConv. GatedConv inpainted the metal-artifact regions in clinical MRI images more effectively than the other models, although increasing the mask area reduced the inpainting quality. MRI images inpainted by GatedConv coincided with metal-artifact-reduction CT images in alveolar and tissue structure, and GatedConv successfully inpainted artifacts caused by abnormal signal foci, whereas the other models failed. An ablation study demonstrated that GC and CA increased the reliability of GatedConv's inpainting performance. Conclusion: MRI images are affected by metal, and signal-void areas appear near metal objects. GatedConv can directly and effectively inpaint MRI metal-artifact regions in the image domain and improve image quality. Medical image inpainting with GatedConv has potential value for tasks such as positron emission tomography (PET) attenuation correction in PET/MRI and adaptive radiotherapy with synthetic CT based on MRI.
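The gated convolution at the heart of GatedConv replaces the hard 0/1 validity masks of partial convolution with a learned soft gate. A minimal sketch of the standard layer is shown below; the activation choices and layer configuration used in the study may differ.

```python
# Minimal gated convolution layer, as popularized by free-form inpainting.
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)

    def forward(self, x):
        # The sigmoid gate learns where valid (non-masked) content is,
        # letting the network down-weight features inside artifact regions.
        return torch.tanh(self.feature(x)) * torch.sigmoid(self.gate(x))
```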