Abstract: Background: This study aims to construct and validate a model based on convolutional neural networks (CNNs) that can perform automatic segmentation of clinical target volumes (CTVs) of breast cancer for radiotherapy. Methods: In this work, computed tomography (CT) scans of 110 patients who underwent modified radical mastectomies were collected. The CTV contours were confirmed by two experienced oncologists. A novel CNN was constructed to automatically delineate the CTV. Quantitative evaluation metrics were cal…
“…Both the generator and discriminator of the cGAN observe the input CBCT image. In this study, we implement a fully convolutional neural network (CNN) architecture [24] as the generator. In this CNN architecture, a U-Net backbone is utilized; it consists of an encoding path and a decoding path.…”
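To make the quoted generator design concrete, here is a minimal sketch of an encoder-decoder (U-Net-style) fully convolutional generator with skip connections, assuming PyTorch; the depth, channel widths, and the name UNetGenerator are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a U-Net-style fully convolutional generator
# (illustrative depth and widths; not the authors' exact configuration).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNetGenerator(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=64):
        super().__init__()
        # Encoding path: progressively downsample and widen.
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        # Decoding path: upsample and fuse skip connections.
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, out_ch, 1)  # 1x1 conv to the sCT intensity map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

# Example: one 256x256 CBCT slice in, one synthetic-CT slice out.
sct = UNetGenerator()(torch.randn(1, 1, 256, 256))
print(sct.shape)  # torch.Size([1, 1, 256, 256])
```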
Purpose: To overcome the imaging artifacts and Hounsfield unit inaccuracy limitations of cone-beam computed tomography, a conditional generative adversarial network is proposed to synthesize high-quality computed tomography-like images from cone-beam computed tomography images. Methods: A total of 120 paired cone-beam computed tomography and computed tomography scans of patients with head and neck cancer who were treated between January 2019 and December 2020 were retrospectively collected; the scans of 90 patients were assembled into the training and validation datasets, and the scans of 30 patients were used as the testing dataset. The proposed method integrates a U-Net backbone architecture with residual blocks into a conditional generative adversarial network framework to learn a mapping from cone-beam computed tomography images to paired planning computed tomography images. The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio were used to assess the performance of this method in comparison with U-Net and CycleGAN. Results: The synthesized computed tomography images produced by the conditional generative adversarial network were visually similar to the planning computed tomography images. The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio calculated from the test images generated by the conditional generative adversarial network were all significantly different from those of CycleGAN and U-Net. The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio values between the synthesized computed tomography and the reference computed tomography were 16.75 ± 11.07 Hounsfield units, 58.15 ± 28.64 Hounsfield units, 0.92 ± 0.04, and 30.58 ± 3.86 dB for the conditional generative adversarial network; 20.66 ± 12.15 Hounsfield units, 66.53 ± 29.73 Hounsfield units, 0.90 ± 0.05, and 29.29 ± 3.49 dB for CycleGAN; and 16.82 ± 10.99 Hounsfield units, 58.68 ± 28.34 Hounsfield units, 0.92 ± 0.04, and 30.48 ± 3.83 dB for U-Net, respectively. Conclusions: The computed tomography synthesized from cone-beam computed tomography by the conditional generative adversarial network method has accurate computed tomography numbers while keeping the same anatomical structure as the cone-beam computed tomography. It can be used effectively for quantitative applications in radiotherapy.
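The four reported image-quality metrics are standard and straightforward to compute. Below is a minimal sketch using NumPy and scikit-image; the data_range value and the absence of a body mask are assumptions, since the paper's exact evaluation settings are not given here.

```python
# Minimal sketch of the four image-quality metrics (MAE, RMSE, SSIM, PSNR);
# the data_range and lack of masking are assumptions, not the paper's settings.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate_sct(sct, ref_ct, data_range=2000.0):
    """Compare a synthetic CT to the reference planning CT (both in HU)."""
    diff = sct.astype(np.float64) - ref_ct.astype(np.float64)
    mae = np.mean(np.abs(diff))              # mean absolute error, HU
    rmse = np.sqrt(np.mean(diff ** 2))       # root-mean-square error, HU
    psnr = 20 * np.log10(data_range / rmse)  # peak signal-to-noise ratio, dB
    ssim_val = ssim(ref_ct, sct, data_range=data_range)
    return mae, rmse, ssim_val, psnr

# Example on random stand-in images.
rng = np.random.default_rng(0)
ref = rng.uniform(-1000, 1000, size=(128, 128))
syn = ref + rng.normal(0, 20, size=ref.shape)
print(evaluate_sct(syn, ref))
```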
“…The HD95 measures the 95th percentile of the distribution of surface distances instead of the maximum. This metric is more robust to outlier segmented pixels than the HD and is thus often used to evaluate automatic algorithms [38,30].…”
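As a concrete illustration, the following sketch computes HD95 between two binary segmentation masks using SciPy distance transforms; the erosion-based surface extraction and the convention of taking the maximum of the two directed 95th percentiles are common choices, but toolkits differ in the details.

```python
# Minimal sketch of the 95th-percentile Hausdorff distance (HD95) between two
# binary masks; one common variant, not the only convention in use.
import numpy as np
from scipy import ndimage

def surface(mask):
    # Surface voxels: voxels of the mask that disappear under erosion.
    return mask & ~ndimage.binary_erosion(mask)

def hd95(mask_a, mask_b, spacing=(1.0, 1.0)):
    sa, sb = surface(mask_a), surface(mask_b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dt_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    dt_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    d_ab = dt_b[sa]  # distances from A's surface to B's surface
    d_ba = dt_a[sb]  # distances from B's surface to A's surface
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

# Example: two slightly shifted squares.
a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[13:43, 13:43] = True
print(hd95(a, b))  # small offset -> small HD95
```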
This paper presents an overview of the second edition of the HEad and neCK TumOR (HECKTOR) challenge, organized as a satellite event of the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. The challenge comprises three tasks related to the automatic analysis of PET/CT images for patients with head and neck cancer (H&N), focusing on the oropharynx region. Task 1 is the automatic segmentation of the H&N primary Gross Tumor Volume (GTVt) in FDG-PET/CT images. Task 2 is the automatic prediction of Progression Free Survival (PFS) from the same FDG-PET/CT. Finally, Task 3 is the same as Task 2 but with ground-truth GTVt annotations provided to the participants. The data were collected from six centers for a total of 325 images, split into 224 training and 101 testing cases. Interest in the challenge was highlighted by strong participation, with 103 registered teams and 448 result submissions. The best methods obtained a Dice Similarity Coefficient (DSC) of 0.7591 in the first task, and a Concordance index (C-index) of 0.7196 and 0.6978 in Tasks 2 and 3, respectively. In all tasks, simplicity of the approach was found to be key to ensuring generalization performance. The comparison of the PFS prediction performance in Tasks 2 and 3 suggests that providing the GTVt contour was not crucial to achieving the best results, which indicates that fully automatic methods can be used. This potentially obviates the need for GTVt contouring, opening avenues for reproducible and large-scale radiomics studies including thousands of potential subjects.
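For reference, the two headline metrics of the challenge, the Dice Similarity Coefficient and Harrell's concordance index, can be sketched in NumPy as follows; the tie and censoring handling here is simplified relative to a full survival-analysis implementation, and the function names are illustrative.

```python
# Minimal sketch of the Dice Similarity Coefficient (segmentation) and
# Harrell's concordance index (PFS prediction); simplified tie/censoring logic.
import numpy as np

def dice(pred, truth, eps=1e-8):
    """DSC between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.sum(pred & truth) / (np.sum(pred) + np.sum(truth) + eps)

def c_index(times, events, risks):
    """Fraction of comparable patient pairs ordered correctly by risk.
    A pair (i, j) is comparable when the earlier time is an observed event."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:  # i progressed first, observed
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: higher predicted risk pairs with shorter progression-free survival.
t = np.array([5.0, 8.0, 12.0, 20.0])
e = np.array([1, 1, 0, 1])          # 1 = progression observed, 0 = censored
r = np.array([0.9, 0.7, 0.4, 0.1])  # predicted risk scores
print(c_index(t, e, r))  # 1.0: perfectly concordant
```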
“…In our previous work, we have demonstrated that a CNN architecture could facilitate the delineation of the CTV for radical mastectomy radiotherapy [30]. In this work, we focused on both the CTV and OARs for breast-conserving radiotherapy. We also made some modifications to the CNN architecture.…”