Cone beam computed tomography (CBCT) images can be used for dose calculation in adaptive radiation therapy (ART). The main challenges are large artifacts and inaccurate Hounsfield unit (HU) values. Currently, deformed planning CT images are often used for this purpose, although their anatomical accuracy can be a concern. Ideally, we would like to convert CBCT images to CT images in which artifacts are removed or greatly reduced and HU values are corrected, while preserving anatomical accuracy. Recently, deep learning has achieved great success in image-to-image translation tasks, but it is very difficult to acquire paired CT and CBCT images with exactly matching anatomy for supervised training. To overcome this limitation, we developed and tested a cycle generative adversarial network (CycleGAN), an unsupervised learning method that does not require paired training datasets, to synthesize CT images from CBCT images. The synthesized CT (sCT) images were compared with the deformed planning CT (dpCT), showing visual and quantitative similarity, with artifacts removed and HU value errors reduced from 71.78 HU to 27.98 HU. Dose calculation accuracy using sCT images improved over the original CBCT images, with the average gamma index passing rate increased from 95.4% to 97.4% for the 1 mm/1% criterion. A deformable phantom study was conducted and demonstrated better anatomical accuracy for sCT than for dpCT.
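The cycle-consistency constraint at the heart of CycleGAN can be illustrated with a minimal numpy sketch. The toy "generators" and the weighting `lam` below are hypothetical placeholders standing in for the trained networks, not the implementation from this work:

```python
import numpy as np

def cycle_consistency_loss(x_cbct, y_ct, G, F, lam=10.0):
    """L1 cycle-consistency term of a CycleGAN: mapping CBCT -> sCT -> CBCT
    (and CT -> synthetic CBCT -> CT) should approximately recover the input."""
    forward = np.mean(np.abs(F(G(x_cbct)) - x_cbct))   # ||F(G(x)) - x||_1
    backward = np.mean(np.abs(G(F(y_ct)) - y_ct))      # ||G(F(y)) - y||_1
    return lam * (forward + backward)

# Toy generators: when the two mappings are exact inverses, the cycle loss is zero.
G = lambda x: x + 100.0   # hypothetical CBCT -> CT intensity shift
F = lambda y: y - 100.0   # hypothetical CT -> CBCT intensity shift
loss = cycle_consistency_loss(np.zeros((8, 8)), np.full((8, 8), 40.0), G, F)
```

This term is what removes the need for paired training data: it penalizes any translation that cannot be undone, which constrains the generator to preserve anatomy even though no voxelwise CBCT/CT correspondence is available.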
The treatment planning process for patients with head and neck (H&N) cancer is regarded as one of the most complicated due to the large target volume, multiple prescription dose levels, and many radiation-sensitive critical structures near the target. Treatment planning for this site requires a high level of human expertise and a tremendous amount of effort to produce personalized, high-quality plans, taking as long as a week, which reduces the chances of tumor control and patient survival. To solve this problem, we propose a deep learning-based dose prediction model, the Hierarchically Densely Connected U-net, built on two highly popular network architectures: U-net and DenseNet. We find that this new architecture accurately and efficiently predicts the dose distribution, outperforming the two baseline models, the Standard U-net and DenseNet, in homogeneity, dose conformity, and dose coverage on the test data. Averaging across all organs at risk, our proposed model predicts the organ-at-risk max dose within 6.3% and the mean dose within 5.1% of the prescription dose on the test data. The other models, the Standard U-net and DenseNet, performed worse, with averaged organ-at-risk max dose prediction errors of 8.2% and 9.3%, respectively, and averaged mean dose prediction errors of 6.4% and 6.8%, respectively. In addition, our proposed model used 12 times fewer trainable parameters than the Standard U-net, and predicted the patient dose 4 times faster than DenseNet.
In the DenseNet architecture, dense connections between convolutional layers encourage feature reuse, reduce the vanishing gradient issue, and decrease the number of trainable parameters needed. While the term "densely connected" was historically used to describe fully connected neural network layers, the publication by Huang et al. adopted this terminology to describe how its convolutional layers were connected.
Although DenseNet requires more memory to use, the authors showed that it was capable of achieving better performance with far fewer parameters in the neural network. For example, their DenseNet, with 0.8 million parameters, achieved accuracy comparable to a ResNet with 10 million parameters. This indicates that DenseNet is far more efficient in feature computation than existing network architectures. For its contribution to the AI community, the DenseNet publication received the CVPR 2017 Best Paper Award. However, while DenseNet is efficient in parameter usage, it actually consumes considerably more GPU RAM, rendering a 3D U-net with fully densely connected convolutional connections infeasible for today's GPU technologies.
Motivated by a 3D densely connected U-net, but requiring less memory usage, we developed a neural network architecture that combines the essence of these two influential neural network architectures while maintaining a respectable RAM footprint, which we call the Hierarchically Densely Connected U-net (HD U-net). The term "hierarchically" is used here to describe the different levels of resolution in the U-net.
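The dense connectivity pattern described above can be sketched in a few lines of numpy. The `conv_relu` placeholder below stands in for a learned 3x3 convolution plus ReLU, so that only the concatenation pattern, which is what "densely connected" refers to, is shown (names and layer counts are illustrative):

```python
import numpy as np

def conv_relu(x, out_ch):
    # Stand-in for a learned convolution + ReLU: a channel-averaging
    # placeholder, so the connectivity (not the filters) is the focus.
    return np.maximum(np.repeat(x.mean(axis=0, keepdims=True), out_ch, axis=0), 0)

def dense_block(x, num_layers=3, growth_rate=16):
    """Densely connected block (channels-first [C, H, W]): each layer's
    input is the concatenation of the block input and ALL previous
    layer outputs, which encourages feature reuse."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)   # every earlier feature map
        features.append(conv_relu(inp, growth_rate))
    return np.concatenate(features, axis=0)

out = dense_block(np.zeros((8, 32, 32), dtype=np.float32))
# output channels = 8 (input) + 3 layers x 16 growth rate = 56
```

The memory cost mentioned in the text comes from exactly this pattern: every intermediate feature map must be kept alive for concatenation, so activations accumulate with depth even though the parameter count stays small.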
With the advancement of treatment modalities in radiation therapy for cancer patients, outcomes have improved, but at the cost of increased treatment plan complexity and planning time. The accurate prediction of dose distributions would alleviate this issue by guiding clinical plan optimization to save time and maintain high quality plans. We have modified a convolutional deep network model, U-net (originally designed for segmentation purposes), for predicting dose from patient image contours of the planning target volume (PTV) and organs at risk (OAR). We show that, as an example, we are able to accurately predict the dose of intensity-modulated radiation therapy (IMRT) for prostate cancer patients, where the average Dice similarity coefficient is 0.91 when comparing the predicted vs. true isodose volumes between 0% and 100% of the prescription dose. The average value of the absolute differences in [max, mean] dose is found to be under 5.1% of the prescription dose, specifically [1.80%, 1.03%] (PTV), [1.94%, 4.22%] (bladder), [1.80%, 0.48%] (body), [3.87%, 1.79%] (left femoral head), [5.07%, 2.55%] (right femoral head), and [1.26%, 1.62%] (rectum) of the prescription dose. We thus managed to map a desired radiation dose distribution from a patient’s PTV and OAR contours. As an additional advantage, relatively little data was used in the techniques and models described in this paper.
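The isodose-volume Dice score used for evaluation can be written as a short function; the function name and arguments here are illustrative, not the authors' code:

```python
import numpy as np

def isodose_dice(pred_dose, true_dose, level, rx_dose):
    """Dice similarity coefficient between predicted and true isodose
    volumes: the sets of voxels receiving at least `level` (a fraction,
    e.g. 0.5 for the 50% isodose line) of the prescription dose."""
    p = pred_dose >= level * rx_dose
    t = true_dose >= level * rx_dose
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0   # both isodose volumes empty: perfect (trivial) agreement
    return 2.0 * np.logical_and(p, t).sum() / denom
```

Sweeping `level` from 0 to 1 and averaging the resulting Dice values gives a single summary of spatial dose agreement of the kind reported above.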
Accurate segmentation of the prostate and surrounding organs at risk is important for prostate cancer radiotherapy treatment planning. We present a fully automated workflow for male pelvic CT image segmentation using deep learning. The architecture consists of a 2D localization network followed by a 3D segmentation network for volumetric segmentation of the prostate, bladder, rectum, and femoral heads. We used a multi-channel 2D U-Net followed by a 3D U-Net with the encoding arm modified with aggregated residual networks, known as ResNeXt. The models were trained and tested on a pelvic CT image dataset comprising 136 patients. Test results show that 3D U-Net based segmentation achieves mean (±SD) Dice coefficient values of 90 (±2.0)%, 96 (±3.0)%, 95 (±1.3)%, 95 (±1.5)%, and 84 (±3.7)% for the prostate, left femoral head, right femoral head, bladder, and rectum, respectively, using the proposed fully automated segmentation method.
Purpose: The use of neural networks to directly predict three-dimensional dose distributions for automatic planning is becoming popular. However, the existing methods use only patient anatomy as input and assume consistent beam configuration for all patients in the training database. The purpose of this work was to develop a more general model that considers variable beam configurations in addition to patient anatomy to achieve more comprehensive automatic planning with a potentially easier clinical implementation, without the need to train specific models for different beam settings. Methods: The proposed anatomy and beam (AB) model is based on our newly developed deep learning architecture, the hierarchically densely connected U-Net (HD U-Net), which combines U-Net and DenseNet. The AB model contains 10 input channels: one for the beam setup and the other 9 for anatomical information (PTV and organs). The beam setup information is represented by a 3D matrix of the non-modulated beam's eye view ray-tracing dose distribution. We used a set of images from 129 patients with lung cancer treated with IMRT with heterogeneous beam configurations (4–9 beams of various orientations) for training/validation (100 patients) and testing (29 patients). Mean squared error was used as the loss function. We evaluated the model's accuracy by comparing the mean dose, maximum dose, and other relevant dose–volume metrics for the predicted dose distribution against those of the clinically delivered dose distribution. Dice similarity coefficients were computed to assess the spatial correspondence of the isodose volumes between the predicted and clinically delivered doses. The model was also compared with our previous work, the anatomy only (AO) model, which does not consider beam setup information and uses only 9 channels for anatomical information. Results: The AB model outperformed the AO model, especially in the low and medium dose regions.
In terms of dose–volume metrics, AB outperformed AO by about 1–2%. The largest improvement was found to be about 5% in the lung volume receiving a dose of 5 Gy or more (V5). The improvement for spinal cord maximum dose was also notable: 3.6% for cross-validation and 2.6% for testing. The AB model achieved Dice scores for isodose volumes as much as 10% higher than the AO model in low and medium dose regions and about 2–5% higher in high dose regions. Conclusions: The AO model, which does not use beam configuration as input, can still predict dose distributions with reasonable accuracy in high dose regions but introduces large errors in low and medium dose regions for IMRT cases with variable beam numbers and orientations. The proposed AB model outperforms the AO model substantially in low and medium dose regions, and slightly in high dose regions, by considering beam setup information through a cumulative non-modulated beam's eye view ray-tracing dose distribution. This new model represents a major step forward towards predicting 3D dose distributions in real clinical practice, where beam configurations vary from patient to patient.
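As a rough sketch of how such a multi-channel input might be assembled (the channel ordering, shapes, and helper name below are assumptions for illustration, not the authors' code):

```python
import numpy as np

def build_ab_input(beam_dose, structure_masks):
    """Assemble a 10-channel AB-model input: channel 0 holds the cumulative
    non-modulated beam's-eye-view ray-tracing dose, and channels 1-9 hold
    binary masks for the PTV and organs at risk."""
    if len(structure_masks) != 9:
        raise ValueError("expected 9 anatomy channels (PTV + OARs)")
    channels = [np.asarray(beam_dose, dtype=np.float32)]
    channels += [np.asarray(m, dtype=np.float32) for m in structure_masks]
    return np.stack(channels, axis=0)   # -> [10, D, H, W]

x = build_ab_input(np.zeros((16, 64, 64)), [np.zeros((16, 64, 64))] * 9)
```

Encoding the beam setup as a dose-like 3D volume, rather than as a list of angles, is what lets a convolutional network consume arbitrary beam numbers and orientations through a fixed-size input.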
Purpose: This study assessed the dosimetric accuracy of synthetic CT images generated from magnetic resonance imaging (MRI) data for focal brain radiation therapy, using a deep learning approach. Material and Methods: We conducted a study in 77 patients with brain tumors who had undergone both MRI and computed tomography (CT) imaging as part of their simulation for external beam treatment planning. We designed a generative adversarial network (GAN) to generate synthetic CT images from MRI images. We used mutual information (MI) as the loss function in the generator to overcome the misalignment between MRI and CT images (unregistered data). The model was trained using all MRI slices with corresponding CT slices from each training subject's MRI/CT pair. Results: The proposed GAN method produced an average mean absolute error (MAE) of 47.2 ± 11.0 HU over 5-fold cross validation. The overall mean Dice similarity coefficient between CT and synthetic CT images was 80% ± 6% in bone for all test data. Though training a GAN model may take several hours, the model only needs to be trained once. Generating a complete synthetic CT volume for each new patient MRI volume using a trained GAN model took only one second. Conclusions: The GAN model we developed produced highly accurate synthetic CT images from conventional, single-sequence MRI images in seconds. Our proposed method has strong potential to perform well in a clinical workflow for MRI-only brain treatment planning.
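A common way to estimate mutual information between two images is via their joint intensity histogram. The following numpy sketch (not the authors' implementation) shows the underlying quantity that an MI-based loss builds on:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based MI estimate between two intensity volumes. High MI
    means the intensities co-vary consistently, which rewards correct
    tissue correspondence without requiring voxelwise registration."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution p(a, b)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(a)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(b)
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Note that a hard-binned histogram is not differentiable, so a network loss would in practice use a smooth approximation (e.g. Parzen-window histograms); this sketch only shows the basic computation.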
This work presents the first IMPT approach integrating noncoplanar beam orientation optimization (BOO) and scanning-spot optimization in a single mathematical framework. The method is computationally efficient and dosimetrically superior, and it produces delivery-friendly IMPT plans.
BACKGROUND CONTEXT: Current literature suggests that degenerated or damaged vertebral endplates are a significant cause of chronic low back pain (LBP) that is not adequately addressed by standard care. Prior 2-year data from the treatment arm of a sham-controlled randomized controlled trial (RCT) showed maintenance of clinical improvements at 2 years following radiofrequency (RF) ablation of the basivertebral nerve (BVN).