Purpose: The incorporation of cone-beam computed tomography (CBCT) has enabled enhanced image-guided radiation therapy. While CBCT allows for daily 3D imaging, its images suffer from severe artifacts that limit its clinical potential. In this work, a deep learning-based method for generating high-quality corrected CBCT (CCBCT) images is proposed.

Methods: The proposed method integrates a residual-block concept into a cycle-consistent generative adversarial network (cycle-GAN) framework, called res-cycle GAN, to learn a mapping between CBCT images and paired planning CT images. Compared with a GAN, a cycle-GAN includes an inverse transformation from CT to CBCT images, which constrains the model by forcing calculation of both a CCBCT and a synthetic CBCT. A fully convolutional neural network with residual blocks is used in the generator to enable end-to-end CBCT-to-CT transformations. The proposed algorithm was evaluated using 24 sets of patient data in the brain and 20 sets in the pelvis. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and spatial non-uniformity (SNU) were used to quantify the correction accuracy of the proposed algorithm, and the proposed method was compared to both a conventional scatter correction and another machine learning-based CBCT correction method.

Results: Overall, the MAE, PSNR, NCC, and SNU were 13.0 HU, 37.5 dB, 0.99, and 0.05 in the brain, and 16.1 HU, 30.7 dB, 0.98, and 0.09 in the pelvis, corresponding to improvements of 45%, 16%, 1%, and 93% in the brain, and 71%, 38%, 2%, and 65% in the pelvis, over the original CBCT images. The proposed method showed superior image quality compared to the scatter correction method, reducing noise and artifact severity, and produced images with less noise and fewer artifacts than the comparison machine learning-based method.

Conclusions: The authors have developed a novel deep learning-based method to generate high-quality corrected CBCT images. The proposed method increases onboard CBCT image quality, making it comparable to that of the planning CT. With further evaluation and clinical implementation, this method could enable quantitative adaptive radiation therapy.
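As a concrete illustration of the two ideas above (residual-block generators and the cycle-consistency constraint), here is a minimal PyTorch sketch. The layer widths, block count, and patch size are illustrative assumptions, not the authors' res-cycle GAN architecture:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(ch),
        )

    def forward(self, x):
        # Identity shortcut: the block only has to learn the residual.
        return x + self.body(x)

class Generator(nn.Module):
    """End-to-end volumetric mapping, e.g. CBCT patch -> CT-like patch."""
    def __init__(self, n_blocks: int = 4, ch: int = 64):
        super().__init__()
        self.head = nn.Conv3d(1, ch, kernel_size=3, padding=1)
        self.res = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv3d(ch, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.tail(self.res(self.head(x)))

# Cycle consistency: CBCT -> CCBCT -> synthetic CBCT should recover the input.
G = Generator()            # CBCT -> corrected CBCT (CT-like)
F = Generator()            # CT   -> synthetic CBCT
l1 = nn.L1Loss()
cbct = torch.randn(1, 1, 32, 32, 32)   # toy patch
cycle_loss = l1(F(G(cbct)), cbct)      # added to the usual adversarial losses
```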
Purpose: Automated synthetic computed tomography (sCT) generation based on magnetic resonance imaging (MRI) would allow for MRI-only treatment planning in radiation therapy, eliminating the need for CT simulation and simplifying the patient treatment workflow. In this work, the authors propose a novel method for generating sCT based on dense cycle-consistent generative adversarial networks (cycle GAN), a deep learning-based model that trains two transformation mappings (MRI to CT and CT to MRI) simultaneously.

Methods and materials: The cycle GAN-based model was developed to generate sCT images in a patch-based framework. A cycle GAN was applied to this problem because it includes an inverse transformation from CT to MRI, which helps constrain the model to learn a one-to-one mapping. Dense block-based networks were used to construct the generator of the cycle GAN. The network weights and variables were optimized via a gradient difference (GD) loss and a novel distance loss metric between the sCT and the original CT.

Results: Leave-one-out cross-validation was performed to validate the proposed model. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) indices were used to quantify the differences between the sCT and original planning CT images. For the proposed method, the mean MAE between sCT and CT was 55.7 Hounsfield units (HU) for 24 brain cancer patients and 50.8 HU for 20 prostate cancer patients. The mean PSNR and NCC were 26.6 dB and 0.963 in the brain cases, and 24.5 dB and 0.929 in the pelvis.

Conclusion: We developed and validated a novel learning-based approach to generate CT images from routine MRIs using a dense cycle GAN model that effectively captures the relationship between CT and MR images. The proposed method can generate robust, high-quality sCT in minutes, and offers strong potential for supporting near real-time MRI-only treatment planning in the brain and pelvis.
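The gradient difference loss mentioned above penalizes mismatched edges between the sCT and the original CT. A minimal sketch, assuming PyTorch and a common finite-difference formulation of the GD loss (the paper's exact form and weighting may differ):

```python
import torch

def gradient_difference_loss(sct: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
    """Penalize mismatched image gradients along each spatial axis.

    Inputs are (N, C, D, H, W) patches; gradients are finite differences.
    """
    loss = torch.zeros((), dtype=sct.dtype)
    for dim in (2, 3, 4):                      # D, H, W axes
        g_s = sct.diff(dim=dim).abs()          # |gradient| of the sCT
        g_c = ct.diff(dim=dim).abs()           # |gradient| of the real CT
        loss = loss + ((g_s - g_c) ** 2).mean()
    return loss
```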
In this work, we benchmark the equation-of-motion coupled cluster with single and double excitations (EOM-CCSD) method combined with the polarizable continuum model (PCM) for the calculation of electronic excitation energies of solvated molecules. EOM-CCSD is one of the most accurate methods for computing one-electron excitation energies, and accounting for the solvent effect on this property is a key challenge. PCM is one of the most widely employed solvation models due to its adaptability to virtually any solute and its efficient implementation with density functional theory (DFT) methods. Our goal in this work is to evaluate the reliability of EOM-CCSD-PCM, especially compared to time-dependent DFT-PCM (TDDFT-PCM). Comparisons between calculated and experimental excitation energies show that EOM-CCSD-PCM consistently overestimates experimental results by 0.4-0.5 eV, which is larger than the expected EOM-CCSD error in vacuo. We attribute this decrease in accuracy to the approximate solvation model, and we therefore investigate a particularly important source of error: the lack of H-bonding interactions in PCM. We show that this issue can be addressed by computing an energy shift, Δ, from bare PCM to microsolvation + PCM at the DFT level. Our results show that such a shift is independent of the functional used, contrary to the absolute value of the excitation energy. Hence, we suggest an efficient protocol in which the EOM-CCSD-PCM transition energy is corrected by Δ(DFT), which consistently improves the agreement with experimental measurements.
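Written as a worked formula (our own notation, inferred from the abstract rather than taken from the paper), the suggested protocol is:

```latex
\Delta(\text{DFT})
  = \omega^{\text{TDDFT}}_{\text{micro+PCM}}
  - \omega^{\text{TDDFT}}_{\text{PCM}},
\qquad
\omega^{\text{corrected}}
  = \omega^{\text{EOM-CCSD-PCM}} + \Delta(\text{DFT})
```

where ω denotes a vertical excitation energy. Because Δ is found to be nearly functional-independent, the two inexpensive TDDFT calculations correct the missing H-bonding interactions while the costly coupled-cluster step is run only once, on the bare-PCM system.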
Purpose: Treatment planning systems (TPSs) from different vendors can involve different implementations of Monte Carlo dose calculation (MCDC) algorithms for pencil beam scanning (PBS) proton therapy. There are currently no guidelines for validating non-water materials in TPSs. Furthermore, PBS-specific parameters can vary by 1-2 orders of magnitude among different treatment delivery systems (TDSs). This paper proposes a standardized framework on the use of commissioning data and steps to validate TDS-specific parameters and TPS-specific heterogeneity modeling, to potentially reduce these uncertainties.

Methods: A standardized commissioning framework was developed to commission the MCDC algorithms of RayStation 8A and Eclipse AcurosPT v13.7.20 using water and non-water materials. Measurements included Bragg peak depth-dose curves, lateral spot profiles, and scanning field outputs for a Varian ProBeam system. The phase-space parameters were obtained from in-air measurements, and the number of protons per MU from output measurements of 10 × 10 cm² square fields at a 2 cm depth. Spot profiles and various PBS field measurements at additional depths were used to validate the TPSs. Human tissues in the TPS, Gammex phantom materials, and artificial materials were used for the TPS benchmark and validation.

Results: The maximum differences in the in-air phase-space parameters (spot sigma and divergence) between the two MCDC algorithms were below 4.5 µm and 0.26 mrad, respectively. Comparing TPS predictions to measurements at depth, both MC algorithms predicted the spot sigma within 0.5 mm, the resolution of the measurement device. Beam Configuration in AcurosPT was found to underestimate the number of protons per MU by ~2.5% and required user adjustment to match measured data, while RayStation was within 1% of measurements using the Auto model. A solid water phantom was used to validate the range accuracy of non-water materials within 1% in AcurosPT.

Conclusions: The proposed standardized commissioning framework can detect potential issues during PBS TPS MCDC commissioning, potentially shortening commissioning time and improving dosimetric accuracy. A secondary MCDC can be used to identify the root sources of disagreement between the primary MCDC and measurement.
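As a minimal sketch of the kind of output validation described above, the following compares a TPS-predicted number of protons per MU against measurement; the 1% default tolerance and the example numbers are illustrative assumptions, not published guideline values:

```python
def output_deviation(tps_protons_per_mu: float, measured_protons_per_mu: float) -> float:
    """Signed percent deviation of the TPS prediction from measurement."""
    return 100.0 * (tps_protons_per_mu - measured_protons_per_mu) / measured_protons_per_mu

def within_tolerance(tps: float, measured: float, tol_pct: float = 1.0) -> bool:
    """Flag beam models that need user adjustment (e.g., the ~2.5% underestimate above)."""
    return abs(output_deviation(tps, measured)) <= tol_pct

print(within_tolerance(tps=0.975e9, measured=1.0e9))  # ~2.5% low -> False
```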
Purpose: Dual-energy CT (DECT) expands the applications of CT imaging through its capability to decompose CT images into material images. However, decomposition via direct matrix inversion leads to large noise amplification and limits the quantitative use of DECT. The authors' group has previously developed a noise suppression algorithm via penalized weighted least-squares optimization with edge-preservation regularization (PWLS-EPR). In this paper, the authors improve the method's performance using the same penalized weighted least-squares framework but with similarity-based regularization (PWLS-SBR), which substantially enhances the quality of the decomposed images by retaining a more uniform noise power spectrum (NPS).

Methods: The design of PWLS-SBR is based on the fact that averaging pixels of similar materials yields a low-noise image. For each pixel, the authors calculate the similarity to other pixels in its neighborhood by comparing CT values. Using an empirical Gaussian model, the authors assign a high/low similarity value to a neighboring pixel if its CT value is close to/far from the CT value of the pixel of interest. These similarity values are organized in matrix form, such that multiplication of the similarity matrix with the image vector reduces image noise. The similarity matrices are calculated on both the high- and low-energy CT images and averaged. In PWLS-SBR, the authors include a regularization term that minimizes the L2 norm of the difference between the images without and with noise suppression via similarity matrix multiplication. By using all pixel information of the initial CT images rather than only pixels lying on or near edges, PWLS-SBR is superior to the previously developed PWLS-EPR, as supported by comparison studies on phantoms and a head-and-neck patient.

Results: On the line-pair slice of the Catphan 600 phantom, PWLS-SBR outperforms PWLS-EPR and retains a spatial resolution of 8 lp/cm, comparable to the original CT images, even at a 90% reduction in noise standard deviation (STD). Similar spatial resolution performance is observed on an anthropomorphic head phantom. In addition, PWLS-SBR results show substantially improved image quality due to preservation of the image NPS. On the Catphan 600 phantom, the NPS using PWLS-SBR has a correlation of 93% with that of direct matrix inversion, while the correlation drops to −52% for PWLS-EPR. Electron density measurement studies indicate high accuracy of PWLS-SBR: on seven different materials, the electron densities calculated from the material images decomposed with PWLS-SBR have a root-mean-square error (RMSE) of 1.20%, versus 2.21% for PWLS-EPR. In the head-and-neck patient study, PWLS-SBR reduces noise STD by a factor of 3 on material images with image quality comparable to the CT images, whereas fine structures are lost in the PWLS-EPR result. Additionally, PWLS-SBR better preserves low contrast in the tissue image.

Conclusions: The authors propose improvements to the regularization term of an optimization framework for noise suppression in DECT material decomposition.
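A minimal NumPy sketch of the similarity-matrix construction described above; the neighborhood radius and Gaussian width h are illustrative choices, and the dense matrix is only practical for small patches:

```python
import numpy as np

def similarity_matrix(ct_high: np.ndarray, ct_low: np.ndarray,
                      radius: int = 2, h: float = 30.0) -> np.ndarray:
    """Row-stochastic similarity matrix S for a small 2D patch.

    Multiplying S with a flattened image averages each pixel with
    neighbors of similar CT value, which suppresses noise.
    """
    rows, cols = ct_high.shape
    n = rows * cols
    S = np.zeros((n, n))
    for i in range(rows):
        for j in range(cols):
            p = i * cols + j
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        q = ii * cols + jj
                        # Gaussian weight from CT-value proximity,
                        # averaged over the high- and low-energy images.
                        w_hi = np.exp(-((ct_high[i, j] - ct_high[ii, jj]) / h) ** 2)
                        w_lo = np.exp(-((ct_low[i, j] - ct_low[ii, jj]) / h) ** 2)
                        S[p, q] = 0.5 * (w_hi + w_lo)
    return S / S.sum(axis=1, keepdims=True)   # normalize rows to preserve the mean

patch_hi = np.random.normal(0.0, 20.0, (8, 8))   # toy HU patches
patch_lo = np.random.normal(0.0, 20.0, (8, 8))
S = similarity_matrix(patch_hi, patch_lo)
denoised = (S @ patch_hi.ravel()).reshape(8, 8)
```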
Target delineation for radiation therapy treatment planning often benefits from magnetic resonance imaging (MRI) in addition to x-ray computed tomography (CT) due to MRI's superior soft tissue contrast. MRI-based treatment planning could reduce systematic MR-CT co-registration errors, medical cost, and radiation exposure, and simplify the clinical workflow. However, MRI-only treatment planning is not widely used to date because treatment planning systems rely on the electron density information provided by CT to calculate dose. Additionally, air and bone regions are difficult to separate given their similar intensities in MR imaging. The purpose of this work is to develop a learning-based method to generate a patient-specific synthetic CT (sCT) from a routine anatomical MRI for use in MRI-only radiotherapy treatment planning. An auto-context model with patch-based anatomical features was integrated into a classification random forest to generate and improve semantic information. The semantic information, along with the anatomical features, was then used to train a series of regression random forests based on the auto-context model. After training, the sCT of a new MRI can be generated by feeding anatomical features extracted from the MRI into the trained classification and regression random forests. The proposed algorithm was evaluated using 14 patient datasets with T1-weighted MR and corresponding CT images of the brain. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) were 57.45 ± 8.45 HU, 28.33 ± 1.68 dB, and 0.97 ± 0.01, respectively. We also compared dose maps calculated on the sCT with those calculated on the original CT, using the same plan parameters. The average DVH differences among all patients were less than 0.2 Gy for PTVs and less than 0.02 Gy for OARs. The proposed sCT generation method allows for dose calculation based on MR imaging alone, and may be a useful tool for MRI-based radiation treatment planning.
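A minimal scikit-learn sketch of the auto-context cascade described above; the feature arrays, labels, and number of iterations are toy assumptions standing in for the patch-based anatomical features of the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 27))           # toy patch-based features per voxel
tissue = rng.integers(0, 3, size=1000)    # toy labels: air / soft tissue / bone
hu = rng.normal(size=1000)                # toy target CT numbers (HU)

# Stage 0: a classification forest supplies semantic (tissue-probability)
# features for every voxel.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, tissue)
context = clf.predict_proba(X)

# Auto-context iterations: each regression forest sees the anatomical
# features plus the context produced by the previous stage.
for _ in range(2):
    reg = RandomForestRegressor(n_estimators=50, random_state=0)
    reg.fit(np.hstack([X, context]), hu)
    context = reg.predict(np.hstack([X, context])).reshape(-1, 1)

sct_values = context.ravel()              # final per-voxel sCT intensities
```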
Purpose: For pencil-beam scanning (PBS) proton therapy systems, the in-air non-Gaussian halo can significantly impact output at small field sizes and low energies. Since the low-intensity tail of the spot profile (halo) is not necessarily modeled in treatment planning systems (TPSs), this can potentially lead to significant differences in the patient dose distribution. In this work, we report such an impact for a Varian ProBeam system.

Methods: We use a pair magnification technique to measure two-dimensional (2D) spot profiles of protons from 70 to 242 MeV at the treatment isocenter and 30 cm upstream of the isocenter. Measurements are made with both Gafchromic film and a scintillator detector coupled to a CCD camera (IBA Lynx). Spot profiles are measured down to 0.01% of their maximum intensity. Field size factors (FSFs) are compared among calculations using the measured 2D profiles, calculations using a clinical treatment planning algorithm (RayStation 8A clinical Monte Carlo), and measurements with a CC04 small-volume ion chamber. FSFs were measured for square fields at proton energies ranging from 70 to 242 MeV.

Results: All film and Lynx measurements agree within 1 mm at the full width at half maximum of the beam intensity. The measured radial spot profiles disagree with the simple Gaussian approximations used for modeling in the TPS. FSF measurements show the magnitude of disagreement between beam output in reality and in a TPS that does not model the halo. We found that the clinical TPS overestimated output by as much as 6% for small field sizes of 2 cm at the lowest energy of 70 MeV, while the film and Lynx measurements agreed within 4% and 1%, respectively, for this FSF.

Conclusions: If the in-air halo for low-energy proton beams is not fully modeled by the TPS, this could potentially lead to under-dosing of small, shallow treatment volumes in PBS treatment plans.
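As a sketch of why an unmodeled halo shifts field size factors, the following toy calculation (assuming NumPy, a double-Gaussian spot model, and an illustrative 0.5 cm spot spacing, none of which are the paper's measured values) sums spot contributions at the field center and normalizes to a 10 × 10 cm² reference field:

```python
import numpy as np

def central_output(profile, field_cm: float, spacing_cm: float = 0.5) -> float:
    """Sum spot contributions at the center of a square field of spots.

    `profile(r_cm)` returns the radial spot intensity, halo included.
    """
    half = field_cm / 2.0
    positions = np.arange(-half, half + 1e-9, spacing_cm)
    return sum(profile(np.hypot(x, y)) for x in positions for y in positions)

def fsf(profile, field_cm: float, ref_cm: float = 10.0) -> float:
    """Field size factor: central output relative to the reference field."""
    return central_output(profile, field_cm) / central_output(profile, ref_cm)

core = lambda r: np.exp(-r**2 / (2 * 0.5**2))          # Gaussian core
halo = lambda r: 0.02 * np.exp(-r**2 / (2 * 3.0**2))   # low, wide halo tail

# Ignoring the halo (core only) inflates the small-field FSF, i.e. a TPS
# without halo modeling overestimates output for small fields.
print(fsf(core, 2.0), fsf(lambda r: core(r) + halo(r), 2.0))
```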
Purpose: Radiation dose to specific cardiac substructures, such as the atria and ventricles, has been linked to post-treatment toxicity and has been shown to be more predictive of these toxicities than dose to the whole heart. A deep learning-based algorithm for automatic generation of these contours is proposed to aid in retrospective or prospective dosimetric studies, to better understand the relationship between radiation dose and toxicities.

Methods: The proposed method uses a mask-scoring regional convolutional neural network (RCNN) consisting of five major subnetworks: backbone, region proposal network (RPN), RCNN head, mask head, and mask-scoring head. Multiscale feature maps are learned from computed tomography (CT) images via the backbone network. The RPN uses these feature maps to detect the location and region of interest (ROI) of each substructure, and the final three subnetworks work in series to extract structural information from these ROIs. The network is trained using 55 patient CT datasets, 22 of which are contrast scans. Threefold cross-validation (CV) is used for evaluation on 45 datasets, and a separate cohort of 10 patients is used for holdout evaluation. The proposed method is compared to a 3D UNet.

Results: The proposed method produces contours that are qualitatively similar to the ground truth contours. Quantitatively, it achieved average Dice similarity coefficients (DSCs) for the whole heart, chambers, great vessels, coronary arteries, and valves of the heart of 0.96, 0.94, 0.93, 0.66, and 0.77, respectively, outperforming the 3D UNet, which achieved DSCs of 0.92, 0.87, 0.88, 0.48, and 0.59 for the corresponding substructure groups. Mean surface distances (MSDs) between substructures segmented by the proposed method and the ground truth were <2 mm except for the left anterior descending coronary artery and the mitral and tricuspid valves, and were <5 mm for all substructures. When dividing the results into non-contrast and contrast datasets, the model performed statistically significantly better in terms of DSC, MSD, centroid mean distance (CMD), and volume difference for the chambers and whole heart with contrast. Notably, the presence of contrast did not statistically significantly affect coronary artery segmentation DSC or MSD. After network training, all substructures and the whole heart can be segmented on new datasets in less than 5 s.

Conclusions: A deep learning network was trained for automatic delineation of cardiac substructures based on CT alone. The proposed method can be used as a tool to investigate the relationship between cardiac substructure dose and treatment toxicities.
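For reference, the overlap metric reported above is the Dice similarity coefficient; a minimal NumPy sketch for binary segmentation masks:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2 * |A & B| / (|A| + |B|) for boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```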