The goal of this study is to demonstrate the feasibility of a novel fully convolutional volumetric dose prediction neural network (DoseNet) and test its performance on a cohort of prostate stereotactic body radiotherapy (SBRT) patients. DoseNet is proposed as a superior alternative to U-Net and fully connected distance-map-based neural networks for non-coplanar SBRT prostate dose prediction. DoseNet uses 3D convolutional downsampling with corresponding 3D deconvolutional upsampling to conserve memory while simultaneously increasing the receptive field of the network. DoseNet was implemented on 2 Nvidia 1080 Ti graphics processing units and uses a three-phase learning protocol to help achieve convergence and improve generalization. DoseNet was trained, validated, and tested with 151 patients following Kaggle competition rules. The dosimetric quality of DoseNet was evaluated by comparing the predicted dose distribution with the clinically approved delivered dose distribution in terms of conformity index, heterogeneity index, and various clinically relevant dosimetric parameters. The results indicate that the DoseNet algorithm is a superior alternative to U-Net and fully connected methods for prostate SBRT patients. DoseNet required ~50.1 h to train and ~0.83 s to make a prediction on a 128 × 128 × 64 voxel image. In conclusion, DoseNet is capable of making accurate volumetric dose predictions for non-coplanar SBRT prostate patients while preserving computational efficiency.
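The memory argument in the abstract comes down to shape bookkeeping: each strided 3D convolution halves the spatial dimensions (shrinking activation memory while growing the receptive field), and each matching transposed convolution doubles them back. A minimal sketch of that arithmetic, assuming illustrative values (three stages, kernel 3, stride 2, padding 1) that the abstract does not specify:

```python
def conv3d_out_shape(shape, kernel=3, stride=2, pad=1):
    """Spatial output shape of a strided 3D convolution."""
    return tuple((s + 2 * pad - kernel) // stride + 1 for s in shape)

def deconv3d_out_shape(shape, kernel=3, stride=2, pad=1, out_pad=1):
    """Spatial output shape of the matching 3D transposed convolution."""
    return tuple((s - 1) * stride - 2 * pad + kernel + out_pad for s in shape)

# A 128 x 128 x 64 volume, matching the prediction size quoted above.
shape = (128, 128, 64)
encoder = [shape]
for _ in range(3):                 # downsampling stages (count is illustrative)
    encoder.append(conv3d_out_shape(encoder[-1]))

decoder = [encoder[-1]]
for _ in range(3):                 # matching upsampling stages
    decoder.append(deconv3d_out_shape(decoder[-1]))
# Each encoder stage halves every spatial dimension; each decoder
# stage doubles it, restoring the original 128 x 128 x 64 grid.
```

With these parameters the bottleneck activation is 16 × 16 × 8, a 512-fold reduction in voxels per channel relative to the input.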
Convolutional neural networks (CNNs) with transfer learning can predict IMRT QA passing rates by automatically deriving features from the fluence maps without human expert supervision. Predictions from CNNs are comparable to those of a system carefully designed by physicist experts.
The adoption of enterprise digital imaging, along with the development of quantitative imaging methods and the re-emergence of statistical learning, has opened the opportunity for more personalized cancer treatments through transformative data science research. In the last 5 years, accumulating evidence has indicated that noninvasive advanced imaging analytics (i.e., radiomics) can reveal key components of tumor phenotype for multiple lesions at multiple time points over the course of treatment. Many groups using homegrown software have extracted engineered and deep quantitative features on 3-dimensional medical images for better spatial and longitudinal understanding of tumor biology and for the prediction of diverse outcomes. These developments could augment patient stratification and prognostication, buttressing emerging targeted therapeutic approaches. Unfortunately, the rapid growth in popularity of this immature scientific discipline has resulted in many early publications that miss key information or use underpowered patient data sets, without production of generalizable results. Quantitative imaging research is complex, and key principles should be followed to realize its full potential. The fields of quantitative imaging and radiomics in particular require a renewed focus on optimal study design and reporting practices, standardization, interpretability, data sharing, and clinical trials. Standardization of image acquisition, feature calculation, and statistical analysis (i.e., machine learning) are required for the field to move forward. A new data-sharing paradigm enacted among open and diverse participants (medical institutions, vendors and associations) should be embraced for faster development and comprehensive clinical validation of imaging biomarkers. 
In this review and critique of the field, we propose working principles and fundamental changes to the current scientific approach, with the goal of high-impact research and development of actionable prediction models that will yield more meaningful applications of precision cancer medicine.
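The "engineered quantitative features" mentioned above are typically simple statistics computed over a segmented lesion. A toy sketch of a few first-order radiomic features on a 3D array; the function name and feature set are illustrative assumptions (standardized radiomics toolkits compute many more, under strictly defined conventions):

```python
import numpy as np

def first_order_features(image, mask, bins=32):
    """Example first-order radiomic features over a segmented region.

    image : 3D intensity array; mask : boolean array marking the lesion.
    Illustrative only -- real pipelines follow standardized feature
    definitions so results are comparable across institutions.
    """
    voxels = image[mask]
    hist, _ = np.histogram(voxels, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before the log
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "entropy": float(-(p * np.log2(p)).sum()),  # intensity histogram entropy
    }

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16, 8))          # stand-in for a CT sub-volume
msk = np.zeros_like(img, dtype=bool)
msk[4:12, 4:12, 2:6] = True                 # stand-in lesion segmentation
feats = first_order_features(img, msk)
```

The standardization concern raised in the review is visible even here: the entropy value depends on the bin count, so two groups using different `bins` settings would report different "entropy" features for the same lesion.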
Deep learning algorithms have recently been developed that use patient anatomy and raw imaging information to predict radiation dose, as a means to increase treatment planning efficiency and improve radiotherapy plan quality. Current state-of-the-art techniques rely on convolutional neural networks (CNNs) that use pixel-to-pixel loss to update network parameters. However, stereotactic body radiotherapy (SBRT) dose is often heterogeneous, making it difficult to model using pixel-level loss. Generative adversarial networks (GANs) employ adversarial learning that incorporates image-level loss and is better suited to learning from heterogeneous labels. However, GANs are difficult to train and rely on compromised architectures to facilitate convergence. This study proposes an attention-gated generative adversarial network (DoseGAN) to improve learning, increase model complexity, and reduce network redundancy by focusing on relevant anatomy. DoseGAN was compared to alternative state-of-the-art dose prediction algorithms using heterogeneity index, conformity index, and various dosimetric parameters. All algorithms were trained, validated, and tested using 141 prostate SBRT patients. DoseGAN predicted more realistic volumetric dosimetry than all other algorithms and achieved statistically significant improvements over all alternatives for the V100 and V120 of the PTV, the V60 of the rectum, and the heterogeneity index.

Advanced treatment techniques such as intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) have become the standard of care for many treatment sites 1,2. Creating clinically acceptable treatment plans with these advanced techniques requires extensive domain expertise and is exceedingly time consuming 3,4. To reduce the burden on clinical resources, the development of automated treatment planning technologies has accelerated in recent years 5-10.
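The "attention-gated" mechanism that lets the network focus on relevant anatomy is not detailed in the abstract; a minimal numpy sketch of one common formulation, the additive attention gate used in attention-gated U-Nets, with all weights and shapes as illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate (attention U-Net style; illustrative).

    x      : skip-connection features, shape (voxels, channels)
    g      : gating features from a coarser decoder level, same shape here
    Wx, Wg : linear projections to an intermediate dimension
    psi    : projection from the intermediate dimension to one scalar per voxel
    Returns x scaled by a per-voxel coefficient in (0, 1), which suppresses
    features from anatomy the gate deems irrelevant.
    """
    q = np.maximum(x @ Wx + g @ Wg, 0.0)   # additive attention with ReLU
    alpha = sigmoid(q @ psi)               # (voxels, 1) attention coefficients
    return alpha * x, alpha

rng = np.random.default_rng(1)
x = rng.normal(size=(10, 4))               # toy skip-connection features
g = rng.normal(size=(10, 4))               # toy gating signal
Wx, Wg = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
psi = rng.normal(size=(8, 1))
gated, alpha = attention_gate(x, g, Wx, Wg, psi)
```

In a real network the projections are learned 1 × 1 × 1 convolutions and the gate sits on each skip connection; the redundancy reduction claimed above comes from alpha driving irrelevant voxels toward zero.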
Historically, automated treatment planning technologies relied on selecting handcrafted features, such as spatial relationships between planning volumes, overlapping volume histograms, planning volume shapes, planning volume and field intersections, field shapes, planning volume depths, and distance-to-target histograms (DTHs) 11-14. These techniques rely on machine learning algorithms such as gradient boosting, random forests, and support vector machines to find strong correlations among groups of weakly correlated predictive features 6,15-17. Such techniques achieve good performance on inherently structured data but tend to struggle when the problem does not reduce easily to a structured format. Because of this, deep learning approaches have emerged that predict dose using fully connected layers 18. However, fully connected layers tend not to generalize well on high-dimensional data. Convolutional neural networks (CNNs) have emerged to solve many image processing tasks 4,6,19-23. Recently, encoder-decoder CNNs have been used to predict radiation dose from arbitrary patient anatomy. These method...
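Of the handcrafted features listed above, the distance-to-target histogram is easy to make concrete: for every organ-at-risk voxel, compute the distance to the nearest target voxel, then histogram those distances. A brute-force 2D sketch, with function name and grid entirely illustrative (real pipelines use a distance transform on full 3D volumes):

```python
import numpy as np

def distance_to_target_histogram(oar_mask, target_mask, spacing=1.0, bins=5):
    """Distance-to-target histogram (DTH), a classic handcrafted feature.

    For each organ-at-risk (OAR) voxel, find the Euclidean distance to the
    nearest target voxel, then histogram the distances.  O(n*m) brute force,
    fine for a toy grid.
    """
    oar = np.argwhere(oar_mask) * spacing       # OAR voxel coordinates
    tgt = np.argwhere(target_mask) * spacing    # target voxel coordinates
    # pairwise distances, then the minimum over target voxels per OAR voxel
    d = np.sqrt(((oar[:, None, :] - tgt[None, :, :]) ** 2).sum(axis=-1))
    nearest = d.min(axis=1)
    return np.histogram(nearest, bins=bins)

target = np.zeros((16, 16), dtype=bool)
target[6:10, 6:10] = True                       # toy PTV
oar = np.zeros((16, 16), dtype=bool)
oar[0:4, 0:4] = True                            # toy organ at risk
hist, edges = distance_to_target_histogram(oar, target)
```

A model like gradient boosting would then take the (normalized) histogram bins as input features, which is exactly the kind of structured reduction the paragraph says such methods depend on.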
The purpose of this work is to develop a deep unsupervised learning strategy for cone-beam CT (CBCT) to CT deformable image registration (DIR). The technique uses a deep convolutional inverse graphics network (DCIGN)-based DIR algorithm implemented on 2 Nvidia 1080 Ti graphics processing units. The model comprises an encoding and a decoding stage. The fully convolutional encoding stage learns hierarchical features and simultaneously forms an information bottleneck, while the decoding stage restores the original dimensionality of the input image. Activations from the encoding stage are used as the input channels to a sparse DIR algorithm. DCIGN was trained using a distributive learning-based convolutional neural network architecture, with 285 head and neck patients used to train, validate, and test the algorithm. The accuracy of the DCIGN algorithm was evaluated on 100 synthetic cases and 12 hold-out test patient cases. The results indicate that DCIGN performed better than rigid registration, intensity-corrected Demons, and landmark-guided deformable image registration for all evaluation metrics. DCIGN required ~14 h to train and ~3.5 s to make a prediction on a 512 × 512 × 120 voxel image. In conclusion, DCIGN maintains high accuracy in the presence of CBCT noise contamination while preserving high computational efficiency.
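Whatever features drive it, the end product of any DIR algorithm like the one described is a dense displacement vector field (DVF) that warps the moving image onto the fixed image grid. A minimal 2D numpy sketch of applying such a field with backward mapping and nearest-neighbour sampling (the function and the toy field are illustrative, not the paper's method):

```python
import numpy as np

def warp_2d(image, dvf):
    """Apply a dense displacement vector field to an image.

    dvf[..., 0] and dvf[..., 1] hold row and column displacements in
    voxels.  Backward mapping: each output pixel samples the input at
    its own location plus the displacement, with edge clamping.
    """
    rows, cols = np.indices(image.shape)
    src_r = np.clip(np.rint(rows + dvf[..., 0]).astype(int), 0, image.shape[0] - 1)
    src_c = np.clip(np.rint(cols + dvf[..., 1]).astype(int), 0, image.shape[1] - 1)
    return image[src_r, src_c]

img = np.zeros((8, 8))
img[2, 2] = 1.0                       # a single bright voxel
dvf = np.zeros((8, 8, 2))
dvf[..., 0] = -1.0                    # sample one row up: content shifts down
warped = warp_2d(img, dvf)
```

Clinical implementations use trilinear (or higher-order) interpolation rather than nearest-neighbour, but the data flow, predicting a DVF and then resampling the moving image through it, is the same.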