These data suggest that DDCNN can be used to segment the CTV and OARs accurately and efficiently, and that it is invariant to patients' body size, body shape, and age. DDCNN could improve the consistency of contouring and streamline radiotherapy workflows.
Purpose: To develop a method for predicting optimal dose distributions, given the planning image and segmented anatomy, by applying deep learning techniques to a database of previously optimized and approved intensity-modulated radiation therapy (IMRT) treatment plans. Methods: Eighty cases of early-stage nasopharyngeal cancer (NPC) were included in the study. Seventy cases were chosen randomly as the training set and the remaining ten as the test set. The inputs were the images with structures, with each target and organ at risk (OAR) assigned a unique label. The outputs were dose maps, including coarse dose maps and fine dose maps (FDM) converted by convolution. Two types of input images with structures were used in model building. The first type included the images (with associated structures) without manipulation. The second type modified the image gray levels with information from the radiation beam geometry. ResNet101 was chosen as the deep learning network for both. The accuracy of the predicted dose distributions was evaluated against the corresponding doses used in the clinic, using a global three-dimensional gamma analysis. Results: The models trained with the two different sets of input images and structures could both predict patient-specific dose distributions accurately. For the out-of-field dose distributions, the model trained with radiation geometry in the input performed better (dose difference, 4.7 ± 6.1% vs 5.5 ± 7.9%, P < 0.05). The mean gamma pass rates of the dose distributions predicted with the two types of input were comparable for most OARs (P > 0.05), except for the bilateral optic nerves and the optic chiasm. Conclusions: The proposed system with radiation geometry added to the input is a promising method to generate patient-specific dose distributions for radiotherapy.
It can be applied to obtain the dose distributions slice-by-slice for planning quality assurance and for guiding automated planning.
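The global three-dimensional gamma analysis used for evaluation above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion: a voxel passes when some nearby reference voxel brings the combined normalized error below 1. A minimal brute-force sketch, not the authors' implementation; the function name, the boundary handling, and the default 3%/3 mm tolerances are illustrative:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified global 3D gamma pass rate.

    dose_tol is a fraction of the global maximum reference dose (global gamma);
    dist_tol_mm is the distance-to-agreement criterion. Brute-force search,
    suitable only for small arrays.
    """
    norm = dose_tol * float(dose_ref.max())
    if norm == 0.0:
        raise ValueError("reference dose is all zero")
    # search radius in voxels along each axis
    radius = [int(np.ceil(dist_tol_mm / s)) for s in spacing_mm]
    shape = dose_ref.shape
    passed = 0
    for idx in np.ndindex(*shape):
        gamma_sq_min = np.inf
        for off in np.ndindex(*(2 * r + 1 for r in radius)):
            j = tuple(idx[a] + off[a] - radius[a] for a in range(3))
            if any(j[a] < 0 or j[a] >= shape[a] for a in range(3)):
                continue  # neighbor falls outside the grid
            dist_sq = sum(((off[a] - radius[a]) * spacing_mm[a]) ** 2 for a in range(3))
            if dist_sq > dist_tol_mm ** 2:
                continue  # outside the DTA search sphere
            dd = float(dose_eval[idx]) - float(dose_ref[j])
            gamma_sq = (dd / norm) ** 2 + dist_sq / dist_tol_mm ** 2
            gamma_sq_min = min(gamma_sq_min, gamma_sq)
        if gamma_sq_min <= 1.0:
            passed += 1
    return passed / dose_ref.size
```

Identical dose maps yield a pass rate of 1.0; a uniform gross over-dose yields 0.0.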
Background: Radiotherapy is one of the main treatment methods for nasopharyngeal carcinoma (NPC). It requires exact delineation of the nasopharynx gross tumor volume (GTVnx), the metastatic lymph node gross tumor volume (GTVnd), the clinical target volume (CTV), and organs at risk in the planning computed tomography images. However, this task is time-consuming and operator dependent. In the present study, we developed an end-to-end deep deconvolutional neural network (DDNN) for segmentation of these targets. Methods: The proposed DDNN is an end-to-end architecture enabling fast training and testing. It consists of two important components: an encoder network and a decoder network. The encoder network extracts the visual features of a medical image, and the decoder network recovers the original resolution by deploying deconvolution. A total of 230 patients diagnosed with NPC stage I or stage II were included in this study. Data from 184 patients were chosen randomly as a training set to adjust the parameters of DDNN, and the remaining 46 patients formed the test set to assess the performance of the model. The Dice similarity coefficient (DSC) was used to quantify the segmentation results for the GTVnx, GTVnd, and CTV. In addition, the performance of DDNN was compared with the VGG-16 model. Results: The proposed DDNN method outperformed VGG-16 in all segmentation tasks. The mean DSC values of DDNN were 80.9% for the GTVnx, 62.3% for the GTVnd, and 82.6% for the CTV, whereas VGG-16 obtained DSC values of 72.3%, 33.7%, and 73.7%, respectively. Conclusion: DDNN can be used to segment the GTVnx and CTV accurately. The accuracy of GTVnd segmentation was relatively low owing to the considerable differences in its shape, volume, and location among patients. The accuracy is expected to increase with more training data and the incorporation of MR images.
In conclusion, DDNN has the potential to improve the consistency of contouring and streamline radiotherapy workflows, but careful human review and a considerable amount of editing will be required.
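The Dice similarity coefficient reported in these segmentation studies is defined as 2|A ∩ B| / (|A| + |B|) for two binary masks A and B. A minimal sketch (the convention that two empty masks agree perfectly is an assumption):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement (assumption)
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A DSC of 1.0 indicates identical contours; 0.0 indicates no overlap.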
A model that accurately predicts toxicities may be used to support clinical decisions for personalized treatment planning. An automated xerostomia prediction model was developed using a 3-dimensional residual convolutional neural network and demonstrated promising performance. This novel model uses computed tomography planning images, 3-dimensional dose distributions, and contours as inputs and toxicity probability as output. Purpose: Xerostomia commonly occurs in patients who undergo head and neck radiation therapy and can seriously affect patients' quality of life. In this study, we developed a xerostomia prediction model from radiation treatment data using a 3-dimensional (3D) residual convolutional neural network (rCNN). The model can be used to guide radiation therapy to reduce toxicity. Methods and Materials: A total of 784 patients with head and neck squamous cell carcinoma enrolled in the Radiation Therapy Oncology Group 0522 clinical trial were included in this study. Late xerostomia was defined as xerostomia of grade ≥2 occurring at the 12th month of radiation therapy. The computed tomography (CT) planning images, 3D dose distributions, and contours of the parotid and submandibular glands were used as 3D rCNN inputs. Comparative experiments were performed for the 3D rCNN model without 1 of the 3 inputs and for a logistic regression model. Accuracy, sensitivity, specificity, F-score, and area under the receiver operating characteristic curve were evaluated. Results: The proposed model achieved promising prediction results. The performance metrics for the 3D rCNN model with contours, CT images, and radiation therapy dose; 3D rCNN without contours; 3D rCNN without CT images; 3D rCNN without the dose; logistic regression with the dose and clinical parameters; and logistic regression without clinical parameters were as follows:
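The evaluation metrics named above (accuracy, sensitivity, specificity, F-score) all derive from the binary confusion matrix. A minimal sketch; the function and key names are illustrative, not tied to the study's code:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, and F-score from binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # recall / true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0  # true negative rate
    precision = tp / (tp + fp) if tp + fp else 0.0
    f_score = (2 * precision * sensitivity / (precision + sensitivity)
               if precision + sensitivity else 0.0)
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": sensitivity,
            "specificity": specificity,
            "f_score": f_score}
```

The AUC additionally requires predicted probabilities rather than hard labels, so it is omitted from this sketch.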
Purpose: Manual delineation of organs at risk (OARs) in radiotherapy is both time-consuming and subjective. Automated and more accurate segmentation is of the utmost importance for clinical application. The purpose of this study was to further improve segmentation accuracy and efficiency with a novel network named Convolutional Neural Network (CNN) Cascades. Methods: CNN Cascades is a two-step, coarse-to-fine approach consisting of a Simple Region Detector (SRD) and a Fine Segmentation Unit (FSU). The SRD first uses a relatively shallow network to define the region of interest (ROI) where the organ is located; the FSU then takes the smaller ROI as input and adopts a deep network for fine segmentation. The imaging data (14,651 slices) of 100 head-and-neck patients with segmentations were used for this study. The performance was compared with that of state-of-the-art single CNNs in terms of accuracy, using the Dice similarity coefficient (DSC) and Hausdorff distance (HD) as metrics. Results: The proposed CNN Cascades outperformed the single CNNs in accuracy for each OAR. Averaged over all OARs, it was also the best, with a mean DSC of 0.90 (SRD: 0.86, FSU: 0.87, and U-Net: 0.85) and a mean HD of 3.0 mm (SRD: 4.0, FSU: 3.6, and U-Net: 4.4). Meanwhile, the CNN Cascades reduced the mean segmentation time per patient by 48% and 5% compared with the FSU and U-Net, respectively. Conclusions: The proposed two-step network demonstrated superior performance by reducing the input region. It can potentially serve as an effective segmentation method that provides accurate and consistent delineation with reduced clinician intervention for clinical applications, as well as for quality assurance in multi-center clinical trials.
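The Hausdorff distance used alongside the DSC above is the larger of the two directed maximum nearest-neighbor distances between contour point sets. A minimal brute-force sketch over point sets (names are illustrative; clinical implementations usually operate on surface voxels and often use a percentile variant):

```python
import numpy as np

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two non-empty point sets.

    Units follow the input coordinates (e.g., mm).
    """
    a = np.asarray(points_a, dtype=float)  # shape (n, dim)
    b = np.asarray(points_b, dtype=float)  # shape (m, dim)
    # pairwise Euclidean distances via broadcasting, shape (n, m)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # max over each set of its nearest-neighbor distance to the other set
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Unlike the DSC, the HD is sensitive to single outlying points, which is why the two metrics are usually reported together.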
Ensuring high-quality data for clinical trials in radiotherapy requires the generation of contours that comply with protocol definitions. The current workflow includes a manual review of the submitted contours, which is time-consuming and subjective. In this study, we developed an automated quality assurance (QA) system for lung cancer based on a segmentation model trained with deep active learning. Methods: The data included a gold atlas with 36 cases and 110 cases from the NRG Oncology/RTOG 1308 trial. The first 70 cases enrolled in RTOG 1308 formed the candidate set, and the remaining 40 cases were randomly assigned to validation and test sets (20 cases each). The organs at risk included the heart, esophagus, spinal cord, and lungs. A preliminary convolutional neural network segmentation model was trained on the gold standard atlas. To compensate for the limited training data, we selected high-quality images from the candidate set to add to the training set for fine-tuning the model with deep active learning. The trained robust segmentation models were then used for QA. Segmentation evaluation metrics derived from the validation set, including the Dice similarity coefficient and Hausdorff distance, were used to develop the criteria for QA decision making. The performance of the strategy was assessed on the test set.
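The QA decision criteria described above threshold validation-derived segmentation metrics to flag submitted contours for manual review. A minimal sketch of that decision step; the function name and the threshold values are illustrative placeholders, not the criteria derived in the study:

```python
def qa_decision(dsc, hd_mm, dsc_min=0.85, hd_max_mm=5.0):
    """Flag a submitted contour for manual review when it falls outside
    tolerance bands derived from a validation set.

    dsc_min and hd_max_mm are hypothetical thresholds for illustration;
    in practice each organ at risk would have its own values.
    """
    if dsc >= dsc_min and hd_mm <= hd_max_mm:
        return "pass"
    return "review"
```

In a full system, contours that agree closely with the model's prediction pass automatically, concentrating reviewer effort on the discrepant cases.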
Purpose: More and more automatic segmentation tools are being introduced into routine clinical practice. However, physicians need to spend a considerable amount of time examining the generated contours slice by slice, which greatly reduces the benefit of the tools' automaticity. To overcome this shortcoming, we developed an automatic quality assurance (QA) method for automatic segmentation using convolutional neural networks (CNNs). Materials and Methods: The study cohort comprised 680 patients with early-stage breast cancer who received whole breast radiation. The overall architecture of the automatic QA method for deep learning-based segmentation comprised two main parts: a segmentation CNN model and a QA network based on ResNet-101. The inputs were computed tomography images, segmentation probability maps, and uncertainty maps. Two kinds of Dice similarity coefficient (DSC) outputs were tested. One predicted the DSC quality level of each slice ([0.95, 1] for "good," [0.8, 0.95) for "medium," and [0, 0.8) for "bad" quality), and the other predicted the DSC value of each slice directly. The performance of the method in predicting the quality levels was evaluated with quantitative metrics: balanced accuracy, F score, and the area under the receiver operating characteristic curve (AUC). The mean absolute error (MAE) was used to evaluate the DSC value outputs. Results: The proposed methods involved two types of output, both of which achieved promising accuracy in predicting the quality level. For the good, medium, and bad quality level predictions, the balanced accuracy was 0.97, 0.94, and 0.89, respectively; the F score was 0.98, 0.91, and 0.81, respectively; and the AUC was 0.96, 0.93, and 0.88, respectively. For the DSC value prediction, the MAE was 0.06 ± 0.19. The prediction time was approximately 2 s per patient. Conclusions: Our method could predict the segmentation quality automatically.
It can provide useful information for physicians regarding further verification and revision of automatic contours. The integration of our method into current automatic segmentation pipelines can improve the efficiency of radiotherapy contouring.
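The first output type above maps each slice's DSC to one of three discrete quality bands. A minimal sketch of that binning; the cut points follow the intervals quoted in the abstract, and the handling of the shared boundary value 0.95 (assigned to "good") is an assumption:

```python
def dsc_quality_level(dsc):
    """Map a per-slice DSC value to a discrete quality band.

    Bands follow the study's intervals: [0.95, 1] good, [0.8, 0.95) medium,
    [0, 0.8) bad; boundary assignment at 0.95 and 0.8 is assumed.
    """
    if dsc >= 0.95:
        return "good"
    if dsc >= 0.8:
        return "medium"
    return "bad"
```

Slices flagged "medium" or "bad" are the natural candidates to surface first for physician review.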