Purpose Previously, the evaluation of xerostomia relied on subjective grading systems rather than on accurate measurement of the reduction in saliva amount. Our aim was to quantify acute xerostomia by the reduction in saliva amount, and to apply radiomics, dose-volume histogram (DVH) criteria, and clinical features to predict saliva reduction using machine learning techniques. Material and methods Computed tomography (CT) of the parotid glands, DVH data, and clinical data of 52 patients were collected to extract radiomics, DVH criteria, and clinical features, respectively. First, the radiomics, DVH criteria, and clinical features were divided into three groups for feature selection, in order to alleviate the masking effect caused by the differing numbers of features in each group. Second, the top features of the three groups were combined into an integrated feature set, and feature selection was performed again on the integrated features. In this study, feature selection used a combination of eXtreme Gradient Boosting (XGBoost) and SHapley Additive exPlanations (SHAP) to alleviate multicollinearity. Finally, six machine learning techniques were used to predict saliva reduction. For comparison, the top radiomics features alone were modeled using the same machine learning techniques. Results Seventeen integrated features (10 radiomics, 4 clinical, 3 DVH criteria) were selected to predict saliva reduction, with a mean squared error (MSE) of 0.6994 and an R2 score of 0.9815. The top 17 and top 10 selected radiomics features predicted saliva reduction with MSEs of 0.7376 and 0.7519, and R2 scores of 0.9805 and 0.9801, respectively. Conclusion With the same number of features, the integrated features (radiomics + DVH criteria + clinical) performed better than radiomics features alone. The important DVH criteria and clinical features mainly included white blood cells (WBC), parotid_glands_Dmax, age, parotid_glands_V15, hemoglobin (Hb), BMI, and parotid_glands_V45.
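The abstract above evaluates its regression models with MSE and the R2 score. As a minimal illustrative sketch (not the authors' code; the toy values below are invented for demonstration), the two metrics can be computed as:

```python
# Sketch of the two regression metrics reported in the abstract:
# mean squared error (MSE) and the coefficient of determination (R^2).
# Pure Python; toy data only, not values from the study.

def mse(y_true, y_pred):
    """Mean squared error: average of squared residuals."""
    n = len(y_true)
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n

def r2_score(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot, where SS_tot is variance around the mean."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical targets and predictions for illustration.
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
print(mse(y_true, y_pred))                # → 0.375
print(round(r2_score(y_true, y_pred), 4)) # → 0.9486
```

An R2 near 1 (as the reported 0.9815) means the model explains almost all of the variance in saliva reduction across patients; MSE is in the squared units of the target.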
Purpose To create a network that fully utilizes multi‐sequence MRI and compares favorably with manual human contouring. Methods We retrospectively collected 89 MRI studies of the pelvic cavity from patients with prostate cancer and cervical cancer. The dataset contained 89 samples from 87 patients, of which 84 samples were valid. MRI was performed with T1‐weighted (T1), T2‐weighted (T2), and enhanced Dixon T1‐weighted (T1DIXONC) sequences. There were two cohorts: the training cohort contained 55 samples and the testing cohort contained 29 samples. The MRI images in the training cohort contained contouring data from radiotherapist α; the MRI images in the testing cohort contained contouring data from radiotherapist α and from another radiotherapist, radiotherapist β. The training cohort was used to optimize the convolutional neural networks, which incorporated an attention mechanism through the proposed activation module and fused multiple MRI sequences through the proposed blended module, to perform autodelineation. The testing cohort was used to assess the networks' autodelineation performance. The contoured organs at risk (OARs) were the anal canal, bladder, rectum, femoral head (L), and femoral head (R). Results We compared our proposed network with UNet and FuseUNet on our dataset. With T1 as the main sequence, we input three sequences to segment five organs and evaluated the results using four metrics: the DSC (Dice similarity coefficient), the JSC (Jaccard similarity coefficient), the ASD (average surface distance), and the 95% HD (robust 95th‐percentile Hausdorff distance). The proposed network achieved improved results compared with the baselines on all metrics. The DSCs were 0.834±0.029, 0.818±0.037, and 0.808±0.050 for our proposed network, FuseUNet, and UNet, respectively. The 95% HDs were 7.256±2.748 mm, 8.404±3.297 mm, and 8.951±4.798 mm for our proposed network, FuseUNet, and UNet, respectively.
Our proposed network also had superior performance on the JSC and ASD metrics. Conclusion Our proposed activation module and blended module significantly improved the performance of FuseUNet for multi‐sequence MRI segmentation. Our proposed network integrated multiple MRI sequences efficiently and autosegmented OARs rapidly and accurately. We also found that three‐sequence fusion (T1‐T1DIXONC‐T2) was superior to two‐sequence fusion (T1‐T2 and T1‐T1DIXONC, respectively). We infer that the more MRI sequences are fused, the better the automatic segmentation results.
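The overlap metrics reported above (DSC and JSC) are simple functions of two binary masks. As a minimal sketch under the usual definitions (not the authors' evaluation code; the tiny masks below are invented), they can be computed as:

```python
# Sketch of the two overlap metrics from the abstract: Dice similarity
# coefficient (DSC) and Jaccard similarity coefficient (JSC).
# Masks are flattened lists of 0/1 voxels; toy data for illustration only.

def dice(a, b):
    """DSC = 2|A∩B| / (|A| + |B|); 1.0 by convention when both masks are empty."""
    inter = sum(x & y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

def jaccard(a, b):
    """JSC = |A∩B| / |A∪B| (intersection over union)."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 1.0

# Two hypothetical 5-voxel masks: prediction a vs. reference contour b.
a = [1, 1, 0, 0, 1]
b = [1, 0, 0, 1, 1]
print(round(dice(a, b), 3))     # → 0.667
print(round(jaccard(a, b), 3))  # → 0.5
```

DSC and JSC are monotonically related (DSC = 2·JSC / (1 + JSC)), which is why networks that rank better on one typically rank better on the other; the surface metrics ASD and 95% HD additionally require distance transforms over the mask boundaries.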