One of the primary treatment options for head and neck cancer is (chemo)radiation. Accurate delineation of the tumor contour is crucial both for successful treatment and for the prediction of patient outcomes. With this paper we take part in the HECKTOR 2021 challenge and propose our methods for automatic tumor segmentation on PET and CT images of oropharyngeal cancer patients. To this end, we investigated different deep learning methods that highlight relevant image- and modality-related features to refine the contour of the primary tumor. More specifically, we tested a Co-learning method [1] and a 3D Skip Spatial and Channel Squeeze and Excitation Multi-Scale Attention method (Skip-scSE-M) on the challenge dataset. The best results achieved on the test set were a mean Dice Similarity Coefficient of 0.762 and a median 95th-percentile Hausdorff Distance (HD95) of 3.143.
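The segmentation results above are reported as a Dice Similarity Coefficient. As a minimal sketch (an illustrative reimplementation, not the challenge's official evaluation code), the Dice overlap between two binary masks can be computed with NumPy:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice Similarity Coefficient between two binary masks.

    pred, target: boolean (or {0, 1}) arrays of the same shape;
    eps guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2D example; the challenge itself uses 3D PET/CT volumes.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 foreground voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 voxels, overlap 4
print(round(dice_coefficient(a, b), 3))  # → 0.8
```

The same formula extends unchanged to 3D volumes, since the sums run over all array elements.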
Long-term survival of oropharyngeal squamous cell carcinoma (OPSCC) patients is quite poor. Accurate prediction of Progression-Free Survival (PFS) before treatment would make it feasible to identify high-risk patients and to intensify or de-intensify treatment for high- or low-risk patients accordingly. In this work, we propose a deep learning based pipeline for PFS prediction. The pipeline consists of three parts. First, a pyramid autoencoder extracts image features from both CT and PET scans. Second, a forward feature selection method removes redundant features from the extracted image features as well as the clinical features. Finally, all selected features are fed to a DeepSurv model for survival analysis, which outputs a PFS risk score for each individual patient. The whole pipeline was trained on 224 OPSCC patients. We achieved average C-indices of 0.7806 and 0.7967 on the independent validation set for Task 2 and Task 3, respectively; the C-indices on the test set were 0.6445 and 0.6373. This demonstrates that our approach has potential for PFS prediction and possibly for other survival endpoints.
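DeepSurv is trained by minimizing the negative Cox partial log-likelihood of the predicted risk scores. A minimal NumPy sketch of that loss (an illustrative reimplementation for intuition, not the authors' pipeline code) looks like:

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood, the loss DeepSurv optimizes.

    risk:  predicted log-risk scores, shape (n,)
    time:  observed follow-up times, shape (n,)
    event: 1 if progression/death was observed, 0 if censored, shape (n,)
    """
    risk, time, event = map(np.asarray, (risk, time, event))
    loss = 0.0
    for i in np.where(event == 1)[0]:
        at_risk = time >= time[i]  # patients still at risk at time t_i
        loss -= risk[i] - np.log(np.exp(risk[at_risk]).sum())
    return loss / max(event.sum(), 1)
```

In practice this loss is computed on mini-batches with a deep network producing `risk`; for numerical stability a log-sum-exp formulation is usually preferred over the direct `np.exp` shown here.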
Aim: The development and evaluation of deep learning (DL) and radiomics based models for recurrence-free survival (RFS) prediction in oropharyngeal squamous cell carcinoma (OPSCC) patients, based on clinical features, positron emission tomography (PET) and computed tomography (CT) scans, and Gross Tumor Volume (GTV) contours of primary tumors and pathological lymph nodes.
Methods: A DL auto-segmentation algorithm generated the GTV contours (Task 1) that were used for imaging biomarker (IBM) extraction and as input for the DL model. Multivariable Cox regression analysis was used to develop radiomics models based on clinical and IBM features. Clinical features with a significant correlation with the endpoint in a univariable analysis were selected. The most promising IBMs were selected by forward selection with 1000 bootstrap resamples in five-fold cross validation. To optimize the DL models, different combinations of clinical features, PET/CT imaging, GTV contours, the selected radiomics features, and the radiomics model predictions were used as input. The combination with the best average performance in five-fold cross validation was taken as the final input for the DL model. The final prediction in the test set was an ensemble average of the predictions from the five models for the different folds.
Results: The average C-indices in the five-fold cross validation of the radiomics model and the DL model were 0.7069 and 0.7575, respectively. In the test set, the radiomics and final DL models showed C-indices of 0.6683 and 0.6455, respectively.
Conclusion: The radiomics model for recurrence-free survival prediction based on clinical, GTV, and CT image features showed the best predictive performance in the test set, with a C-index of 0.6683.
B. Ma and Y. Li: these authors contributed equally.
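Both the radiomics and DL models above are scored with the concordance index (C-index). As a hedged sketch, Harrell's C-index counts, over all comparable patient pairs, how often the patient with the earlier event also received the higher predicted risk:

```python
def concordance_index(risk, time, event):
    """Harrell's C-index: fraction of comparable pairs ranked correctly.

    A pair (i, j) is comparable when time[i] < time[j] and patient i
    had an observed event; tied risk predictions count as half correct.
    """
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the reported values of 0.6455-0.6683 on the test set in context. Production code would use an established implementation (e.g. the `lifelines` package) rather than this O(n²) loop.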
Overlapping phenotypic features between Early Onset Ataxia (EOA) and Developmental Coordination Disorder (DCD) can complicate the clinical distinction of these disorders. Clinical rating scales are a common way to quantify movement disorders, but in children these scales also rely on the observer's assessment and interpretation. Despite the introduction of inertial measurement units for objective and more precise evaluation, special hardware is still required, restricting their widespread application. Gait video recordings of movement disorder patients are frequently captured in routine clinical settings, but there is presently no suitable quantitative analysis method for these recordings. Owing to advancements in computer vision technology, deep learning pose estimation techniques may soon be ready for convenient and low-cost clinical use. This study presents a framework based on 2D video recording in the coronal plane and pose estimation for the quantitative assessment of gait in movement disorders. To allow the calculation of distance-based features, we evaluated seven different methods for normalizing 2D skeleton keypoint data derived from deep neural network pose estimation applied to freehand video recordings of gait. In our experiments, 15 children (five EOA, five DCD, and five healthy controls) were asked to walk naturally while being videotaped by a single camera at 1280 × 720 resolution and 25 frames per second. The high prediction likelihood of the keypoint locations (mean = 0.889, standard deviation = 0.02) demonstrates the potential for distance-based features derived from routine video recordings to assist in the clinical evaluation of movement in EOA and DCD.
By comparing the mean absolute angle error and the mean variance of distance, the normalization methods using the Euclidean (2D) distance between the left shoulder and right hip, or the average of the distances from left shoulder to right hip and from right shoulder to left hip, were found to perform best for deriving distance-based features and for further quantitative assessment of movement disorders.
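The two best-performing normalizations above can be sketched as follows. This is an illustrative reconstruction, not the study's code: the joint indices follow the COCO keypoint convention (an assumption; the pose model used in the paper may index joints differently), and keypoints are simply rescaled so the torso reference distance equals 1, making distance-based features comparable across subjects and camera setups:

```python
import numpy as np

# Hypothetical COCO-style joint indices (an assumption, not from the paper).
L_SHOULDER, R_SHOULDER, L_HIP, R_HIP = 5, 6, 11, 12

def reference_scale(kp, average=False):
    """Body-size reference length for one frame of (K, 2) keypoints.

    average=False: Euclidean distance from left shoulder to right hip.
    average=True:  mean of both shoulder-to-opposite-hip diagonals.
    """
    kp = np.asarray(kp, dtype=float)
    d1 = np.linalg.norm(kp[L_SHOULDER] - kp[R_HIP])
    if not average:
        return d1
    d2 = np.linalg.norm(kp[R_SHOULDER] - kp[L_HIP])
    return 0.5 * (d1 + d2)

def normalize_keypoints(kp, average=False):
    """Scale all keypoints so the torso reference distance equals 1."""
    kp = np.asarray(kp, dtype=float)
    return kp / reference_scale(kp, average)
```

Computing the scale per frame (rather than once per video) additionally compensates for the subject's changing distance to the camera during freehand recording.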