Background: Human papillomavirus (HPV)-positive oropharyngeal squamous cell carcinoma (OPSCC) has a better prognosis and treatment response than HPV-negative OPSCC. This study aims to noninvasively predict the HPV status of OPSCC using clinical and/or radiological variables. Methods: Seventy-seven magnetic resonance radiomic features were extracted from T1-weighted postcontrast images of the primary tumor of 153 patients. Logistic regression models were created to predict HPV status, determined with immunohistochemistry, based on clinical variables, radiomic features, and their combination. Model performance was evaluated using the area under the curve (AUC). Results: The clinical, radiomic, and combined models achieved AUCs of 0.794, 0.764, and 0.871, respectively. Smoking, higher T-classification (T3 and T4), and larger, less round, and more heterogeneous tumors were associated with HPV-negative status. Conclusion: Models based on clinical variables and/or radiomic tumor features can predict HPV status in OPSCC patients with good performance and can be considered when HPV testing is not available.
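The modelling step described in this abstract is straightforward to prototype. Below is a minimal sketch of fitting the three logistic regression models (clinical, radiomic, combined) and evaluating each with cross-validated AUC in scikit-learn; the file name and column names are hypothetical placeholders, not details from the study, and the study's exact validation scheme may differ.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature table: one row per patient, HPV label from
# immunohistochemistry plus clinical and radiomic feature columns.
df = pd.read_csv("opscc_features.csv")
clinical = ["smoking", "t_classification"]                   # illustrative names
radiomic = [c for c in df.columns if c.startswith("radiomic_")]
y = df["hpv_positive"]

for name, cols in [("clinical", clinical),
                   ("radiomic", radiomic),
                   ("combined", clinical + radiomic)]:
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    # Cross-validated probabilities avoid the optimism of evaluating on
    # the training data itself.
    probs = cross_val_predict(model, df[cols], y, cv=cv,
                              method="predict_proba")[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y, probs):.3f}")
```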
Background and purpose: Segmentation of oropharyngeal squamous cell carcinoma (OPSCC) is needed for radiotherapy planning. We aimed to segment the primary tumor of OPSCC on MRI using convolutional neural networks (CNNs). We investigated the effect of multiple MRI sequences as input, and we proposed a semiautomatic approach to tumor segmentation that is expected to save time in the clinic. Materials and methods: We retrospectively included 171 OPSCC patients treated between 2010 and 2015. For all patients the following MRI sequences were available: T1-weighted, T2-weighted, and 3D T1-weighted after gadolinium injection. We trained a 3D UNet using the entire images and using images with reduced context, considering only the information within clipboxes around the tumor. We compared the performance of different combinations of MRI sequences as input. Finally, we tested a semiautomatic approach in which two human observers defined the clipboxes around the tumor. Segmentation performance was measured with the Sørensen-Dice coefficient (Dice), 95th percentile Hausdorff distance (HD), and mean surface distance (MSD). Results: The 3D UNet trained with full context and all sequences as input yielded a median Dice of 0.55, HD of 8.7 mm, and MSD of 2.7 mm. Combining all MRI sequences was better than using single sequences. The semiautomatic approach with all sequences as input yielded significantly better performance (p < 0.001): a median Dice of 0.74, HD of 4.6 mm, and MSD of 1.2 mm.
Conclusion: Reducing the amount of context around the tumor and combining multiple MRI sequences improved the segmentation performance. The semiautomatic approach was accurate and clinically feasible.
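The three segmentation metrics reported above (Dice, 95th percentile Hausdorff distance, and mean surface distance) can all be computed directly from two binary masks. The sketch below shows one common formulation using only NumPy and SciPy; the voxel spacing default is a placeholder that would normally come from the image header, and the study's exact implementation may differ.

```python
import numpy as np
from scipy import ndimage

def _surface_distances(a, b, spacing):
    """Distances from the surface voxels of mask a to the surface of mask b."""
    a_surf = a ^ ndimage.binary_erosion(a)
    b_surf = b ^ ndimage.binary_erosion(b)
    # Distance transform of the complement of b's surface gives, at every
    # voxel, the Euclidean distance to the nearest surface voxel of b.
    dist_to_b = ndimage.distance_transform_edt(~b_surf, sampling=spacing)
    return dist_to_b[a_surf]

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95_and_msd(a, b, spacing=(1.0, 1.0, 1.0)):  # spacing: placeholder
    a, b = a.astype(bool), b.astype(bool)
    d_ab = _surface_distances(a, b, spacing)
    d_ba = _surface_distances(b, a, spacing)
    # Symmetric 95th percentile Hausdorff distance and mean surface distance.
    hd95 = max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
    msd = (d_ab.mean() + d_ba.mean()) / 2.0
    return hd95, msd
```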
Recent advances in real-time magnetic resonance imaging (rtMRI) of the vocal tract provide opportunities for studying human speech. Together with simultaneously acquired speech audio, this modality may enable the mapping of articulatory configurations to acoustic features. In this study, we take a first step by training a deep learning model to classify 27 different phonemes from midsagittal MR images of the vocal tract. An American English database was used to train a convolutional neural network to classify vowels (13 classes), consonants (14 classes), and all phonemes (27 classes) from 17 subjects. Top-1 classification accuracy on the test set for all phonemes was 57%. Error analysis showed that voiced and unvoiced sounds were often confused. Moreover, we performed principal component analysis on the network's embedding and observed topological similarities between the network's learned representation and the vowel diagram. Saliency maps gave insight into the anatomical regions most important for classification and showed congruence with known regions of articulatory importance. We demonstrate the feasibility of using deep learning to distinguish phonemes from MRI. Network analysis can be used to improve understanding of normal articulation and speech and, in the future, impaired speech. This study brings us a step closer to articulatory-to-acoustic mapping from rtMRI.
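The embedding analysis described above can be illustrated with a short sketch: project the network's penultimate-layer activations to two dimensions with PCA and annotate each phoneme at the mean of its projected points, which can then be compared by eye against the standard vowel diagram. The file names and array shapes are hypothetical, not from the study.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Hypothetical inputs: penultimate-layer activations for every test frame,
# with the corresponding integer phoneme label per frame.
embeddings = np.load("phoneme_embeddings.npy")   # shape (n_frames, emb_dim)
labels = np.load("phoneme_labels.npy")           # shape (n_frames,)

proj = PCA(n_components=2).fit_transform(embeddings)

plt.scatter(proj[:, 0], proj[:, 1], c=labels, s=2, alpha=0.3)
# Mark each phoneme at the centroid of its projected points; for vowels these
# positions can be compared against the layout of the vowel diagram.
for phoneme in np.unique(labels):
    centre = proj[labels == phoneme].mean(axis=0)
    plt.annotate(str(phoneme), centre)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()
```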
Purpose: To assess how gross tumour volume (GTV) delineation in anal cancer is affected by interobserver variation between radiologists and radiation oncologists, expertise level, and the use of T2-weighted MRI (T2W-MRI) vs. diffusion-weighted imaging (DWI), and to explore the effects of DWI image quality. Methods and materials: We retrospectively analyzed the MRIs (T2W-MRI and b800 DWI) of 25 anal cancer patients. Four readers (a senior and a junior radiologist; a senior and a junior radiation oncologist) independently delineated GTVs, first on T2W-MRI only and then on DWI (with reference to T2W-MRI). The maximum tumour diameter (MTD) was calculated from each GTV. Mean GTVs/MTDs were compared between readers and between T2W-MRI and DWI. Interobserver agreement was calculated as the intraclass correlation coefficient (ICC), Dice similarity coefficient (DSC), and Hausdorff distance (HD). DWI image quality was assessed using a 5-point artefact scale. Results: Interobserver agreement between radiologists vs. radiation oncologists and between junior vs. senior readers was good to excellent, with similar agreement for T2W-MRI and DWI (e.g. ICCs of 0.72-0.94 for T2W-MRI and 0.68-0.89 for DWI). There was a trend towards smaller GTVs on DWI, but only for the radiologists (P = 0.03-0.07). Moderate to severe DWI artefacts were observed in 11/25 (44%) cases, and agreement tended to be lower in these cases. Conclusion: Overall interobserver agreement for anal cancer GTV delineation on MRI is good for both radiologists and radiation oncologists, regardless of experience level. Use of DWI did not improve agreement. DWI artefacts affecting GTV delineation occurred in almost half of the patients, which may severely limit the use of DWI for radiotherapy planning if no steps are taken to avoid them.
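For illustration, the volume-level agreement statistic named above, the ICC, can be computed as a two-way random-effects, single-rater ICC(2,1) following Shrout and Fleiss; which ICC variant the study used is an assumption here. The sketch below uses simulated GTV volumes for a 25-patient, 4-reader matrix, not study data.

```python
import numpy as np

def icc_2_1(x):
    """Two-way random-effects, single-rater ICC(2,1); x is (n_subjects, k_raters)."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    # Two-way ANOVA decomposition without replication.
    ssr = k * ((row_means - grand) ** 2).sum()
    ssc = n * ((col_means - grand) ** 2).sum()
    sse = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    msr, msc, mse = ssr / (n - 1), ssc / (k - 1), sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Simulated GTV volumes (mL) for 25 patients x 4 readers (not study data).
volumes = np.random.default_rng(0).normal(loc=30.0, scale=10.0, size=(25, 4))
print(f"ICC(2,1) = {icc_2_1(volumes):.2f}")
```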