Cancers arising from the oropharynx have been increasingly studied in recent years, as their incidence has reached epidemic proportions domestically. These tumors are treated with definitive (chemo)radiotherapy, and local recurrence is a primary mode of clinical failure. Recent data suggest that ‘radiomics’, the extraction of quantitative, mineable texture features from medical images, can reflect phenotypes for various cancers. Several groups have shown that radiomic signatures developed for head and neck cancers correlate with survival outcomes. This data descriptor defines a repository for head and neck radiomic challenges, executed via the Kaggle in Class platform, in partnership with the 2016 MICCAI Society annual meeting. These public challenges were designed to leverage radiomics and/or machine learning workflows to discriminate HPV phenotype in one challenge (HPV status challenge) and to identify patients who will develop a local recurrence within the primary tumor volume in the other (local recurrence prediction challenge), using a segmented, clinically curated, anonymized oropharyngeal cancer (OPC) data set.
Purpose: We sought to automate R.E.N.A.L. (radius, exophytic/endophytic, nearness of tumor to collecting system, anterior/posterior, location relative to polar line) nephrometry scoring of preoperative computerized tomography scans and create an artificial intelligence-generated score (AI-score). Subsequently, we aimed to evaluate its ability to predict meaningful oncologic and perioperative outcomes as compared to expert human-generated nephrometry scores (H-scores). Materials and Methods: A total of 300 patients with preoperative computerized tomography were identified from a cohort of 544 consecutive patients undergoing surgical extirpation for suspected renal cancer at a single institution. A deep neural network approach was used to automatically segment kidneys and tumors, and geometric algorithms were developed to estimate the components of the R.E.N.A.L. nephrometry score. Tumors were independently scored by medical personnel blinded to AI-scores. AI- and H-score agreement was assessed using Lin’s concordance correlation, and their predictive abilities for both oncologic and perioperative outcomes were assessed using areas under the curve. Results: Median age was 60 years (IQR 51–68), and 40% of patients were female. Median tumor size was 4.2 cm, and 91.3% had malignant tumors, including 27%, 37% and 24% with high stage, grade and necrosis, respectively. There was significant agreement between H-scores and AI-scores (Lin’s ρ=0.59). Both AI- and H-scores similarly predicted meaningful oncologic outcomes (p <0.001), including presence of malignancy, necrosis, and high-grade and -stage disease (p <0.003). They also predicted surgical approach (p <0.004) and specific perioperative outcomes (p <0.05). Conclusions: Fully automated AI-generated R.E.N.A.L. scores are comparable to human-generated R.E.N.A.L. scores and predict a wide variety of meaningful patient-centered outcomes.
This unambiguous artificial intelligence-based scoring is intended to facilitate wider adoption of the R.E.N.A.L. score.
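The inter-rater agreement statistic reported above, Lin's concordance correlation coefficient (CCC), penalizes both poor correlation and systematic shifts in location or scale between two raters. As a minimal illustrative sketch (the score values below are hypothetical, not from the study), it can be computed as:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters' scores.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (biased) variance and covariance estimates.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    sxy = ((x - mx) * (y - my)).mean()   # population covariance
    return 2.0 * sxy / (vx + vy + (mx - my) ** 2)

# Hypothetical AI-generated vs. human-generated nephrometry scores
ai_scores = [4, 6, 7, 9, 10, 5, 8]
h_scores = [5, 6, 8, 9, 9, 4, 8]
print(lins_ccc(ai_scores, h_scores))
```

A CCC of 1 indicates perfect concordance (identical scores); unlike Pearson's r, the CCC drops below 1 even for perfectly correlated scores if one rater is systematically higher or lower.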
Background: Resected oral cavity carcinoma defects are often reconstructed with osteocutaneous or soft-tissue free flaps, but the risk of osteoradionecrosis (ORN) is unknown. Methods: This retrospective study included oral cavity carcinoma treated with free-tissue reconstruction and postoperative intensity-modulated radiotherapy (IMRT) between 2000 and 2019. Risk-regression assessed risk factors for grade ≥2 ORN. Results: One hundred fifty-five patients (51% male, 28% current smokers, mean age 62 ± 11 years) were included. Median follow-up was 32.6 months (range, 1.0-190.6). Thirty-eight (25%) patients had a fibular free flap for mandibular reconstruction, whereas 117 (76%) had soft-tissue reconstruction. Grade ≥2 ORN occurred in 14 (9.0%) patients, at a median 9.8 months (range, 2.4-61.5) after IMRT. Post-radiation teeth extraction was significantly associated with ORN. One-year and 10-year ORN rates were 5.2% and 10%, respectively.