Objectives
The purpose of this study was to build a deep learning model to derive labels from neuroradiology reports and assign these to the corresponding examinations, overcoming a bottleneck to computer vision model development.

Methods
Reference-standard labels were generated by a team of neuroradiologists for model training and evaluation. Three thousand examinations were labelled for the presence or absence of any abnormality by manually scrutinising the corresponding radiology reports (‘reference-standard report labels’); a subset of these examinations (n = 250) was assigned ‘reference-standard image labels’ by interrogating the actual images. Separately, 2000 reports were labelled for the presence or absence of 7 specialised categories of abnormality (acute stroke, mass, atrophy, vascular abnormality, small vessel disease, white matter inflammation, encephalomalacia), with a subset of these examinations (n = 700) also assigned reference-standard image labels. A deep learning model was trained using the labelled reports and validated in two ways: comparing predicted labels to (i) reference-standard report labels and (ii) reference-standard image labels. The area under the receiver operating characteristic curve (AUC-ROC) was used to quantify model performance; accuracy, sensitivity, specificity, and F1 score were also calculated.

Results
Accurate classification (AUC-ROC > 0.95) was achieved for all categories when tested against reference-standard report labels. A drop in performance (ΔAUC-ROC > 0.02) was seen for three categories (atrophy, encephalomalacia, vascular) when tested against reference-standard image labels, highlighting discrepancies in the original reports. Once trained, the model assigned labels to 121,556 examinations in under 30 min.

Conclusions
Our model accurately classifies head MRI examinations, enabling automated dataset labelling for downstream computer vision applications.
Key Points • Deep learning is poised to revolutionise image recognition tasks in radiology; however, a barrier to clinical adoption is the difficulty of obtaining large labelled datasets for model training. • We demonstrate a deep learning model which can derive labels from neuroradiology reports and assign these to the corresponding examinations at scale, facilitating the development of downstream computer vision models. • We rigorously tested our model by comparing labels predicted on the basis of neuroradiology reports with two sets of reference-standard labels: (1) labels derived by manually scrutinising each radiology report and (2) labels derived by interrogating the actual images.
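The evaluation metrics named in the abstract (AUC-ROC, accuracy, sensitivity, specificity, F1) can be illustrated with a small self-contained sketch. This is not the authors' code: the labels and scores below are invented toy data, and the functions are minimal reference implementations of the standard definitions.

```python
# Illustrative sketch (not the authors' code) of the evaluation metrics named
# in the abstract, computed on a toy set of reference-standard labels and
# model scores; all numbers here are invented for illustration.

def auc_roc(y_true, y_score):
    """AUC-ROC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive is scored higher than a randomly chosen negative."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, and F1 from binary predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return acc, sens, spec, f1

# Toy "reference-standard" labels vs. model scores for one abnormality category.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
y_score = [0.92, 0.10, 0.85, 0.70, 0.30, 0.05, 0.40, 0.20, 0.45, 0.95]
y_pred = [int(s >= 0.5) for s in y_score]

print(f"AUC-ROC = {auc_roc(y_true, y_score):.3f}")
acc, sens, spec, f1 = binary_metrics(y_true, y_pred)
print(f"accuracy={acc:.2f} sensitivity={sens:.2f} specificity={spec:.2f} F1={f1:.2f}")
```

In practice one would use a library implementation (e.g. scikit-learn's `roc_auc_score`), but the definitions above make clear what the reported AUC-ROC and ΔAUC-ROC figures measure.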
Objective
Automatic segmentation of vestibular schwannoma (VS) from routine clinical MRI can improve clinical workflow, facilitate treatment decisions, and assist patient management. Previously, excellent automatic segmentation results were achieved on datasets of standardised MRI images acquired for stereotactic surgery planning. However, diagnostic clinical datasets are generally more diverse and pose a larger challenge to automatic segmentation algorithms. Here, we show that automatic segmentation of VS on such datasets is also possible with high accuracy.

Methods
We acquired a large multi-centre routine clinical (MC-RC) dataset of 168 patients with a single sporadic VS who were referred from 10 medical sites and consecutively seen at a single centre. Up to three longitudinal MRI exams were selected for each patient. Selection rules based on image modality, resolution, orientation, and acquisition timepoint were defined to automatically select contrast-enhanced T1-weighted (ceT1w) images (n = 130) and T2-weighted (T2w) images (n = 379). Manual ground-truth segmentations were obtained in an iterative process in which segmentations were 1) produced or amended by a specialised company, 2) reviewed by one of three trained radiologists, and 3) validated by an expert team. Inter- and intra-observer reliability was assessed on a subset of 10 ceT1w and 41 T2w images. The MC-RC dataset was split randomly into three non-overlapping sets for model training, hyperparameter tuning, and testing in proportions 70/10/20%. We applied deep learning to train our VS segmentation model, based on convolutional neural networks (CNNs) within the nnU-Net framework.

Results
Our model achieved excellent Dice scores when evaluated on the MC-RC testing set as well as the public testing set. On the MC-RC testing set, Dice scores were 90.8 ± 4.5% for ceT1w, 86.1 ± 11.6% for T2w, and 82.3 ± 18.4% for a combined ceT1w + T2w input.

Conclusions
We developed a model for automatic VS segmentation on diverse multi-centre clinical datasets. The results show that the performance of the framework is comparable to that of human annotators. In contrast, a model trained on a publicly available dataset acquired for Gamma Knife stereotactic radiosurgery did not perform well on the MC-RC testing set. The application of our model has the potential to greatly facilitate the management of patients in clinical practice. Our pre-trained segmentation models are made available online. Moreover, we are in the process of making the MC-RC dataset publicly available.
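The Dice score used to evaluate the segmentations has a simple closed form, 2|A∩B| / (|A|+|B|). The sketch below computes it on tiny synthetic binary masks; the real evaluation compared nnU-Net predictions with the manual ground truth on full 3D volumes, and the arrays and names here are our own illustration.

```python
# Minimal sketch of the Dice overlap score, computed on toy binary masks.
# The paper's evaluation used nnU-Net predictions on ceT1w/T2w volumes;
# everything below is synthetic illustration, not the authors' pipeline.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|), in [0, 1]."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D "masks": a ground-truth square and a prediction shifted by one row.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True            # 16 voxels
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True             # 16 voxels, overlapping 12 of them

print(f"Dice = {dice_score(pred, truth):.3f}")  # 2*12 / (16+16) = 0.750
```

A Dice of 90.8% on ceT1w, as reported, thus means the predicted and manual masks overlap almost completely relative to their combined size.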
The growing demand for head magnetic resonance imaging (MRI) examinations, along with a global shortage of radiologists, has led to an increase in the time taken to report head MRI scans around the world. For many neurological conditions, this delay can result in increased morbidity and mortality. An automated triaging tool could reduce reporting times for abnormal examinations by identifying abnormalities at the time of imaging and prioritizing the reporting of these scans. In this work, we present a convolutional neural network for detecting clinically relevant abnormalities in T2-weighted head MRI scans. Using a validated neuroradiology report classifier, we generated a labelled dataset of 43,754 scans from two large UK hospitals for model training, and demonstrate accurate classification (area under the receiver operating characteristic curve (AUC) = 0.943) on a test set of 800 scans labelled by a team of neuroradiologists. Importantly, when trained on scans from only a single hospital the model generalized to scans from the other hospital (ΔAUC ≤ 0.02). A simulation study demonstrated that our model would reduce the mean reporting time for abnormal examinations from 28 days to 14 days and from 9 days to 5 days at the two hospitals, demonstrating feasibility for use in a clinical triage environment.
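The triage idea behind the simulation study can be sketched in a few lines: scans are reported at a fixed daily rate, and flagged-abnormal scans jump the queue. The authors' simulation is more detailed and uses real hospital data; the parameters, names, and the perfect-classifier assumption below are ours, purely for illustration.

```python
# Toy sketch of queue-based triage: scans arrive in order, a fixed number are
# reported per day, and scans flagged abnormal are prioritised. All parameters
# are illustrative and a perfect classifier is assumed; the paper's simulation
# used real reporting data from the two hospitals.
import random

random.seed(0)
N_SCANS, REPORTS_PER_DAY, P_ABNORMAL = 1000, 40, 0.5
scans = [{"id": i, "abnormal": random.random() < P_ABNORMAL}
         for i in range(N_SCANS)]

def mean_abnormal_delay(queue):
    """Mean days until an abnormal scan is reported, for a given report order."""
    delays = [i // REPORTS_PER_DAY for i, s in enumerate(queue) if s["abnormal"]]
    return sum(delays) / len(delays)

fifo = scans                                              # no triage: first in, first out
triaged = sorted(scans, key=lambda s: not s["abnormal"])  # flagged scans move to the front

print(f"FIFO mean delay for abnormal scans:    {mean_abnormal_delay(fifo):.1f} days")
print(f"Triaged mean delay for abnormal scans: {mean_abnormal_delay(triaged):.1f} days")
```

Even this crude model shows the mechanism behind the reported 28-to-14-day reduction: moving abnormal scans ahead of normal ones roughly halves their mean wait when about half of the scans are abnormal.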
Background
The End-of-Life Care (EoLC) Strategy 2008 highlighted the importance of early identification of potential for dying and Advance Care Planning (ACP).1–3 Early awareness of poor prognosis helps patients and their relatives to understand their illness and make informed choices about their care; for instance, whether to decline aggressive intervention and hospitalisation if it is unlikely to improve their quality of life.

Aims and Objectives
To prompt early communication within the MDT and with patients and their relatives about ACP for those deemed to be in their last year of life.

Method
Data from 40 discharged patients, who had been identified as likely being in their last year of life, were collected in each audit cycle. Cycle 1 was a retrospective case-notes analysis from one Care-of-the-Elderly (CoE) ward. Five key areas of ACP were reviewed: resuscitation status, prognosis, ceiling of care, readmission plans, and patient/family awareness of condition and prognosis. Prior to cycle 2, an ACP summary proforma was implemented in which discussions around the five key areas could be summarised. It was introduced at a CoE department meeting, and teaching sessions were held for junior doctors.

Results
Tables 1 and 2 illustrate our findings.
Background
Central nervous system (CNS) lymphomas are a rare subset of lymphoma associated with a poor outcome. The gold standard for CNS imaging is gadolinium-enhanced magnetic resonance imaging (MRI); however, it has a number of limitations: for instance, some patients have small persistent abnormalities from scarring due to focal haemorrhage or a previous biopsy, which can be difficult to discern from residual tumour. [18F]Fluoromethylcholine positron emission tomography–computed tomography (FCH-PET/CT) uses an analogue of choline which, owing to the upregulation of choline kinase in tumour cells, shows increased uptake of FCH. As there is minimal background grey-matter uptake of FCH, FCH-PET/CT can be used in CNS imaging and provides a useful tool for response assessment.

Methods
In this cohort study, we identified 40 patients with a diagnosis of primary or secondary CNS lymphoma between 1st November 2011 and 10th October 2019.

Results
26 of the 40 patients (65%) had concordant results. Of the 14 discordant results, 11 had a partial response (PR) on MRI but a metabolic complete response (mCR) on FCH-PET/CT. The overall response rates (ORR) were similar between the two modalities (90% for MRI versus 95% for FCH-PET/CT).

Conclusion
We conclude that FCH-PET/CT is a reasonable alternative to gadolinium-enhanced MRI brain imaging, providing a new tool for response assessment of CNS lymphoma.
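The headline concordance figures in this abstract reduce to simple proportions over the reported counts; the short sketch below just reproduces that arithmetic (no patient-level data are used or invented).

```python
# Reproducing the reported proportions from the abstract's stated counts.
# Only the counts given in the text are used; nothing patient-level is invented.
n_patients = 40
concordant = 26
discordant = n_patients - concordant   # 14 discordant results
pr_mri_mcr_pet = 11                    # PR on MRI but mCR on FCH-PET/CT

print(f"Concordance: {concordant / n_patients:.0%}")  # 65%
print(f"Discordant: {discordant}, of which {pr_mri_mcr_pet} were "
      f"PR on MRI but mCR on FCH-PET/CT")
```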