The coronavirus disease 2019 (COVID-19) pandemic is a global health care emergency. Although reverse-transcription polymerase chain reaction testing is the reference standard method to identify patients with COVID-19, chest radiography and CT play a vital role in the detection and management of these patients. Prediction models for COVID-19 imaging are rapidly being developed to support medical decision making. However, inadequate availability of a diverse annotated data set has limited the performance and generalizability of existing models. To address this unmet need, the RSNA and Society of Thoracic Radiology collaborated to develop the RSNA International COVID-19 Open Radiology Database (RICORD). This database is the first multi-institutional, multinational, expert-annotated COVID-19 imaging data set. It is made freely available to the machine learning community as a research and educational resource for COVID-19 chest imaging. Pixel-level volumetric segmentation with clinical annotations was performed by thoracic radiology subspecialists for all COVID-19–positive thoracic CT scans. The labeling schema was coordinated with other international consensus panels and COVID-19 data annotation efforts, including the European Society of Medical Imaging Informatics, the American College of Radiology, and the American Association of Physicists in Medicine. Study-level COVID-19 classification labels for chest radiographs were annotated by three radiologists, with majority vote adjudication by board-certified radiologists. RICORD consists of 240 thoracic CT scans and 1000 chest radiographs contributed from four international sites. It is anticipated that RICORD will lead to prediction models that demonstrate sustained performance across populations and health care systems. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Bai and Thomasian in this issue.
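The study-level labeling procedure described above (three independent annotators, with ties resolved by an adjudicating radiologist) can be sketched as a simple majority-vote function. This is a minimal illustration, not the RICORD annotation tooling; the example label strings are hypothetical placeholders.

```python
from collections import Counter

def adjudicate(labels):
    """Return the majority label among annotator votes, or None on a tie.

    `labels` is a list of study-level class labels from independent
    annotators. A return value of None signals that the study should be
    escalated to an adjudicating (board-certified) radiologist.
    """
    counts = Counter(labels).most_common()
    # A tie exists when the two most frequent labels have equal counts.
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]
```

For example, `adjudicate(["typical", "typical", "atypical"])` resolves to `"typical"`, while an evenly split vote returns `None` and would be sent for adjudication.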
Background Recently, artificial intelligence (AI)-based applications for chest imaging have emerged as potential tools to assist clinicians in the diagnosis and management of patients with coronavirus disease 2019 (COVID-19). Objectives To develop a deep learning-based clinical decision support system for automatic diagnosis of COVID-19 on chest CT scans. Secondarily, to develop a complementary segmentation tool to assess the extent of lung involvement and measure disease severity. Methods The Imaging COVID-19 AI initiative was formed to conduct a retrospective multicentre cohort study including 20 institutions from seven different European countries. Patients with suspected or known COVID-19 who underwent a chest CT were included. The dataset was split at the institution level to allow external evaluation. Data annotation was performed by 34 radiologists/radiology residents and included quality control measures. A multiclass classification model was created using a custom 3D convolutional neural network. For the segmentation task, a U-Net-like architecture with a ResNet-34 backbone was selected. Results A total of 2,802 CT scans were included (2,667 unique patients, mean [standard deviation] age = 64.6 [16.2] years, male/female ratio 1.3:1). The distribution of classes (COVID-19/other type of pulmonary infection/no imaging signs of infection) was 1,490 (53.2%), 402 (14.3%), and 910 (32.5%), respectively. On the external test dataset, the diagnostic multiclass model yielded high micro-average and macro-average AUC values (0.93 and 0.91, respectively). The model provided the likelihood of COVID-19 vs other cases with a sensitivity of 87% and a specificity of 94%. The segmentation performance was moderate, with a Dice similarity coefficient (DSC) of 0.59. An imaging analysis pipeline was developed that returned a quantitative report to the user.
Conclusion We developed a deep learning-based clinical decision support system that could become an efficient concurrent reading tool to assist clinicians, utilising a newly created European dataset including more than 2,800 CT scans.
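The segmentation result above is reported as a Dice similarity coefficient (DSC) of 0.59. The DSC between a predicted mask and a reference mask is defined as twice the overlap divided by the total mask volume, and can be computed as follows (a minimal sketch with NumPy, not the initiative's evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    pred, target: 0/1 or boolean NumPy arrays of identical shape,
    e.g. volumetric lung-lesion segmentations. Returns a value in
    [0, 1], where 1 means perfect overlap.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

Identical masks yield a DSC of 1.0 and disjoint masks yield 0.0; a value of 0.59 therefore indicates moderate spatial agreement between predicted and reference lesion volumes.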
Purpose Surveillance of patients with high-grade glioma (HGG) and identification of disease progression remain a major challenge in neuro-oncology. This study aimed to develop a support vector machine (SVM) classifier, employing combined longitudinal structural and perfusion MRI studies, to distinguish between stable disease, pseudoprogression and progressive disease (3-class problem). Methods Study participants were separated into two groups: group I (total cohort: 64 patients) with a single DSC time point and group II (19 patients) with longitudinal DSC time points (2-3). We retrospectively analysed 269 structural MRI and 92 dynamic susceptibility contrast (DSC) perfusion MRI scans. The SVM classifier was trained using all available MRI studies for each group. Classification accuracy was assessed for different feature set and time point combinations and compared to radiologists' classifications. Results SVM classification based on combined perfusion and structural features outperformed radiologists' classification across all groups. For the identification of progressive disease, use of combined features and longitudinal DSC time points improved classification performance (lowest error rate 1.6%). Optimal performance was observed in group II (multiple time points), with SVM sensitivity/specificity/accuracy of 100/91.67/94.7% (first time point analysis) and 85.71/100/94.7% (longitudinal analysis), compared to 60/78/68% and 70/90/84.2% for the respective radiologist classifications. In group I (single time point), the SVM classifier also outperformed radiologists' classifications, with sensitivity/specificity/accuracy of 86.49/75.00/81.53% (SVM) compared to 75.7/68.9/73.84% (radiologists). Conclusion Our results indicate that utilisation of a machine learning (SVM) classifier based on analysis of longitudinal perfusion time points and combined structural and perfusion features significantly enhances classification outcome (p = 0.0001).
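The 3-class SVM setup described above (stable disease vs pseudoprogression vs progressive disease, from combined structural and perfusion features) can be sketched with scikit-learn. This is an illustrative skeleton under stated assumptions, not the study's pipeline: the feature matrix here is random placeholder data, and the kernel choice and one-vs-rest scheme are common defaults rather than details reported in the abstract.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: each row combines structural and perfusion-derived
# features for one follow-up MRI study (8 features per study here).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
# Class labels: 0 = stable disease, 1 = pseudoprogression, 2 = progression.
y = rng.integers(0, 3, size=60)

# Standardise features, then fit an RBF-kernel SVM; SVC handles the
# 3-class problem internally via pairwise (one-vs-one) decisions.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
pred = clf.predict(X[:5])
```

In practice the features would be radiomic or perfusion metrics (e.g. relative cerebral blood volume statistics) extracted per time point, and performance would be estimated with held-out or cross-validated data rather than predictions on the training set.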