DeepCOVID-XR, an artificial intelligence algorithm for detecting COVID-19 on chest radiographs, demonstrated performance similar to the consensus of experienced thoracic radiologists.

Key Results:
• DeepCOVID-XR classified 2,214 test images (1,194 COVID-19 positive) with an accuracy of 83% and an AUC of 0.90 compared with the reference standard of RT-PCR.
• On 300 random test images (134 COVID-19 positive), DeepCOVID-XR's accuracy was 82% (AUC 0.88), compared with 5 individual thoracic radiologists (accuracy 76%-81%) and the consensus of all 5 radiologists (accuracy 81%, AUC 0.85).
• Using the consensus interpretation of the radiologists as the reference standard, DeepCOVID-XR's AUC was 0.95.

Abbreviations: Coronavirus Disease 2019 (COVID-19), real-time polymerase chain reaction (RT-PCR), artificial intelligence (AI), area under the curve (AUC), receiver operating characteristic (ROC), convolutional neural network (CNN)

See also the editorial by van Ginneken.
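The accuracy and AUC figures above summarize how well the model's output probabilities agree with binary RT-PCR labels. As a minimal sketch of what those two metrics measure — using invented toy labels and scores, not the study's 2,214-image test set — accuracy thresholds the scores while AUC can be computed from the rank (Mann-Whitney) formulation:

```python
# Toy illustration of the two reported metrics: accuracy at a fixed
# threshold, and AUC via the rank (Mann-Whitney) formulation.
# Labels and scores below are invented for the example only.

def accuracy(labels, scores, threshold=0.5):
    """Fraction of thresholded predictions matching the binary labels."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def auc(labels, scores):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 1, 0]                   # 1 = RT-PCR positive
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]   # model probabilities

print(accuracy(labels, scores))  # → 0.75
print(auc(labels, scores))       # → 0.9375
```

Note that accuracy depends on the chosen threshold, while AUC is threshold-free — which is why the abstract reports both.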
X-ray fluorescence spectroscopy (XRF) plays an important role in elemental analysis across a wide range of scientific fields, especially cultural heritage. XRF imaging, which uses a raster scan to...
Background: To develop a deep learning (DL) method based on multiphase, contrast-enhanced (CE) magnetic resonance imaging (MRI) to distinguish Liver Imaging Reporting and Data System (LI-RADS) grade 3 (LR-3) liver tumors from combined higher-grade 4 and 5 (LR-4/LR-5) tumors for hepatocellular carcinoma (HCC) diagnosis.

Methods: A total of 89 untreated LI-RADS-graded liver tumors (35 LR-3, 14 LR-4, and 40 LR-5) were identified from the radiology MRI interpretation reports. Multiphase 3D T1-weighted gradient echo imaging was acquired at six time points: pre-contrast, four phases immediately post-contrast, and one hepatobiliary phase after intravenous injection of gadoxetate disodium. Image co-registration was performed across all phases on the center tumor slice to correct for motion. A rectangular tumor box centered on the tumor area was drawn to extract subset tumor images for each imaging phase, which served as inputs to a convolutional neural network (CNN). The pre-trained AlexNet CNN model underwent transfer learning on the liver MRI data for LI-RADS tumor grade classification. An output probability closer to 1 indicated a higher likelihood of a combined LR-4/LR-5 tumor; a value closer to 0 indicated an LR-3 tumor. Five-fold cross-validation was used, with the dataset split into training (60%), validation (20%), and testing (20%) sets.

Results: The DL CNN model for LI-RADS grading using multiphase liver MRI data acquired at three time points (pre-contrast, arterial, and washout phases) achieved a high accuracy of 0.90, sensitivity of 1.0, precision of 0.835, and AUC of 0.95 with reference to the expert radiologist report. The CNN output probability gave radiologists a confidence level for the model's grading of each liver lesion.

Conclusions: An AlexNet CNN model for LI-RADS grading of liver lesions provided diagnostic performance comparable to radiologists and offered valuable clinical guidance for differentiating intermediate LR-3 liver lesions from more likely malignant LR-4/LR-5 lesions in HCC diagnosis.
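The abstract's decision rule — an output probability near 1 read as LR-4/LR-5, near 0 as LR-3 — and its sensitivity/precision metrics can be sketched as follows. The 0.5 cutoff and the example probabilities are illustrative assumptions, not data from the study:

```python
# Minimal sketch of the decision rule described in the abstract: a CNN
# output probability near 1 maps to combined LR-4/LR-5, near 0 to LR-3.
# The 0.5 cutoff and all example values are assumptions for illustration.

def grade(prob, threshold=0.5):
    """Map a scalar CNN output probability to a LI-RADS decision."""
    return "LR-4/LR-5" if prob >= threshold else "LR-3"

def sensitivity_precision(labels, probs, threshold=0.5):
    """labels: 1 = LR-4/LR-5 (positive class), 0 = LR-3."""
    preds = [1 if p >= threshold else 0 for p in probs]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    return tp / (tp + fn), tp / (tp + fp)

labels = [1, 1, 1, 0, 0, 0]                   # toy reference grades
probs  = [0.92, 0.81, 0.66, 0.44, 0.58, 0.12] # toy CNN probabilities

print(grade(0.92))  # → LR-4/LR-5
sens, prec = sensitivity_precision(labels, probs)
print(sens, prec)   # → 1.0 0.75
```

As in the study, sensitivity of 1.0 with lower precision means every true LR-4/LR-5 lesion was caught, at the cost of some LR-3 lesions being flagged as higher grade.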
Brain structure is tightly coupled with brain function, but it remains unclear how cognition relates to brain morphology, and which of these relationships are consistent across neurodevelopment. In this work, we developed graph convolutional neural networks (gCNNs) to predict Fluid Intelligence (Gf) from the shapes of cortical ribbons and subcortical structures. T1-weighted MRIs from two independent cohorts, the Human Connectome Project (HCP; age: 28.81 ± 3.70) and the Adolescent Brain Cognitive Development Study (ABCD; age: 9.93 ± 0.62), were independently analyzed. Cortical and subcortical surfaces were extracted and modeled as surface meshes. Three gCNNs were trained and evaluated using six-fold nested cross-validation. Overall, combining cortical and subcortical surfaces yielded the best predictions on both the HCP (R=0.454) and ABCD (R=0.314) datasets, outperforming the current literature. Across both datasets, the morphometry of the amygdala and hippocampus, along with temporal, parietal, and cingulate cortex, consistently drove the prediction of Gf, suggesting a novel reframing of the morphometry underlying Gf.