Objectives: To investigate the potential of deep learning for assessing pneumoconiosis depicted on digital chest radiographs and to compare its performance with that of certified radiologists.

Methods: We retrospectively collected a dataset of 1881 digital chest radiographs acquired in a screening setting from subjects with a history of working in an environment that exposed them to harmful dust. Of these subjects, 923 were diagnosed with pneumoconiosis and 958 were normal. To identify the subjects with pneumoconiosis, we applied a classical deep convolutional neural network (CNN), Inception-V3, to these image sets and validated the classification performance of the trained models using the area under the receiver operating characteristic curve (AUC). In addition, we asked two certified radiologists to independently interpret the images in the testing dataset and compared their performance with that of the computerised scheme.

Results: The Inception-V3 CNN, trained on the combination of the three image sets, achieved an AUC of 0.878 (95% CI 0.811 to 0.946). The two radiologists achieved AUCs of 0.668 (95% CI 0.555 to 0.782) and 0.772 (95% CI 0.677 to 0.866), respectively. The agreement between the two readers was moderate (kappa: 0.423, p<0.001).

Conclusion: Our experimental results demonstrated that the deep learning solution achieved better classification performance than the other models and the certified radiologists, suggesting the feasibility of deep learning techniques for screening pneumoconiosis.
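The abstract does not include code, but as a rough illustration the following Python/Keras sketch shows how an Inception-V3 binary classifier and a bootstrap confidence interval for the test-set AUC might be set up. The input size, optimizer settings, and the `bootstrap_auc_ci` helper are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score

# Binary pneumoconiosis classifier: ImageNet-pretrained Inception-V3 backbone
# with a new sigmoid head (input size and learning rate are assumptions).
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(pneumoconiosis)
model = tf.keras.Model(base.input, out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy")

def bootstrap_auc_ci(y_true, y_prob, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the test-set AUC (hypothetical helper).

    y_true and y_prob are 1-D NumPy arrays of labels and predicted
    probabilities from the held-out test set.
    """
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if y_true[idx].min() == y_true[idx].max():
            continue  # a resample needs both classes for a defined AUC
        aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y_true, y_prob), (lo, hi)
```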
Objective: To determine whether chest CT infiltrative features associated with COVID-19 have unique image characteristics that could serve as diagnostic biomarkers.

Methods: We retrospectively collected chest CT exams comprising n = 498 exams from 151 unique patients who were RT-PCR positive for COVID-19 and n = 497 exams from unique patients with community-acquired pneumonia (CAP). Both the COVID-19 and CAP image sets were partitioned into three groups for training, validation, and testing, respectively. To discriminate COVID-19 from CAP, we developed several classifiers based on three-dimensional (3D) convolutional neural networks (CNNs). We also asked two experienced radiologists to visually interpret the testing set and discriminate COVID-19 from CAP. The classification performance of the computer algorithms and the radiologists was assessed using receiver operating characteristic (ROC) analysis and nonparametric approaches, with multiplicity adjustments where necessary.

Results: One of the considered models showed non-trivial but moderate diagnostic ability overall (AUC of 0.70 with 99% CI 0.56-0.85). This model allowed 8-50% of CAP patients to be identified while misclassifying only 2% of COVID-19 patients.

Conclusions: Both professional and automated interpretation of CT exams have a moderately low ability to distinguish between COVID-19 and CAP cases. However, automated image analysis is promising for targeted decision-making because it can accurately identify a sizable subset of non-COVID-19 cases.

Key Points:
• Both human experts and artificial intelligence models were used to classify the CT scans.
• ROC analysis and nonparametric approaches were used to analyze the performance of the radiologists and computer algorithms.
• Unique image features or patterns may not exist for reliably distinguishing all COVID-19 cases from CAP; however, there may be imaging markers that can identify a sizable subset of non-COVID-19 cases.
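For readers unfamiliar with volumetric classifiers, the sketch below shows a minimal 3D CNN of the kind described above, in the same Keras style as the previous example. The volume size, filter counts, and training settings are illustrative assumptions; the paper's exact 3D architectures are not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_3d_cnn(input_shape=(64, 128, 128, 1)):
    """Minimal 3D CNN for CT volumes (depth, height, width, channels).

    A hypothetical sketch: three Conv3D/MaxPooling3D stages, global
    average pooling, and a sigmoid head for the binary COVID-19-vs-CAP
    decision.
    """
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for filters in (16, 32, 64):
        x = layers.Conv3D(filters, kernel_size=3, padding="same",
                          activation="relu")(x)
        x = layers.MaxPooling3D(pool_size=2)(x)
    x = layers.GlobalAveragePooling3D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # P(COVID-19)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model
```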
Objective: To develop and validate a novel deep learning architecture to classify retinal vein occlusion (RVO) on color fundus photographs (CFPs) and to reveal the image features contributing to the classification.

Methods: The neural understanding network (NUN) consists of two components: (1) convolutional neural network (CNN)-based feature extraction and (2) graph neural network (GNN)-based feature understanding. The CNN-based image features were transformed into a graph representation to encode and visualize long-range feature interactions and to identify the image regions that contributed most to the classification decision. A total of 7062 CFPs were classified into three categories: (1) no vein occlusion ("normal"), (2) central RVO, and (3) branch RVO. The area under the receiver operating characteristic (ROC) curve (AUC) was used as the metric to assess the performance of the trained classification models.

Results: The AUC, accuracy, sensitivity, and specificity of NUN for classifying CFPs as normal, central occlusion, or branch occlusion were 0.975 (± 0.003), 0.911 (± 0.007), 0.983 (± 0.010), and 0.803 (± 0.005), respectively, outperforming available classical CNN models.

Conclusion: The NUN architecture provides better classification performance and a more straightforward visualization of the results than CNNs.
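As a rough, hypothetical sketch of the CNN-to-graph hand-off described above (the abstract does not specify the NUN's actual layers or edge construction), the following TensorFlow snippet converts a CNN feature map into graph nodes with a similarity-based adjacency and applies one simplified graph-convolution step.

```python
import tensorflow as tf

def feature_map_to_graph(fmap):
    """Flatten a CNN feature map (H, W, C) into graph nodes and build a
    dense adjacency from pairwise cosine similarity of node features.
    This edge construction is an assumption for illustration."""
    h, w, c = fmap.shape
    nodes = tf.reshape(fmap, (h * w, c))    # one node per spatial location
    normed = tf.math.l2_normalize(nodes, axis=1)
    adj = tf.nn.relu(tf.matmul(normed, normed, transpose_b=True))  # (N, N)
    return nodes, adj

def graph_conv(nodes, adj, weights):
    """One simplified graph-convolution step: row-normalise A, then XW."""
    deg = tf.reduce_sum(adj, axis=1, keepdims=True)
    adj_norm = adj / tf.maximum(deg, 1e-6)
    return tf.nn.relu(tf.matmul(adj_norm, tf.matmul(nodes, weights)))

# Example: a 16x16x256 feature map becomes a 256-node graph whose node
# features are then updated by one graph-convolution step.
fmap = tf.random.normal((16, 16, 256))
nodes, adj = feature_map_to_graph(fmap)
w = tf.random.normal((256, 64))
out = graph_conv(nodes, adj, w)   # (256, 64) updated node features
```

Because the adjacency is kept as an explicit (N, N) matrix, the learned long-range interactions between image regions can be inspected directly, which is the kind of visualization the abstract attributes to the NUN.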