The coronavirus disease (COVID-19) is rapidly spreading all over the world, and has infected more than 1,436,000 people in more than 200 countries and territories as of April 9, 2020. Detecting COVID-19 at an early stage is essential to deliver proper healthcare to patients and to protect the uninfected population. To this end, we develop a dual-sampling attention network to automatically distinguish COVID-19 from community-acquired pneumonia (CAP) in chest computed tomography (CT). In particular, we propose a novel online attention module with a 3D convolutional neural network (CNN) to focus on the infection regions in the lungs when making diagnostic decisions. Note that the sizes of the infection regions are imbalanced between COVID-19 and CAP, partially due to the fast progression of COVID-19 after symptom onset. We therefore develop a dual-sampling strategy to mitigate this imbalanced learning. Our method is evaluated on (to the best of our knowledge) the largest multi-center CT dataset for COVID-19, collected from 8 hospitals. In the training-validation stage, we collect 2186 CT scans from 1588 patients for 5-fold cross-validation. In the testing stage, we employ another independent large-scale testing dataset of 2796 CT scans from 2057 patients. Results show that our algorithm can identify COVID-19 images with an area under the receiver operating characteristic curve (AUC) of 0.944.
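The abstract's dual-sampling idea can be illustrated with a minimal sketch: half of each training batch is drawn uniformly and half is drawn with weights that up-sample rare infection-size ranges. The quantile binning and function names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def size_balanced_weights(infection_sizes, n_bins=4):
    """Weight each scan inversely to the frequency of its
    infection-size bin, so rare size ranges are sampled more often."""
    sizes = np.asarray(infection_sizes)
    bins = np.quantile(sizes, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(bins, sizes, side="right") - 1,
                  0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    w = 1.0 / counts[idx]
    return w / w.sum()

def dual_sample(infection_sizes, batch=8):
    """Draw half the batch uniformly and half size-balanced,
    mirroring a dual-sampling training strategy."""
    n = len(infection_sizes)
    uniform = rng.choice(n, size=batch // 2, replace=True)
    balanced = rng.choice(n, size=batch - batch // 2, replace=True,
                          p=size_balanced_weights(infection_sizes))
    return np.concatenate([uniform, balanced])
```

In a full pipeline the two halves would typically feed two network branches (or two training passes) whose predictions are ensembled, but that detail is beyond this sketch.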
Rationale: Some patients with coronavirus disease 2019 (COVID-19) rapidly develop respiratory failure or even die, underscoring the need for early identification of patients at elevated risk of severe illness. This study aims to quantify pneumonia lesions by computed tomography (CT) in the early days of illness to predict progression to severe illness in a cohort of COVID-19 patients. Methods: This retrospective cohort study included confirmed COVID-19 patients. Three quantitative CT features of pneumonia lesions were automatically calculated using artificial intelligence algorithms, representing the percentages of ground-glass opacity volume (PGV), semi-consolidation volume (PSV), and consolidation volume (PCV) in both lungs. CT features, acute physiology and chronic health evaluation II (APACHE-II) score, neutrophil-to-lymphocyte ratio (NLR), and d-dimer, on day 0 (hospital admission) and day 4, were collected to predict the occurrence of severe illness within a 28-day follow-up using both logistic regression and Cox proportional hazards models. Results: We included 134 patients, of whom 19 (14.2%) developed severe illness. CT features on day 0 and day 4, as well as their changes from day 0 to day 4, showed predictive capability. Changes in CT features from day 0 to day 4 performed best in the prediction (area under the receiver operating characteristic curve = 0.93, 95% confidence interval [CI] 0.87~0.99; C-index = 0.88, 95% CI 0.81~0.95). The hazard ratios of PGV and PCV were 1.39 (95% CI 1.05~1.84, P=0.023) and 1.67 (95% CI 1.17~2.38, P=0.005), respectively. CT features on day 4, and their changes from day 0 to day 4, adjusted for age and gender, outperformed APACHE-II, NLR, and d-dimer. Conclusions: CT quantification of pneumonia lesions can predict progression to severe illness early and non-invasively, providing a promising prognostic indicator for clinical management of COVID-19.
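The core predictor in this study is the change in lesion-volume percentages between admission and day 4, fed into a logistic model. A minimal sketch of that setup, using a plain gradient-descent logistic regression in NumPy (the feature names follow the abstract; everything else is an illustrative stand-in for the study's statistical software):

```python
import numpy as np

def delta_features(day0, day4):
    """Change in (PGV, PSV, PCV) from admission (day 0) to day 4."""
    return np.asarray(day4) - np.asarray(day0)

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression with an intercept."""
    X = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict_proba(w, X):
    """Predicted probability of progressing to severe illness."""
    X = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X @ w))
```

The Cox proportional hazards side of the analysis (hazard ratios, C-index) needs survival-analysis tooling such as `lifelines` and is not reproduced here.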
Objective: CT provides rich diagnostic and severity information for COVID-19 in clinical practice. However, there is no computerized tool to automatically delineate COVID-19 infection regions in chest CT scans for quantitative assessment in advanced applications such as severity prediction. The aim of this study is to develop a deep learning (DL) based method for automatic segmentation and quantification of infection regions, as well as the entire lungs, from chest CT scans. Methods: The DL-based segmentation method employs the "VB-Net" neural network to segment COVID-19 infection regions in CT scans. The developed DL-based segmentation system is trained on CT scans from 249 COVID-19 patients, and further validated on CT scans from another 300 COVID-19 patients. To accelerate the manual delineation of CT scans for training, a human-involved-model-iterations (HIMI) strategy is also adopted to assist radiologists in refining the automatic annotation of each case.
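Once the infection regions and lungs are segmented, quantification reduces to counting voxels inside the masks. A minimal sketch of a PGV/PCV-style measure, assuming boolean 3D masks from the segmentation model (the function name and voxel-volume parameter are illustrative):

```python
import numpy as np

def lesion_volume_percentages(infection_mask, lung_mask, voxel_volume_ml=1.0):
    """Quantify infection burden as the percentage of lung volume
    occupied by segmented lesions, plus the absolute lesion volume.

    Both masks are boolean arrays of the same shape; voxel_volume_ml
    converts voxel counts to millilitres for a given scan resolution."""
    lung_voxels = lung_mask.sum()
    lesion_voxels = np.logical_and(infection_mask, lung_mask).sum()
    percent = 100.0 * lesion_voxels / lung_voxels
    volume_ml = lesion_voxels * voxel_volume_ml
    return percent, volume_ml
```

Measures like these are exactly what downstream severity-prediction models (as in the previous abstract) consume as features.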
The worldwide spread of coronavirus disease (COVID-19) has become a threat to global public health. It is of great importance to rapidly and accurately screen and distinguish patients with COVID-19 from those with community-acquired pneumonia (CAP). In this study, a total of 1,658 patients with COVID-19 and 1,027 patients with CAP were enrolled and underwent thin-section CT. All images were preprocessed to obtain segmentations of the infections and lung fields. A set of handcrafted location-specific features was proposed to best capture the COVID-19 distribution pattern, in comparison to the conventional CT severity score (CT-SS) and radiomics features. An infection size-aware random forest method (iSARF) was proposed for discriminating COVID-19 from CAP. Experimental results show that the proposed method yielded its best performance when using the handcrafted features, with a sensitivity of 90.7%, a specificity of 87.2%, and an accuracy of 89.4%, outperforming state-of-the-art classifiers. Additional tests on 734 subjects with thick-slice images demonstrate good generalizability. It is anticipated that our proposed framework could assist clinical decision making.
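The "infection size-aware" idea can be sketched as routing each subject to a classifier trained only on subjects with a similar infection size. In this illustrative version a simple nearest-centroid model stands in for the per-group random forests; the class name, binning scheme, and stand-in model are all assumptions, not the iSARF implementation.

```python
import numpy as np

class SizeAwareClassifier:
    """Group samples by infection size; fit one simple model per group."""

    def __init__(self, size_edges):
        self.size_edges = np.asarray(size_edges)  # boundaries between size bins
        self.models = {}

    def _bin(self, sizes):
        return np.searchsorted(self.size_edges, sizes)

    def fit(self, X, sizes, y):
        b = self._bin(sizes)
        for g in np.unique(b):
            sel = b == g
            # centroid of each class within this size group
            self.models[g] = {c: X[sel & (y == c)].mean(axis=0)
                              for c in np.unique(y[sel])}
        return self

    def predict(self, X, sizes):
        b = self._bin(sizes)
        out = np.empty(len(X), dtype=int)
        for i, (x, g) in enumerate(zip(X, b)):
            cents = self.models[g]
            out[i] = min(cents, key=lambda c: np.linalg.norm(x - cents[c]))
        return out
```

In practice each group's model would be a random forest (e.g. scikit-learn's `RandomForestClassifier`), but the routing-by-size structure is the point of the sketch.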
Chest computed tomography (CT) has become an effective tool to assist the diagnosis of coronavirus disease 2019 (COVID-19). Due to the worldwide outbreak of COVID-19, using computer-aided diagnosis techniques for COVID-19 classification based on CT images could largely alleviate the burden on clinicians. In this paper, we propose an Adaptive Feature Selection guided Deep Forest (AFS-DF) for COVID-19 classification based on chest CT images. Specifically, we first extract location-specific features from CT images. Then, in order to capture the high-level representation of these features with relatively small-scale data, we leverage a deep forest model to learn a high-level representation of the features. Moreover, we propose a feature selection method based on the trained deep forest model to reduce the redundancy of features, where the feature selection can be adaptively incorporated into the COVID-19 classification model. We evaluated our proposed AFS-DF on a COVID-19 dataset with 1495 patients with COVID-19 and 1027 patients with community-acquired pneumonia (CAP). The accuracy (ACC), sensitivity (SEN), specificity (SPE), AUC, precision and F1-score achieved by our method are 91.79%, 93.05%, 89.95%, 96.35%, 93.10% and 93.07%, respectively. Experimental results on the COVID-19 dataset suggest that the proposed AFS-DF achieves promising classification performance.
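The metrics this abstract reports (ACC, SEN, SPE, precision, F1) all derive from the binary confusion matrix. A minimal, dependency-free sketch of computing them from predicted labels (the function name is illustrative):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """ACC, SEN, SPE, precision and F1 from binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    acc = (tp + tn) / len(y_true)
    sen = tp / (tp + fn) if tp + fn else 0.0  # sensitivity / recall
    spe = tn / (tn + fp) if tn + fp else 0.0  # specificity
    pre = tp / (tp + fp) if tp + fp else 0.0  # precision
    f1 = 2 * pre * sen / (pre + sen) if pre + sen else 0.0
    return {"ACC": acc, "SEN": sen, "SPE": spe, "precision": pre, "F1": f1}
```

AUC, the one remaining metric in the abstract, needs continuous scores rather than hard labels and is therefore computed separately (e.g. with scikit-learn's `roc_auc_score`).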
Background: To evaluate the diagnostic efficacy of Densely Connected Convolutional Networks (DenseNet) for detection of COVID-19 features on high-resolution computed tomography (HRCT). Methods: The Ethics Committee of our institution approved the protocol of this study and waived the requirement for patient informed consent. Two hundred ninety-five subjects were enrolled in this study (healthy persons: 149; COVID-19 patients: 146) and divided into three non-overlapping cohorts (training set: n=135, 69 healthy, 66 patients; validation set: n=20, 10 healthy, 10 patients; test set: n=140, 70 healthy, 70 patients). The DenseNet was trained and tested to classify the images as showing manifestations of COVID-19 or as healthy. A radiologist also blindly evaluated all the test images and rechecked the cases misdiagnosed by DenseNet. Receiver operating characteristic (ROC) curves and areas under the curve (AUCs) were used to assess model performance. The sensitivity, specificity and accuracy of the DenseNet model and the radiologist were also calculated. Results: The DenseNet model yielded an AUC of 0.99 (95% CI: 0.958-1.0) in the validation set and 0.98 (95% CI: 0.972-0.995) in the test set. The threshold value was selected as 0.8; for the validation and test sets, the accuracies were 95% and 92%, the sensitivities were 100% and 97%, the specificities were 90% and 87%, and the F1 values were 95% and 93%, respectively. The sensitivity of the radiologist was 94%, the specificity was 96%, and the accuracy was 95%. Conclusions: Deep learning (DL) with DenseNet can accurately classify COVID-19 on HRCT with an AUC of 0.98, which can reduce the misdiagnosis rate (combined with radiologists' evaluation) and radiologists' workload.
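The abstract mentions selecting 0.8 as the operating threshold. One common way such a cut-off is chosen is by maximising Youden's J (sensitivity + specificity − 1) on the validation set; the sketch below shows that procedure on raw model scores (the abstract does not state how 0.8 was actually chosen, so this is an illustrative assumption).

```python
import numpy as np

def sens_spec_at(scores, labels, thr):
    """Sensitivity and specificity when predicting positive at score >= thr."""
    pred = scores >= thr
    pos, neg = labels == 1, labels == 0
    sens = (pred & pos).sum() / pos.sum()
    spec = (~pred & neg).sum() / neg.sum()
    return sens, spec

def best_threshold(scores, labels):
    """Pick the operating threshold maximising Youden's J = sens + spec - 1."""
    thrs = np.unique(scores)
    js = [sum(sens_spec_at(scores, labels, t)) - 1 for t in thrs]
    return thrs[int(np.argmax(js))]
```

The threshold found on the validation set is then frozen and applied unchanged to the test set, which is why the abstract reports one threshold but two sets of accuracies.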