(Aim) To detect COVID-19 patients more accurately and precisely, we proposed a novel artificial intelligence model. (Methods) We used a previously proposed chest CT dataset containing four categories: COVID-19, community-acquired pneumonia, secondary pulmonary tuberculosis, and healthy subjects. First, we proposed a novel VGG-style base network (VSBN) as the backbone network. Second, a convolutional block attention module (CBAM) was introduced into our VSBN as the attention module. Third, an improved multiple-way data augmentation method was used to resist overfitting of our AI model. In all, our model was dubbed the 12-layer attention-based VGG-style network for COVID-19 (AVNC). (Results) The proposed AVNC achieved per-class sensitivity, precision, and F1 scores all above 95%. In particular, AVNC yielded a micro-averaged F1 score of 96.87%, higher than 11 state-of-the-art approaches. (Conclusion) The proposed AVNC is effective in recognizing COVID-19.
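The CBAM step above applies channel attention followed by spatial attention to a feature map. A minimal numpy sketch of this idea follows; note it is an illustration, not the paper's implementation: the reduction ratio behind `w1`/`w2` is assumed, and the 7x7 convolution CBAM uses for spatial attention is replaced here by a simple element-wise average of the pooled maps as a stand-in.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(feature, w1, w2):
    """Sketch of a CBAM-style block. feature: (C, H, W) array.
    w1: (C//r, C) and w2: (C, C//r) form the shared channel-attention MLP."""
    # --- channel attention: shared MLP over avg- and max-pooled descriptors ---
    avg_pool = feature.mean(axis=(1, 2))          # (C,)
    max_pool = feature.max(axis=(1, 2))           # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)  # (C,) -> (C,), ReLU hidden
    ch_att = sigmoid(mlp(avg_pool) + mlp(max_pool))
    feature = feature * ch_att[:, None, None]
    # --- spatial attention: pool across channels, then squash to (0, 1) ---
    avg_map = feature.mean(axis=0)                # (H, W)
    max_map = feature.max(axis=0)                 # (H, W)
    sp_att = sigmoid((avg_map + max_map) / 2.0)   # stand-in for CBAM's 7x7 conv
    return feature * sp_att[None, :, :]
```

Because both attention maps lie in (0, 1), the block can only rescale (never amplify) activations, which is the refinement behavior attention modules like CBAM rely on.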
(Aim) To build a more accurate and precise COVID-19 diagnosis system, this study proposed a novel deep rank-based average pooling network (DRAPNet) for COVID-19 recognition. (Methods) 521 subjects yielded 1164 slice images via the slice-level selection method. The 1164 slice images comprise four categories: COVID-19 positive, community-acquired pneumonia, secondary pulmonary tuberculosis, and healthy control. Our method first introduced an improved multiple-way data augmentation. Second, an n-conv rank-based average pooling module (NRAPM) was proposed, in which rank-based pooling, particularly rank-based average pooling (RAP), was employed to avoid overfitting. Third, a novel DRAPNet was proposed based on the NRAPM and inspired by the VGG network. Grad-CAM was used to generate heatmaps and provide an explainable analysis of our AI model. (Results) Our DRAPNet achieved a micro-averaged F1 score of 95.49% over 10 runs on the test set. The sensitivities of the four classes were 95.44%, 96.07%, 94.41%, and 96.07%, respectively. The precisions of the four classes were 96.45%, 95.22%, 95.05%, and 95.28%, respectively. The F1 scores of the four classes were 95.94%, 95.64%, 94.73%, and 95.67%, respectively. In addition, the confusion matrix was reported. (Conclusions) The DRAPNet is effective in diagnosing COVID-19 and other chest infectious diseases. RAP gives better results than four other methods: strided convolution, l2-norm pooling, average pooling, and max pooling.
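Rank-based average pooling sits between max pooling and average pooling: within each pooling window, activations are ranked and only the top-t ranked values are averaged. A minimal numpy sketch, assuming a 2D single-channel feature map and a rank threshold t (the actual hyperparameters in the paper may differ):

```python
import numpy as np

def rank_based_average_pooling(x, window=2, t=2):
    """RAP sketch: average the t highest-ranked activations in each
    non-overlapping window x window patch of a 2D feature map."""
    H, W = x.shape
    out = np.zeros((H // window, W // window))
    for i in range(0, H, window):
        for j in range(0, W, window):
            patch = x[i:i + window, j:j + window].ravel()
            top = np.sort(patch)[::-1][:t]      # t highest-ranked activations
            out[i // window, j // window] = top.mean()
    return out
```

Note the two limiting cases: t = 1 recovers max pooling and t = window*window recovers average pooling, which is exactly why RAP can be compared head-to-head against both, as in the conclusion above.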
COVID-19 pneumonia emerged in December 2019 and has caused heavy casualties and huge economic losses. In this study, we intended to develop a computer-aided diagnosis system based on artificial intelligence to automatically identify COVID-19 in chest computed tomography images. We utilized transfer learning to obtain the image-level representation (ILR) based on the backbone deep convolutional neural network. Then, a novel neighboring aware representation (NAR) was proposed to exploit the neighboring relationships between the ILR vectors. To obtain the neighboring information in the feature space of the ILRs, an ILR graph was generated based on the k-nearest neighbors algorithm, in which each ILR was linked with its k nearest neighboring ILRs. Afterward, the NARs were computed by fusing the ILRs with the graph. On the basis of this representation, a novel end-to-end COVID-19 classification architecture called the neighboring aware graph neural network (NAGNN) was proposed. Private and public datasets were used for evaluation in the experiments. Results revealed that our NAGNN outperformed all 10 state-of-the-art methods in terms of generalization ability. Therefore, the proposed NAGNN is effective in detecting COVID-19 and can be used in clinical diagnosis.
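The graph construction and fusion steps above can be sketched as follows. This is a simplified stand-in for the paper's method: the fusion here is a plain convex combination of each ILR with the mean of its k-nearest neighbors, and the mixing weight `alpha` is an assumed illustrative parameter, not taken from the paper.

```python
import numpy as np

def knn_graph(ilrs, k):
    """Adjacency matrix linking each ILR to its k nearest
    neighbours under Euclidean distance (self-links excluded)."""
    n = len(ilrs)
    d = np.linalg.norm(ilrs[:, None, :] - ilrs[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # no self-neighbours
    adj = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d[i])[:k]:
            adj[i, j] = 1.0
    return adj

def neighboring_aware_representation(ilrs, k=2, alpha=0.5):
    """Fuse each ILR with the mean of its graph neighbours -- a simple
    stand-in for the NAR fusion step; alpha is an assumed mixing weight."""
    adj = knn_graph(ilrs, k)
    neigh_mean = adj @ ilrs / adj.sum(axis=1, keepdims=True)
    return (1.0 - alpha) * ilrs + alpha * neigh_mean
```

The key property the sketch preserves is that each output vector now carries information from its feature-space neighborhood, which is what a GNN layer over the ILR graph exploits.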
(Aim) COVID-19 is an ongoing infectious disease. It had caused more than 107.45 million confirmed cases and 2.35 million deaths as of 11 February 2021. Traditional computer vision methods have achieved promising results on automatic smart diagnosis. (Method) This study aims to propose a novel deep learning method that can obtain better performance. We use the pseudo-Zernike moment (PZM), derived from the Zernike moment, as the extracted feature. Two settings are introduced: (i) image plane over the unit circle; and (ii) image plane inside the unit circle. Afterward, we use a deep-stacked sparse autoencoder (DSSAE) as the classifier. Besides, multiple-way data augmentation is chosen to overcome overfitting; it is based on Gaussian noise, salt-and-pepper noise, speckle noise, horizontal and vertical shear, rotation, Gamma correction, and random translation and scaling. (Results) 10 runs of 10-fold cross-validation show that our PZM-DSSAE method achieves a sensitivity of 92.06% ± 1.54%, a specificity of 92.56% ± 1.06%, a precision of 92.53% ± 1.03%, and an accuracy of 92.31% ± 1.08%. Its F1 score, MCC, and FMI reach 92.29% ± 1.10%, 84.64% ± 2.15%, and 92.29% ± 1.10%, respectively. The AUC of our model is 0.9576. (Conclusion) We demonstrate that "image plane over the unit circle" obtains better results than "image plane inside the unit circle." Besides, the proposed PZM-DSSAE model is better than eight state-of-the-art approaches.
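The multiple-way augmentation listed above generates several perturbed copies of each training image. A minimal numpy sketch of the noise-based "ways" follows; the noise variances, salt-and-pepper density, and gamma value are assumed illustrative parameters, and the geometric ways (shear, rotation, translation, scaling) are omitted here because they would need an image-processing library such as scipy.ndimage.

```python
import numpy as np

def augment_ways(img, rng):
    """Apply a few of the listed augmentation 'ways' to an image with
    pixel values in [0, 1]; returns one augmented copy per way."""
    ways = {}
    # Gaussian noise (assumed sigma = 0.05)
    ways["gaussian"] = np.clip(img + rng.normal(0, 0.05, img.shape), 0, 1)
    # salt-and-pepper noise (assumed 2% salt, 2% pepper)
    sp = img.copy()
    mask = rng.random(img.shape)
    sp[mask < 0.02] = 0.0            # pepper
    sp[mask > 0.98] = 1.0            # salt
    ways["salt_pepper"] = sp
    # speckle (multiplicative) noise, assumed sigma = 0.1
    ways["speckle"] = np.clip(img * (1 + rng.normal(0, 0.1, img.shape)), 0, 1)
    # Gamma correction with an assumed gamma = 0.8
    ways["gamma"] = img ** 0.8
    return ways
```

Each way yields a distinct training sample, so combining all of them multiplies the effective training-set size, which is how this scheme counters overfitting.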
(Aim) COVID-19 had caused 6.26 million deaths and 522.06 million confirmed cases as of 17 May 2022. Chest computed tomography is a precise way to help clinicians diagnose COVID-19 patients. (Method) Two datasets are chosen for this study. Multiple-way data augmentation, including speckle noise, random translation, scaling, salt-and-pepper noise, vertical shear, Gamma correction, rotation, Gaussian noise, and horizontal shear, is harnessed to increase the size of the training set. Then, the SqueezeNet (SN) with complex bypass is used to generate SN features. Finally, the extreme learning machine (ELM) serves as the classifier due to its simplicity of use, quick learning speed, and great generalization performance. The number of hidden neurons in the ELM is set to 2000. Ten runs of 10-fold cross-validation are implemented to generate impartial results. (Result) For the 296-image dataset, our SNELM model attains a sensitivity of 96.35 ± 1.50%, a specificity of 96.08 ± 1.05%, a precision of 96.10 ± 1.00%, and an accuracy of 96.22 ± 0.94%. For the 640-image dataset, the SNELM attains a sensitivity of 96.00 ± 1.25%, a specificity of 96.28 ± 1.16%, a precision of 96.28 ± 1.13%, and an accuracy of 96.14 ± 0.96%. (Conclusion) The proposed SNELM model is successful in diagnosing COVID-19. The performance of our model is higher than that of seven state-of-the-art COVID-19 recognition models.
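The ELM's quick learning speed comes from its training procedure: the hidden layer is random and fixed, and only the output weights are solved in closed form by least squares. A minimal numpy sketch, assuming a tanh hidden activation (the specific activation and the regression-style solve are illustrative choices, not necessarily the paper's exact configuration):

```python
import numpy as np

def elm_train(X, Y, n_hidden, rng):
    """ELM sketch: random fixed hidden layer, closed-form output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (fixed)
    b = rng.normal(size=n_hidden)                # random biases (fixed)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because no gradient iterations are needed, the only tuning knob is the hidden-layer width (set to 2000 in the study above), and training cost is dominated by one pseudo-inverse.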