2022
DOI: 10.1016/j.compbiomed.2021.105182
Fully automatic pipeline of convolutional neural networks and capsule networks to distinguish COVID-19 from community-acquired pneumonia via CT images

Cited by 12 publications (14 citation statements)
References 59 publications (64 reference statements)
“… DenseNet201+GradCam 2 98.80% 11 Rahimzadeh et al (2021) ( 15 ) 63,849 CT images. ResNet50V2+ FPN 2 98.49% 12 Qi et al (2022) ( 16 ) 10,000 CT images. UNet+ DenseNet121 2 97.10% 13 Abdel-Basset et al (2021) ( 17 ) 9593 CT images.…”
Section: Discussion
confidence: 99%
“…Qi et al. first used five models, UNet, LinkNet, R2UNet, Attention UNet, and UNet++, to segment CT images, and then used pretrained DenseNet121, InceptionV3 and ResNet50 for classification, using a total of over 10,000 CT images. The results show that in the binary classification of COVID-19 and CAP, LinkNet performs best in lung segmentation with a Dice coefficient of 0.9830, while DenseNet121 with a capsule network achieves a prediction accuracy of 97.10% ( 16 ).…”
Section: Introduction
confidence: 99%
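The Dice coefficient cited in the excerpt above is a standard overlap measure for segmentation quality; a minimal sketch follows, assuming binary NumPy lung masks (the function name and toy arrays are illustrative, not taken from the cited study):

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Toy example: two partially overlapping 4x4 masks
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:4] = 1   # 6 predicted voxels
true = np.zeros((4, 4), dtype=int); true[1:3, 0:3] = 1   # 6 ground-truth voxels
print(round(dice_coefficient(pred, true), 4))            # 0.6667 (4 voxels overlap)
```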
“…The second experiment attempted to determine whether the MIP images are useful. Following our previous study [ 13 ], all intrapulmonary slices were directly fed into the capsule network and the slice-level predictions were output. Majority voting was then utilized to produce the final patient predictions.…”
Section: Methods
confidence: 99%
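The excerpt above describes slice-level capsule-network predictions being fused into a patient-level label by majority voting; a minimal sketch of that aggregation step, assuming per-slice probabilities and a 0.5 binarization threshold (the function name and threshold are illustrative):

```python
import numpy as np

def patient_prediction(slice_probs, threshold=0.5):
    """Aggregate per-slice COVID-19 probabilities into one patient-level label.

    slice_probs: 1D array of probabilities, one per intrapulmonary slice.
    Each slice is binarized first, then the majority class wins.
    """
    slice_labels = (slice_probs >= threshold).astype(int)  # slice-level predictions
    votes_positive = slice_labels.sum()
    return int(votes_positive > len(slice_labels) / 2)     # 1 = COVID-19, 0 = CAP

# Example: 7 slices, 5 predicted positive -> patient classified as COVID-19
print(patient_prediction(np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.3, 0.2])))  # 1
```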
“…Three-dimensional (3D) volume analysis and two-dimensional (2D) slice analysis have been combined to provide CT images for the classification of COVID-19 and CAP [ 11 ]. Qi et al [ 12 ] proposed a multiple-instance learning method to distinguish COVID-19 from CAP, while Qi and his colleagues [ 13 ] developed a fully automatic deep-learning pipeline that can accurately distinguish COVID-19 from CAP using CT images by mimicking the diagnostic process of radiologists. On the basis of the above methods, more robust and advanced deep-learning models should be developed to improve the diagnosis of COVID-19.…”
Section: Introduction
confidence: 99%
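The multiple-instance learning method attributed to Qi et al. [12] treats a CT scan as a bag of slice instances; the excerpt does not specify the pooling rule used, so the sketch below assumes the classic max-pooling convention (a bag is positive if its most suspicious slice is):

```python
import numpy as np

def mil_bag_score(instance_scores):
    """Max-pooling MIL: a CT scan (bag) is scored by its most
    suspicious slice (instance)."""
    return float(instance_scores.max())

bag = np.array([0.1, 0.2, 0.85, 0.3])    # per-slice suspicion scores
print(mil_bag_score(bag) >= 0.5)         # True -> bag labelled COVID-19
```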
“…Apart from U-Net in [25], a SegNet-based attention gated (AG) mechanism guided by Dice Loss (DL) and Tversky Loss (TL) is proposed to identify the regions of interest. A LinkNet architecture is deployed in [26], where the concatenation operation of U-Net is replaced by addition, defying loss of spatial information. With the extracted segmented regions, at the classification stage, various supervised machine learning schemes are commonly utilized [15], [16], [17].…”
Section: Introduction
confidence: 99%
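The Dice Loss (DL) and Tversky Loss (TL) named in the excerpt above are closely related overlap losses; a minimal NumPy sketch using the standard Tversky formulation, in which alpha = beta = 0.5 reduces to the Dice loss (the alpha/beta defaults here are illustrative, not from the cited work):

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss: weights false positives (alpha) and false negatives (beta)
    asymmetrically; alpha = beta = 0.5 recovers the Dice loss."""
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index

def dice_loss(pred, target):
    return tversky_loss(pred, target, alpha=0.5, beta=0.5)

# Toy example on flattened soft predictions vs. a binary ground-truth mask
pred = np.array([0.9, 0.8, 0.2, 0.1])
target = np.array([1, 1, 1, 0])
print(dice_loss(pred, target), tversky_loss(pred, target))
```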