2022
DOI: 10.3390/jcm11113013
Explainable Vision Transformers and Radiomics for COVID-19 Detection in Chest X-rays

Abstract: The rapid spread of COVID-19 across the globe since its emergence has pushed many countries’ healthcare systems to the verge of collapse. To restrict the spread of the disease and lessen the ongoing cost on the healthcare system, it is critical to appropriately identify COVID-19-positive individuals and isolate them as soon as possible. The primary COVID-19 screening test, RT-PCR, although accurate and reliable, has a long turn-around time. More recently, various researchers have demonstrated the use of deep l…

Cited by 29 publications (11 citation statements)
References 40 publications
“…As can be seen in Table 2, the accuracy of the model is 99.3%, precision is 99.0%, sensitivity is 99.5%, specificity is 99.0%, and F1-score is 99.2%. For the chest X-ray dataset with 3 categories, we used the methods of Li (2021) [50], Shi (2021) [51], Mondal (2021) [40], Chetoui (2022) [38], and Pan (2023) [52] for comparison with our work. As can be seen in Table 2, the accuracy of the model is 96.8%, precision is 97.8%, sensitivity is 97.1%, specificity is 98.9%, and F1-score is 97.4%.…”
Section: Results and Analysis
confidence: 99%
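The metrics quoted above (accuracy, precision, sensitivity, specificity, F1-score) all derive from the counts of a binary confusion matrix. A minimal sketch of those formulas, using illustrative counts rather than the actual confusion matrix of any cited model:

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the standard screening metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # recall / true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, specificity, f1

# Illustrative counts only -- not taken from the cited paper.
acc, prec, sens, spec, f1 = classification_metrics(tp=199, fp=2, tn=198, fn=1)
print(f"accuracy={acc:.3f} precision={prec:.3f} "
      f"sensitivity={sens:.3f} specificity={spec:.3f} f1={f1:.3f}")
```

Note that F1 is the harmonic mean of precision and sensitivity only, so a model can report a high F1 while specificity is computed separately from the negatives.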
“…For medical images, Komorowski et al. (2023) investigated interpretation methods for ViT classifications and evaluated these methods in terms of faithfulness, sensitivity, and complexity [37]. Chetoui et al. (2022) [38] and Ukwuoma et al. (2022) [39] applied ViTs to COVID-19 detection and provided basic interpretations of the results. XViT-COS [40] has been proposed for COVID-19 detection and offers explainability through clinically interpretable visualizations.…”
Section: ViTs for COVID-19 Detection
confidence: 99%
“…Thus, the implementation of this method in a high-volume diagnostic setting may be self-limiting; that is, the speed of validating the results depends on the availability of a radiologist and the volume of images to be reviewed [ 4 , 5 , 6 , 9 , 10 , 11 ]. Consequently, the automatic detection of lung disease by AI is currently a highly valued and frequently evaluated concept in the fields of medical informatics research and radiology [ 4 , 12 ]. Several studies are already available.…”
Section: Introduction
confidence: 99%
“…In this work, we propose to perform the PCa aggressiveness classification task from T2w images by exploiting an ensemble of vision transformers (ViTs) [ 24 ]. ViTs are becoming increasingly popular in the medical imaging domain [ 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 ], usually outperforming classical CNNs [ 35 , 36 ], which are one of the most significant networks in the deep learning field [ 37 ]. The existing literature typically employs ViTs in transfer learning scenarios by pre-training them on large datasets of natural images and fine-tuning them on specific datasets [ 27 , 28 , 38 ].…”
Section: Introduction
confidence: 99%
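An ensemble of classifiers such as the ViT ensemble described above typically combines its members' class probabilities by averaging (soft voting). A minimal, framework-agnostic sketch of that combination step, with made-up probability vectors standing in for the softmax outputs of individual fine-tuned ViTs:

```python
def soft_vote(prob_lists):
    """Average per-class probabilities across ensemble members and
    return (winning class index, averaged probability vector)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Hypothetical softmax outputs of three ensemble members for one image
# (classes: 0 = low aggressiveness, 1 = high aggressiveness -- illustrative only).
member_probs = [
    [0.40, 0.60],
    [0.55, 0.45],
    [0.30, 0.70],
]
winner, avg = soft_vote(member_probs)
```

Soft voting lets a confident minority outweigh a hesitant majority, which is one reason it often beats hard (majority) voting when member models are calibrated differently.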