2020
DOI: 10.48550/arxiv.2006.02570
Preprint

Exploration of Interpretability Techniques for Deep COVID-19 Classification using Chest X-ray Images

Abstract: The outbreak of COVID-19 has shocked the entire world with its fairly rapid spread and has challenged different sectors. One of the most effective ways to limit its spread is the early and accurate diagnosis of infected patients. Medical imaging such as X-ray and Computed Tomography (CT) combined with the potential of Artificial Intelligence (AI) plays an essential role in supporting the medical staff in the diagnosis process. Thereby, five different deep learning models (ResNet18, ResNet34, InceptionV3, Incep…
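As a rough illustration of the kind of setup the abstract describes, the sketch below pairs one of the listed architectures (ResNet18, via torchvision) with an attribution method from the Captum library to highlight the image regions behind a prediction. The class count, preprocessing, file name, and the choice of Integrated Gradients are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch (not the paper's pipeline): a pretrained ResNet18 classifier
# combined with an attribution method from Captum to highlight image regions
# that drive a COVID-19 prediction. Class count, preprocessing, and the choice
# of Integrated Gradients are illustrative assumptions.
import torch
from torchvision import models, transforms
from PIL import Image
from captum.attr import IntegratedGradients

NUM_CLASSES = 3  # assumption: e.g. healthy / other pneumonia / COVID-19

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.ToTensor(),
])

x = preprocess(Image.open("chest_xray.png")).unsqueeze(0)  # hypothetical file
pred = model(x).argmax(dim=1).item()

# Attribute the predicted class score back to the input pixels.
ig = IntegratedGradients(model)
attributions = ig.attribute(x, target=pred, n_steps=32)
print(pred, attributions.shape)  # per-pixel relevance map, same shape as input
```

In practice the resulting attribution map is overlaid on the X-ray to check whether the model relies on lung regions rather than artefacts, which is the kind of qualitative inspection interpretability techniques are used for here.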

Cited by 5 publications (4 citation statements) · References 33 publications
“…This paper presents the TorchEsegeta framework, which integrates various interpretability and explainability techniques available in different libraries and extends these techniques for segmentation models. It is noteworthy that the development of this pipeline started with the exploration of various interpretability techniques for classifying COVID-19 and other types of pneumonia [10]. An initial pipeline was developed under that project for classification models but only for 2D images.…”
Section: Methods
confidence: 99%
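The extension from classification to segmentation mentioned above can be pictured with a small sketch: attribution methods built for classifiers expect a scalar score per sample, so a segmentation network can be wrapped to reduce its dense per-pixel output to such a scalar before attribution is applied. This is a generic illustration under that assumption, not a description of TorchEsegeta's actual implementation; the toy network, class index, and choice of Saliency are hypothetical.

```python
# Sketch: reuse a classification-style attribution method for a segmentation
# model by wrapping it so its (N, C, H, W) output is reduced to one scalar per
# sample (here, the summed logit of one class). Illustrative only.
import torch
import torch.nn as nn
from captum.attr import Saliency

class ScalarisedSegmenter(nn.Module):
    """Reduce a (N, C, H, W) segmentation output to one score per sample."""
    def __init__(self, seg_model: nn.Module, target_class: int):
        super().__init__()
        self.seg_model = seg_model
        self.target_class = target_class

    def forward(self, x):
        logits = self.seg_model(x)                            # (N, C, H, W)
        return logits[:, self.target_class].sum(dim=(1, 2))   # (N,)

# hypothetical 2D segmentation network and input
seg_model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 2, 1))  # 2 segmentation classes
wrapped = ScalarisedSegmenter(seg_model, target_class=1).eval()

x = torch.randn(1, 1, 64, 64)
attr = Saliency(wrapped).attribute(x)  # pixel-wise gradient map
print(attr.shape)
```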
“…This model's network layers have the most information flowing through them, which makes it easier to extract the best characteristics.…”
Section: Data Segmentation
confidence: 99%
“…One of the main contributions of this research work is the TorchEsegeta framework which integrates various interpretability and explainability techniques available in different libraries and extends these techniques for segmentation models. It is noteworthy that the development of this pipeline started with the exploration of various interpretability techniques for classifying COVID-19 and other types of pneumoniae (Chatterjee et al, 2020b). An initial pipeline was developed under that project for classification models, but only for 2D images.…”
Section: Architecture of TorchEsegeta
confidence: 99%