Recently, the whole world has been affected by the newly discovered coronavirus disease (COVID-19). Caused by the SARS-CoV-2 virus, COVID-19 has proved hazardous, severely affecting people's health. It causes respiratory illness, especially in people who already suffer from other diseases. The limited availability of test kits, together with symptoms similar to those of other diseases such as pneumonia, has made this disease deadly, claiming the lives of millions of people. Artificial intelligence models have proved very successful in the diagnosis of various diseases in the biomedical field. In this paper, an integrated stacked deep convolutional network, InstaCovNet-19, is proposed. The proposed model makes use of various pre-trained models such as ResNet101, Xception, InceptionV3, MobileNet, and NASNet to compensate for a relatively small amount of training data. It detects COVID-19 and pneumonia by identifying the abnormalities these diseases cause in chest X-ray images of infected patients. The proposed model achieves an accuracy of 99.08% on 3-class (COVID-19, pneumonia, normal) classification and an accuracy of 99.53% on 2-class (COVID, non-COVID) classification. It achieves an average recall, F1 score, and precision of 99% each on the ternary classification, while achieving a precision of 100% and a recall of 99% on the COVID class of the binary classification. InstaCovNet-19's ability to detect COVID-19 without any human intervention, at economical cost and with high accuracy, can greatly benefit humankind in this age of quarantine.
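The abstract above describes stacking the outputs of several pre-trained networks into one integrated classifier. As a minimal, framework-free sketch of that idea (not the authors' exact architecture), the snippet below fuses the class-probability outputs of several hypothetical base models by weighted averaging and returns the final predicted class; the model names, weights, and fusion rule are illustrative assumptions only.

```python
import numpy as np

def stack_predictions(prob_list, weights=None):
    """Fuse class-probability outputs of several base models.

    prob_list: list of (n_samples, n_classes) arrays, one per base model
               (e.g. softmax outputs of ResNet101, Xception, ...).
    weights:   optional per-model weights; defaults to a uniform average.
    Returns the fused class index per sample.
    """
    probs = np.stack(prob_list, axis=0)       # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    weights = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    fused = (weights * probs).sum(axis=0)     # weighted average of probabilities
    return fused.argmax(axis=1)               # final class per sample

# Toy example: two base models, two samples, three classes
# (COVID-19, pneumonia, normal) -- the numbers are made up.
model_a = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.8, 0.1]])
model_b = np.array([[0.6, 0.3, 0.1],
                    [0.2, 0.2, 0.6]])
labels = stack_predictions([model_a, model_b])
```

In practice each base model would also be fine-tuned on the X-ray data before its outputs are stacked; averaging probabilities is just one of several possible fusion rules (a learned meta-classifier is another).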
Recent decades have witnessed rapid development in the field of medical image segmentation. Deep learning-based fully convolutional neural networks have played a significant role in the development of automated medical image segmentation models. Though immensely effective, such networks only take into account localized features and are unable to capitalize on the global context of medical images. In this paper, two deep learning-based models are proposed, namely USegTransformer-P and USegTransformer-S. The proposed models capitalize on both local and global features by amalgamating transformer-based encoders and convolution-based encoders to segment medical images with high precision. Both models deliver promising results, outperforming previous state-of-the-art models on various segmentation tasks such as brain tumor, lung nodule, skin lesion, and nuclei segmentation. The authors believe that the ability of USegTransformer-P and USegTransformer-S to perform segmentation with high precision could remarkably benefit medical practitioners and radiologists around the world.
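The key idea in the abstract above is combining convolutional (local) and transformer (global) feature extraction. The following toy 1-D NumPy sketch illustrates that combination only: a small convolution mixes each position with its neighbours, a single-head self-attention lets every position attend to every other, and the two branches are fused by elementwise addition (a parallel fusion in the spirit of the "-P" variant). The function names and the addition-based fusion are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def local_branch(x, kernel):
    # 1-D "same"-padded convolution: each output mixes a small
    # neighbourhood of the input -- the localized features.
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(kernel)] @ kernel for i in range(len(x))])

def global_branch(x):
    # Single-head self-attention over all positions: every output is a
    # softmax-weighted mix of the whole input -- the global context.
    q = k = v = x.reshape(-1, 1)                 # trivial 1-D projections
    scores = (q @ k.T) / np.sqrt(q.shape[1])
    scores = scores - scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=1, keepdims=True)
    return (attn @ v).ravel()

def fuse_parallel(x, kernel):
    # Parallel fusion: add local and global features elementwise.
    return local_branch(x, kernel) + global_branch(x)
```

A sequential variant (in the spirit of "-S") would instead feed one branch's output into the other, e.g. `global_branch(local_branch(x, kernel))`; real models learn the projections and kernels rather than fixing them.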