During the recent pandemic, accurate and rapid testing of patients remained a critical task in diagnosing COVID-19 and controlling its spread. Because of the sudden surge in cases, most countries faced test-kit shortages and low testing rates. The literature has shown chest X-rays to be a potential screening tool for COVID-19 patients, but manually reading X-ray reports is time-consuming and error-prone. Considering these limitations and the advances in data science, we propose a Vision Transformer-based deep learning pipeline for COVID-19 detection from chest X-ray imaging. Because large data sets are scarce, we collected and aggregated three open-source chest X-ray data sets into a 30K-image data set, which is, to our knowledge, the largest publicly available collection of chest X-ray images in this domain. Our proposed transformer model effectively differentiates COVID-19 from normal chest X-rays with an accuracy of 98% and an AUC score of 99% in the binary classification task, and it distinguishes COVID-19, normal, and pneumonia patients' X-rays with an accuracy of 92% and an AUC score of 98% in the multi-class classification task. For evaluation on our data set, we fine-tuned several models widely used in the literature as baselines, namely EfficientNetB0, InceptionV3, ResNet50, MobileNetV3, Xception, and DenseNet-121; our proposed transformer model outperformed all of them on every metric. In addition, we created a Grad-CAM-based visualization that makes our approach interpretable by radiologists and can be used to monitor disease progression in the affected lungs, assisting healthcare professionals.
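The Grad-CAM idea mentioned above can be illustrated with a minimal NumPy sketch: channel weights are obtained by global-average-pooling the gradients of the class score with respect to the last feature maps, the maps are combined with those weights, and a ReLU keeps only positive evidence. The `grad_cam` function and the random activations below are illustrative stand-ins, not the authors' actual pipeline.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap.

    feature_maps: (K, H, W) activations of the network's last feature block.
    gradients:    (K, H, W) gradients of the class score w.r.t. those maps.
    Returns an (H, W) heatmap normalised to [0, 1].
    """
    # Channel weights alpha_k: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                       # shape (K,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalise to [0, 1] so the map can be overlaid on the chest X-ray.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with random tensors standing in for a real model's outputs.
rng = np.random.default_rng(0)
maps = rng.random((8, 14, 14))
grads = rng.standard_normal((8, 14, 14))
heatmap = grad_cam(maps, grads)
print(heatmap.shape)  # (14, 14)
```

In practice the heatmap would be upsampled to the input resolution and blended with the X-ray so a radiologist can see which lung regions drove the prediction.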
Transmitting high volumes of data over a resource-constrained wireless sensor network (WSN) is challenging because of high energy consumption and large bandwidth requirements. Adaptive block compressive sensing (ABCS) is one of the best solutions to the problems of high energy consumption and efficient data transmission. The ABCS framework can adapt the sampling rate to each block's feature information, assigning higher sampling rates to less compressible blocks and lower sampling rates to more compressible ones. In this paper, we propose a novel fuzzy rule-based adaptive compressive sensing approach that leverages the saliency and edge features of the image, making sampling-rate selection fully automatic. The block sampling ratio is adapted by a fuzzy logic system (FLS) that considers two important features: edge and saliency information. The proposed framework was evaluated on a standard data set, the Kodak data set, CCTV images, and the Set5 data set. It achieved average PSNRs of 34.26 dB and 33.2 dB and average SSIMs of 0.87 and 0.865 for the standard images and CCTV images, respectively. For the high-resolution Kodak and Set5 data set images, it achieved average PSNRs of 32.95 dB and 31.72 dB and SSIMs of 0.832 and 0.8, respectively. The experiments and result analysis show that the proposed method outperforms state-of-the-art methods on both subjective and objective evaluation metrics.
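The fuzzy mapping from block features to sampling rate can be sketched as follows. This is a hypothetical Mamdani-style toy, not the paper's actual rule base: triangular memberships grade each normalised feature as low/medium/high, a few rules push blocks with strong edge or saliency cues (less compressible) toward higher rates, and a weighted-average defuzzification yields the final ratio between assumed bounds `r_min` and `r_max`.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def block_sampling_rate(edge, saliency, r_min=0.1, r_max=0.7):
    """Map normalised edge and saliency features (both in [0, 1]) of one
    image block to a sampling ratio via a tiny illustrative rule base."""
    # Low / medium / high memberships for each feature.
    mems = lambda x: (tri(x, -0.5, 0.0, 0.5),
                      tri(x, 0.0, 0.5, 1.0),
                      tri(x, 0.5, 1.0, 1.5))
    e_lo, e_md, e_hi = mems(edge)
    s_lo, s_md, s_hi = mems(saliency)
    # Rules (firing strength, consequent level in [0, 1]): stronger edge or
    # saliency cues -> less compressible block -> higher sampling rate.
    rules = [
        (min(e_lo, s_lo), 0.0),                         # flat, non-salient
        (min(e_md, s_md), 0.5),                         # moderate detail
        (min(e_hi, s_hi), 1.0),                         # edges and saliency
        (max(min(e_hi, s_lo), min(e_lo, s_hi)), 0.7),   # one strong cue
    ]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules)
    level = num / den if den > 0 else 0.5   # weighted-average defuzzification
    return r_min + (r_max - r_min) * level

flat = block_sampling_rate(edge=0.05, saliency=0.05)
busy = block_sampling_rate(edge=0.9, saliency=0.9)
print(round(flat, 3), round(busy, 3))  # 0.13 0.64
```

A smooth, flat block thus receives a rate near the lower bound while an edge-rich salient block approaches the upper bound, which is the adaptivity the abstract describes.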