The novel coronavirus disease COVID‐19 has spread rapidly all over the world. Due to the increasing number of COVID‐19 cases, there is a dearth of testing kits. Therefore, there is a severe need for an automatic recognition system to help reduce the spread of the COVID‐19 virus. This work offers a decision support system based on chest X‐ray images to diagnose the presence of the COVID‐19 virus. A deep learning‐based computer‐aided decision support system can differentiate between COVID‐19 and pneumonia. Recently, convolutional neural networks (CNNs) have been designed for the diagnosis of COVID‐19 through chest radiography (chest X‐ray, CXR) images. However, CNN‐based decision support systems have some limitations: they suffer from a lack of view invariance and from loss of information due to down‐sampling. In this paper, a capsule network (CapsNet)‐based system named visual geometry group capsule network (VGG‐CapsNet) for the diagnosis of COVID‐19 is proposed. By using a capsule network, the authors remove the drawbacks found in CNN‐based decision support systems for the detection of COVID‐19. Simulation results show that VGG‐CapsNet performs better than the CNN‐CapsNet model for the diagnosis of COVID‐19. The proposed VGG‐CapsNet‐based system achieves 97% accuracy for COVID‐19 versus non‐COVID‐19 classification, and 92% accuracy for COVID‐19 versus normal versus viral pneumonia classification. The proposed system, available at https://github.com/shamiktiwari/COVID19_Xray, can be used to detect the presence of the COVID‐19 virus from chest radiographic images.
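A defining primitive of the capsule networks mentioned above is the "squash" non-linearity, which compresses a capsule's pose vector so that its length lies in [0, 1) and can be read as a detection probability, while preserving its direction. A minimal NumPy sketch of this function (independent of the paper's code; names are illustrative):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """CapsNet 'squash' non-linearity:
    v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||).
    Output length is strictly below 1; direction is preserved.
    """
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / np.sqrt(sq_norm + eps)

# Example: one capsule with an 2-D pose vector of length 5
caps = np.array([[3.0, 4.0]])
out = squash(caps)
print(np.linalg.norm(out, axis=-1))  # below 1, direction unchanged
```

Because the length of the squashed vector encodes entity presence, long pose vectors saturate toward 1 and short ones are suppressed toward 0, which is what allows capsules to retain viewpoint information that CNN pooling discards.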
The goal of image restoration is to improve a given image in some predefined sense. Restoration attempts to recover an image by modelling the degradation function and applying the inverse process. Motion blur is a common type of degradation caused by relative motion between an object and the camera. Motion blur can be modelled by a point spread function defined by two parameters: angle and length. Accurate estimation of these parameters is required for blind restoration of motion-blurred images. This paper compares different approaches to estimating the parameters of motion blur, namely direction and length, directly from the observed image, with and without the influence of Gaussian noise. The estimated motion blur parameters can then be used in a standard non-blind deconvolution algorithm. Simulation results compare the performance of the most common motion blur estimation methods.
Index Terms—motion blur, Hough transform, Radon transform, cepstral transform
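The two-parameter point spread function described above can be sketched directly: a linear motion blur PSF is a line segment of a given length and angle, normalized to sum to 1 so the blur conserves image energy. A minimal sketch, assuming uniform (constant-velocity) motion; the function name and sampling scheme are illustrative, not from the paper:

```python
import numpy as np

def motion_blur_psf(length, angle_deg, size=None):
    """Linear motion-blur point spread function (PSF).

    Draws a line segment of `length` pixels at `angle_deg` through the
    kernel centre, then normalizes so the entries sum to 1.
    """
    if size is None:
        size = int(length) | 1  # force an odd kernel size
    psf = np.zeros((size, size))
    center = size // 2
    theta = np.deg2rad(angle_deg)
    # Oversample points along the line and rasterize onto the grid
    for t in np.linspace(-length / 2, length / 2, num=4 * size):
        x = int(round(center + t * np.cos(theta)))
        y = int(round(center - t * np.sin(theta)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()

psf = motion_blur_psf(length=9, angle_deg=0, size=9)
# Horizontal blur: a single row of equal weights summing to 1
```

Convolving an image with this kernel simulates the degradation; the estimation methods compared in the paper recover `length` and `angle_deg` from the blurred image alone, after which a non-blind deconvolution algorithm can invert the blur.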
A phonocardiogram (PCG) signal represents the sounds and murmurs made by vibrations during a cardiac cycle. The acoustic wave generated by each heartbeat propagates through the chest wall and can be easily recorded by a low-cost, small, handheld digital stethoscope. It provides information such as heart rate, intensity, tone, quality, frequency, and the location of the various components of cardiac sound. Due to these characteristics, phonocardiogram signals can be used to detect heart conditions at an early stage in a non-invasive manner. In previous studies, the convolutional neural network (ConvNet) is the most studied architecture, fed by three main features, namely Mel-frequency cepstral (MFC) features, chroma energy normalized statistics (CENS), and the constant-Q transform (CQT). In this paper, the authors present a hybrid constant-Q transform (HCQT)-based CNN system for heart sound beat classification. The CQT, variable-Q transform (VQT), and HCQT are extracted from each phonocardiogram signal as acoustic features, which, together with the dominant MFCC features, are fed into five-layer regularized ConvNets. Based on an analysis of the literature in this domain, this is the first time the HCQT has been utilized for PCG signals. Experimental results show that the HCQT is more effective than the conventional CQT and the other investigated features. The proposed system achieves 96% accuracy in multi-class classification on the validation datasets, significantly outperforming the other models.
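The constant-Q transform underlying the features above places its analysis bins geometrically in frequency, so that every bin keeps a constant ratio Q of centre frequency to bandwidth; this is what makes it well suited to quasi-harmonic signals such as heart sounds. A small sketch of the bin spacing (illustrative helper, not the paper's code):

```python
import numpy as np

def cqt_frequencies(n_bins, fmin, bins_per_octave=12):
    """Geometrically spaced constant-Q centre frequencies:
    f_k = fmin * 2**(k / bins_per_octave), k = 0..n_bins-1.
    Each octave doubles the frequency, giving log-frequency resolution.
    """
    k = np.arange(n_bins)
    return fmin * 2.0 ** (k / bins_per_octave)

freqs = cqt_frequencies(n_bins=24, fmin=32.70, bins_per_octave=12)
print(freqs[12] / freqs[0])  # → 2.0 (one octave up doubles the frequency)
```

The variable-Q transform relaxes the constant-Q constraint at low frequencies; a hybrid scheme such as the paper's HCQT presumably combines such transforms, though the abstract does not give its exact construction.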
Abstract—Blur is an undesirable phenomenon that appears as image degradation. In blind restoration of barcode images, blur classification is highly desirable before applying any blur parameter estimation approach. A novel approach to classifying blur into motion, defocus, and co-existing blur categories is presented in this paper. The key idea involves extracting statistical features of the blur pattern in the frequency domain and designing a blur classification system with a feed-forward neural network.
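The frequency-domain feature extraction described above can be illustrated with a minimal sketch: motion blur leaves parallel dark stripes in the log magnitude spectrum, while defocus blur leaves concentric rings, so simple statistics of that spectrum can separate the classes. The particular features below (mean, standard deviation, spectral entropy) are illustrative assumptions, not necessarily the ones used in the paper:

```python
import numpy as np

def spectrum_features(img):
    """Statistical features of the log magnitude spectrum of an image,
    of the kind used to discriminate blur types before a
    feed-forward classifier."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    log_spec = np.log1p(spec)
    p = log_spec / log_spec.sum()              # normalize to a distribution
    entropy = -np.sum(p * np.log(p + 1e-12))   # spectral entropy
    return np.array([log_spec.mean(), log_spec.std(), entropy])

features = spectrum_features(np.random.rand(32, 32))
# A fixed-length feature vector, suitable as input to a small
# feed-forward neural network with one output per blur class.
```

Feature vectors computed this way from labelled blurred images would form the training set for the feed-forward network that assigns each image to the motion, defocus, or mixed-blur class.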