Obstructive sleep apnea (OSA) is the most common severe breathing disorder during sleep; it repeatedly interrupts breathing for longer than 10 s. Polysomnography (PSG) is the conventional approach to OSA detection, but it is costly and cumbersome. To overcome this limitation, satisfactory techniques for detecting sleep apnea from single-lead ECG recordings are under development. ECG-based methods for OSA analysis have been studied for many years; early work concentrated on hand-crafted features that depend entirely on the experience of human specialists. This study analyzes a novel approach for predicting sleep apnea based on a convolutional neural network (CNN) using a pre-trained AlexNet model. After filtering, each per-minute segment of the single-lead ECG recording is transformed with the continuous wavelet transform (CWT) to generate a 2D scalogram image. Finally, a deep-learning CNN is adopted to enhance classification performance. The efficiency of the proposed model is compared with previous methods that used the same datasets. The proposed CNN-based method achieves an accuracy of 86.22% with 90% sensitivity in per-minute-segment OSA classification. For per-recording OSA diagnosis, it correctly classifies all abnormal apneic recordings with 100% accuracy. The time-frequency-scalogram model also delivers excellent independent-validation performance relative to state-of-the-art OSA classification systems. Experimental results show that the proposed method produces excellent performance at low cost and with low complexity.
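The per-minute scalogram step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Morlet wavelet, sampling rate (100 Hz), scale range (1–64), and the simulated ECG-like signal are all illustrative assumptions.

```python
import numpy as np

def morlet_cwt_scalogram(segment, scales=None, w0=6.0):
    """Continuous wavelet transform of a 1-D signal with a discretized
    Morlet wavelet, returning the magnitude scalogram (scales x time).
    The defaults are illustrative, not values from the paper."""
    if scales is None:
        scales = np.arange(1, 65)  # 64 scales -> 64-row scalogram image
    n = len(segment)
    scalogram = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        m = int(min(10 * s, n))                 # wavelet support at scale s
        t = (np.arange(m) - m / 2) / s
        wavelet = np.exp(1j * w0 * t) * np.exp(-t ** 2 / 2) / np.sqrt(s)
        coef = np.convolve(segment, np.conj(wavelet)[::-1], mode="same")
        scalogram[i] = np.abs(coef)             # magnitude = scalogram pixel
    return scalogram

# One simulated 60 s single-lead "ECG" segment at an assumed fs = 100 Hz
fs = 100.0
t = np.arange(0, 60, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(len(t))
scal = morlet_cwt_scalogram(ecg_like)
print(scal.shape)  # (64, 6000): one 2D image per minute of ECG
```

In the study's pipeline, each such 2D scalogram would then be resized and fed to the pre-trained AlexNet-style CNN as an image.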
Owing to low physical activity, high-calorie intake, and unhealthy behavior, many people are affected by cardiac disorders; roughly one in four deaths is due to heart-related ailments. Hence, early diagnosis of heart disease is essential. Most approaches for automated classification of heart sounds require segmentation of the phonocardiogram (PCG) signal. The main aim of this study was to avoid the segmentation step and to assess the utility of short, unsegmented PCG recordings for accurate and detailed classification. Features were extracted from the first 5 s of each PCG recording using wavelet decomposition, the Hilbert transform, homomorphic filtering, and power spectral density (PSD). The extracted features were classified with a k-nearest-neighbors (KNN) classifier using Euclidean distance for different values of k, bootstrapping 50% of the PCG recordings for training and 50% for testing over 100 iterations. Overall accuracies of 100%, 85%, 80.95%, 81.4%, and 98.13% were achieved for five different datasets using KNN classifiers. Across all datasets combined, the classification performance is 90% accuracy with 93% sensitivity and 90% specificity. Classifying unsegmented PCG recordings therefore hinges on efficient feature extraction; this paper presents promising classification performance compared with state-of-the-art approaches, at short recording times and with low complexity.
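The classification scheme above (KNN with Euclidean distance, 50/50 train/test split) can be sketched as follows. The feature vectors, class separation, and k = 3 are toy assumptions standing in for the paper's PSD/Hilbert/wavelet features and tuned k.

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=3):
    """Classify each test vector by majority vote among its k nearest
    training vectors under Euclidean distance."""
    preds = []
    for x in test_X:
        d = np.linalg.norm(train_X - x, axis=1)      # Euclidean distances
        nearest = train_y[np.argsort(d)[:k]]         # labels of k nearest
        preds.append(np.bincount(nearest).argmax())  # majority vote
    return np.array(preds)

# Toy 4-D feature vectors for two well-separated classes
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(40, 4))
abnormal = rng.normal(3.0, 1.0, size=(40, 4))
X = np.vstack([normal, abnormal])
y = np.array([0] * 40 + [1] * 40)

# 50% of recordings for training, 50% for testing, as in the study
idx = rng.permutation(len(y))
tr, te = idx[:40], idx[40:]
preds = knn_predict(X[tr], y[tr], X[te], k=3)
acc = (preds == y[te]).mean()
print(f"accuracy: {acc:.2f}")
```

The study repeats such a random 50/50 split over 100 iterations and averages the resulting accuracies; a single split is shown here for brevity.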
These days, technological advances make it easy for miscreants to produce illegal copies of multimedia data, and techniques for copyright protection of freely available data are being developed continually. Digital watermarking is one such technique, in which copyright information (the watermark) is digitally embedded into the data to be protected; the two major approaches are embedding in the spatial domain and in the more robust transform domain. In this study, a method for watermarking digital images with biometric data is presented. Using a biometric instead of a traditional watermark increases the security of the image data. The biometric used here is the iris, which, after the retinal scan, is the most distinctive biometric; in terms of user-friendly acquisition, it comes after the fingerprint and facial scan. An iris biometric template is generated from images of the subject's eye. The discrete cosine transform (DCT) coefficients of the template are extracted and converted to a binary code. This binary code is embedded in the singular values of the host image's wavelet-domain coefficients: the host image first undergoes the discrete wavelet transform (DWT), followed by singular value decomposition (SVD) of the subband coefficients. The algorithm has been tested against popular attacks to analyze false recognition and false rejection of subjects.
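The DWT–SVD embedding stage can be sketched as follows. This is a simplified illustration under stated assumptions: a one-level Haar transform stands in for the paper's DWT, the additive rule on singular values and the strength `alpha` are hypothetical choices, and the 8x8 host and 4-bit "iris code" are toy data.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform, returning the
    LL, LH, HL, HH subbands (a stand-in for the DWT the paper uses)."""
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def embed_bits(ll, bits, alpha=0.05):
    """Embed a binary iris-code fragment into the singular values of a
    subband; `alpha` is an assumed embedding strength."""
    u, s, vt = np.linalg.svd(ll, full_matrices=False)
    k = min(len(bits), len(s))
    s_marked = s.copy()
    s_marked[:k] += alpha * s[:k] * np.asarray(bits[:k], dtype=float)
    return u @ np.diag(s_marked) @ vt   # rebuild the watermarked subband

host = np.arange(64, dtype=float).reshape(8, 8)   # toy host image
ll, lh, hl, hh = haar_dwt2(host)
bits = [1, 0, 1, 1]                               # toy iris-code bits
ll_marked = embed_bits(ll, bits)
print(np.allclose(ll, ll_marked))  # False: the LL band now carries the mark
```

Extraction would invert these steps: recompute the SVD of the marked subband and compare its singular values against those of the original to recover the bits.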