This paper presents a hybrid technique for the compression of ECG signals based on the Discrete Wavelet Transform (DWT) and on exploiting the correlation between signal samples. It combines DWT, Differential Pulse Code Modulation (DPCM), and run-length coding to compress different parts of the signal: lossless compression is applied to the clinically relevant parts, while lossy compression is used for the parts that are not clinically relevant. The proposed algorithm begins by segmenting the ECG signal into its main components (P-waves, QRS-complexes, T-waves, U-waves, and the isoelectric waves). The resulting waves are grouped into Region of Interest (RoI) and Non Region of Interest (NonRoI) parts, and lossless and lossy compression schemes are applied to the RoI and NonRoI parts, respectively. Ideally, the whole signal would be compressed losslessly, but in many applications this is not an option. Given a fixed bit budget, it therefore makes sense to spend more bits on the parts of the signal that belong to the RoI, reconstructing them with higher fidelity, while allowing the other parts to suffer larger distortion. For this purpose, the correlation between successive samples of the RoI part is exploited by adopting a DPCM approach, whereas the NonRoI part is compressed using DWT, thresholding, and coding techniques. The wavelet transform concentrates the signal energy into a small number of transform coefficients; compression is then achieved by selecting a subset of the most relevant coefficients, which are subsequently coded efficiently. Illustrative examples are given for thresholding based on an energy packing efficiency strategy, coding of the DWT coefficients, and data packetizing. The performance of the proposed algorithm is evaluated in terms of the compression ratio and the PRD distortion metric for the compression of 10 seconds of data extracted from records 100 and 117 of the MIT-BIH database. The results show that the proposed technique achieves higher compression ratios and lower PRD than other wavelet-based techniques. The principal advantages of the proposed approach are: 1) the deployment of different compression schemes for different ECG parts to reduce the correlation between consecutive signal samples; and 2) high compression ratios with acceptable reconstructed signal quality compared to recently published results.
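As a concrete illustration of the two branches described in this abstract, the following minimal Python sketch encodes an RoI segment with lossless DPCM and a NonRoI segment with DWT thresholding driven by energy packing efficiency. The wavelet (db4), decomposition level, and EPE target are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the two-branch RoI/NonRoI scheme (not the authors' code).
# Assumes the ECG segment is already split into RoI and NonRoI sample arrays.
import numpy as np
import pywt

def dpcm_encode(roi):
    """Lossless DPCM branch: transmit the first sample plus successive
    differences; the small differences are cheap to entropy-code."""
    diffs = np.diff(roi.astype(np.int32))
    return roi[0], diffs

def dpcm_decode(first, diffs):
    """Exact reconstruction of the RoI samples."""
    return np.concatenate([[first], first + np.cumsum(diffs)])

def dwt_threshold(nonroi, wavelet="db4", level=4, epe=0.99):
    """Lossy branch: keep only the largest DWT coefficients that hold
    `epe` of the total energy (energy-packing-efficiency thresholding)."""
    coeffs = pywt.wavedec(nonroi, wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)
    order = np.argsort(np.abs(flat))[::-1]            # coefficients by magnitude
    energy = np.cumsum(flat[order] ** 2) / np.sum(flat ** 2)
    keep = order[: np.searchsorted(energy, epe) + 1]  # smallest set reaching EPE
    mask = np.zeros_like(flat, dtype=bool)
    mask[keep] = True
    flat[~mask] = 0.0                                 # discard low-energy coefficients
    kept = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
    return pywt.waverec(kept, wavelet)
```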
In this paper, a new approach to human identification is introduced in which the use of the Vectorcardiogram (VCG) main loop as a biometric feature is investigated. The advantage of using the VCG over the Electrocardiogram (ECG) is that the shape of the VCG is independent of the heart rate. A test set of 550 VCGs, recorded from 22 healthy individuals at different times and over a wide range of heart-rate values, is used to validate the system. The proposed system uses only the main loop of the VCG for identification, via two different algorithms: in the first, coefficients from a specially developed descriptor (the Equal Distance descriptor) are used for identification, and in the second, selected Fourier Descriptor coefficients of the main loop are used as biometric data. In both methods, Feed Forward Neural Networks are used as classifiers, giving identification rates of 99.454% and 95%, respectively.
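The Fourier-descriptor variant can be sketched as follows: the resampled main loop is treated as a closed complex curve, and its low-order FFT coefficients are fed to a feed-forward network. The number of coefficients, the normalization, and the network size below are hypothetical choices, not the paper's parameters.

```python
# Illustrative sketch of a Fourier-descriptor pipeline for the VCG main loop
# (hypothetical parameters, not the paper's implementation).
import numpy as np
from sklearn.neural_network import MLPClassifier

def fourier_descriptors(loop_xy, n_coeffs=16):
    """loop_xy: (N, 2) points resampled along the closed VCG main loop."""
    z = loop_xy[:, 0] + 1j * loop_xy[:, 1]   # planar curve as complex signal
    Z = np.fft.fft(z)
    Z = Z / np.abs(Z[1])                     # scale-normalize by first harmonic
    feats = Z[1:n_coeffs + 1]                # drop DC term (translation invariance)
    return np.concatenate([feats.real, feats.imag])

# X: descriptor vectors, y: subject labels (one class per individual)
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
# subject = clf.predict(fourier_descriptors(new_loop)[None, :])
```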
In practical vector quantization (VQ) of images, the pixel block dimensions used are kept small to reduce the cost of computations. This in turn results in highly correlated blocks, and the corresponding VQ indices inherit this high correlation. The compression of the basic VQ can be increased by utilizing this high correlation of indices, inserting a lossless index compression stage after the VQ stage. In this paper, a new index compression algorithm is introduced. In this algorithm, the two-dimensional index map is divided into non-overlapping square blocks. Index usage in each of these blocks is employed to remap (renumber) the reduced number of actually used indices in that block, resulting in a reduced bit rate expressed in bits/pixel. The proposed algorithm reduces the average bit rate by a value depending on the codebook size, namely a reduction of about 32% for a codebook size of 64, down to about 23% for a codebook size of 1024. Moreover, this algorithm lends itself to being cascaded with other index compression algorithms, resulting in increased compression.
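The block-wise remapping idea can be illustrated with the following toy Python sketch; the block size and the returned side-information layout are assumptions for illustration, not the paper's exact bit-stream format.

```python
# Toy sketch of block-wise VQ index remapping (assumed details throughout).
import numpy as np

def remap_block(block):
    """Renumber only the codebook indices actually used inside one block.
    Returns (used global indices, local indices, local bit width)."""
    used = np.unique(block)                  # codebook indices present in block
    local = np.searchsorted(used, block)     # remap to 0..len(used)-1
    bits = max(1, int(np.ceil(np.log2(len(used)))))
    return used, local, bits                 # bits <= log2(codebook size)

def encode_index_map(index_map, bs=8):
    """Split the 2-D VQ index map into non-overlapping bs x bs blocks
    and remap each block independently."""
    h, w = index_map.shape
    return [remap_block(index_map[r:r + bs, c:c + bs])
            for r in range(0, h, bs) for c in range(0, w, bs)]
```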
Automated diagnosis and troubleshooting (TS) in the Radio Access Networks (RAN) of cellular systems are basic management tasks required to guarantee efficient use of network resources. In this paper, we investigate the use of machine learning techniques, namely stochastic methods and discriminant analysis, for automating these TS tasks. Our proposed framework is based on Hidden Markov Model (HMM), Principal Component Analysis (PCA), and Fisher Linear Discriminant (FLD) techniques. In a learning phase, symptoms relating to faults in the network are extracted from a network management system (NMS) and used to create a fault model. This model is then used to identify unknown faults using a nearest-neighbor classifier. Results reported for automated diagnosis on live RAN measurements illustrate the efficiency of the proposed TS framework and its importance to mobile network operators.
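A hedged sketch of such a diagnosis chain is shown below, combining PCA for dimensionality reduction, Fisher's linear discriminant for class separation, and a nearest-neighbor classifier. The HMM stage is omitted, and the component counts and feature layout are assumptions, not the paper's configuration.

```python
# Sketch of a PCA + FLD + nearest-neighbor fault classifier
# (assumed dimensions; KPI/symptom extraction from the NMS not shown).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# X_train: symptom vectors extracted from the NMS, y_train: known fault causes
model = make_pipeline(
    PCA(n_components=10),                 # compress correlated symptom features
    LinearDiscriminantAnalysis(),         # Fisher projection separating fault classes
    KNeighborsClassifier(n_neighbors=1),  # nearest-neighbor fault labeling
)
# model.fit(X_train, y_train)
# fault = model.predict(new_symptom_vector[None, :])
```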