This work presents a comparison of different approaches for the detection of murmurs from phonocardiographic signals. Taking into account the variability of phonocardiographic signals induced by valve disorders, three families of features were analyzed: (a) time-varying and time-frequency features; (b) perceptual features; and (c) fractal features. To improve the performance of the system, its accuracy was tested using several combinations of these families; in a second stage, the main components extracted from each family were combined. The contribution of each family of features was evaluated by means of a simple k-nearest neighbors classifier, showing that fractal features provide the best accuracy (97.17%), followed by time-varying and time-frequency features (95.28%) and perceptual features (88.7%). However, an accuracy of around 94% can be reached using only the two main features of the fractal family; considering the difficulties of the automatic intrabeat segmentation needed for spectral and perceptual features, this scheme becomes an interesting alternative. The conclusion is that fractal features were the most robust family of parameters (in the sense of accuracy versus computational load) for the automatic detection of murmurs. This work was carried out using a database that contains 164 phonocardiographic recordings (81 normal and 83 with murmurs), which was segmented to extract 360 representative individual beats (180 per class).
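The abstract does not specify which fractal estimators were used, so the sketch below uses Higuchi's fractal dimension purely as an illustrative stand-in for the fractal family, with a scikit-learn 1-nearest-neighbor classifier for the final decision; segmented beats and labels are assumed to be available.

```python
# Minimal sketch, assuming Higuchi's fractal dimension as the fractal feature and a
# 1-NN classifier; the paper's actual fractal estimators are not specified here.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def higuchi_fd(x, k_max=10):
    """Estimate the Higuchi fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, k_max + 1)
    log_lk = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # Curve length of the sub-sampled series, normalized to the original length
            length = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((len(idx) - 1) * k * k)
            lk.append(length)
        log_lk.append(np.log(np.mean(lk)))
    # L(k) ~ k^(-D): the slope of log L(k) versus log(1/k) estimates D
    return np.polyfit(np.log(1.0 / ks), log_lk, 1)[0]

# Hypothetical usage: `beats` is a list of segmented PCG beats, `labels` their classes
# features = np.array([[higuchi_fd(b), higuchi_fd(np.diff(b))] for b in beats])
# knn = KNeighborsClassifier(n_neighbors=1).fit(features, labels)
```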
Permutation Entropy (PE) is a time series complexity measure commonly used in a variety of contexts, with medicine being the prime example. In its general form, it requires three input parameters for its calculation: time series length N, embedding dimension m, and embedding delay τ. Inappropriate choices of these parameters may potentially lead to incorrect interpretations. However, there are no specific guidelines for an optimal selection of N, m, or τ, only general recommendations such as N ≫ m!, τ = 1, or m = 3, …, 7. This paper deals specifically with the practical implications of N ≫ m!, since long time series are often not available, or are non-stationary, and other preliminary results suggest that low N values do not necessarily invalidate the usefulness of PE. Our study analyses the PE variation as a function of the series length N and embedding dimension m in the context of a diverse experimental set, both synthetic (random, spike, or logistic-model time series) and real-world (climatology, seismic, financial, or biomedical time series), and the classification performance achieved with varying N and m. The results seem to indicate that shorter lengths than those suggested by N ≫ m! are sufficient for a stable PE calculation, and that even very short time series can be robustly classified based on PE measurements before the stability point is reached. This may be due to the fact that there are forbidden patterns in chaotic time series, not all patterns are equally informative, and differences among classes are already apparent at very short lengths.
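As a concrete reference for the three parameters discussed above, the following is a minimal permutation entropy sketch in which N is the series length, m the embedding dimension, and τ the embedding delay; normalization by log₂(m!) is one common convention, not necessarily the one used in the paper.

```python
# Minimal sketch of permutation entropy; N is the series length, m the embedding
# dimension, and tau the embedding delay. Normalization by log2(m!) is one common choice.
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1, normalize=True):
    """Permutation entropy of a 1-D series for embedding dimension m and delay tau."""
    x = np.asarray(x, dtype=float)
    n_vectors = len(x) - (m - 1) * tau
    if n_vectors <= 0:
        raise ValueError("Series too short for the chosen m and tau.")
    counts = {}
    for i in range(n_vectors):
        window = x[i:i + (m - 1) * tau + 1:tau]
        pattern = tuple(np.argsort(window, kind="stable"))  # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n_vectors
    h = -np.sum(p * np.log2(p))
    return h / np.log2(factorial(m)) if normalize else h

# Example: a random series yields PE close to 1, a monotone ramp yields PE = 0
# print(permutation_entropy(np.random.rand(200), m=3, tau=1))
# print(permutation_entropy(np.arange(200.0), m=3, tau=1))
```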
Various methods and specialized software programs are available for processing two-dimensional gel electrophoresis (2-DGE) images. However, due to the anomalies present in these images, a reliable, automated, and highly reproducible system for 2-DGE image analysis has still not been achieved. The most common anomalies found in 2-DGE images include vertical and horizontal streaking, fuzzy spots, and background noise, which greatly complicate computational analysis. In this paper, we review the preprocessing techniques applied to 2-DGE images for noise reduction, intensity normalization, and background correction. We also present a quantitative comparison of non-linear filtering techniques applied to synthetic gel images, analyzing the performance of the filters under specific conditions. Synthetic proteins were modeled as two-dimensional Gaussian distributions with adjustable parameters for changing the size, intensity, and degradation. Three types of noise were added to the images: Gaussian, Rayleigh, and exponential, with signal-to-noise ratios (SNRs) ranging from 8 to 20 decibels (dB). We compared the performance of wavelet, contourlet, total variation (TV), and wavelet-total variation (WTTV) techniques using SNR and spot efficiency as evaluation parameters. In terms of spot efficiency, contourlet and TV were more sensitive to noise than wavelet and WTTV. Wavelet worked best for images with SNRs from 10 to 20 dB, whereas WTTV performed better at high noise levels. In terms of SNR, wavelet also presented the best performance for any level of Gaussian noise and for low levels (14–20 dB) of Rayleigh and exponential noise. Finally, the performance of the non-linear filtering techniques was evaluated using a real 2-DGE image with previously identified proteins marked. Wavelet achieved the best detection rate for the real image.
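The synthetic spot model described above can be sketched as follows: a spot rendered as a two-dimensional Gaussian with adjustable size and intensity, degraded by Gaussian, Rayleigh, or exponential noise scaled (approximately) to a target SNR. All parameter names are illustrative and not taken from the paper.

```python
# Minimal sketch of the synthetic spot model: a 2-D Gaussian spot with adjustable size
# and intensity, plus additive noise scaled (approximately) to a target SNR in dB.
import numpy as np

def gaussian_spot(shape, center, sigma, intensity):
    """Synthetic 2-DGE protein spot modeled as a two-dimensional Gaussian."""
    y, x = np.indices(shape)
    cy, cx = center
    return intensity * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))

def add_noise(image, snr_db, kind="gaussian", rng=None):
    """Add Gaussian, Rayleigh, or exponential noise at an approximate target SNR."""
    rng = rng or np.random.default_rng(0)
    noise_power = np.mean(image ** 2) / (10.0 ** (snr_db / 10.0))
    scale = np.sqrt(noise_power)
    if kind == "gaussian":
        noise = rng.normal(0.0, scale, image.shape)
    elif kind == "rayleigh":
        noise = rng.rayleigh(scale, image.shape) - scale * np.sqrt(np.pi / 2.0)  # roughly zero-mean
    else:  # exponential
        noise = rng.exponential(scale, image.shape) - scale                      # roughly zero-mean
    return image + noise

# Example: a 64x64 gel patch with one spot, degraded with Rayleigh noise at 14 dB
# gel = gaussian_spot((64, 64), center=(32, 32), sigma=4.0, intensity=1.0)
# noisy = add_noise(gel, snr_db=14, kind="rayleigh")
```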
In recent years, many studies have examined filters for eliminating or reducing speckle noise, which is inherent to ultrasound images, in order to improve the metrological evaluation of their biomedical applications. In medical ultrasound images, this noise can produce uncertainty in the diagnosis because it obscures details, such as boundaries and edges, that should be preserved. Most algorithms can eliminate speckle noise, but they do not consider the preservation of these details. This paper describes, in detail, 27 techniques that mainly focus on the smoothing or elimination of speckle noise in medical ultrasound images. The aim of this study is to highlight the importance of improving such smoothing and elimination, which directly affects several processes (such as the detection of regions of interest) described in other articles examined in this study. Furthermore, the description of this collection of techniques facilitates the implementation of evaluations and research with a more specific scope. This study initially covers several classical methods, such as spatial filtering, diffusion filtering, and wavelet filtering. Subsequently, it describes recent techniques in the field of machine learning focused on deep learning, which are not yet widely known but are highly relevant, along with some modern and hybrid models in the field of speckle-noise filtering. Finally, five Full-Reference (FR) distortion metrics, common in filter evaluation processes, are detailed, along with a compensation methodology between FR and Non-Reference (NR) metrics, which can provide greater certainty in the classification of the filters by considering information about their behavior in terms of the perceptual quality captured by NR metrics.
Index Terms: diffusion filtering, image pre-processing, metrological evaluation, spatial filtering, speckle noise, ultrasound images, wavelet filtering.
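As a concrete illustration of the filter and metric families reviewed above, the sketch below shows one classical spatial speckle filter (an adaptive Lee-style filter) and one Full-Reference metric (PSNR); neither is claimed to be among the 27 techniques or five metrics covered by the paper.

```python
# Minimal sketch of one classical spatial speckle filter (an adaptive Lee-style filter)
# and one Full-Reference metric (PSNR), shown only as examples of the families reviewed.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7):
    """Blend each pixel with its local mean, weighted by the local-to-global variance ratio."""
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    noise_var = np.mean(var)                      # crude global estimate of the speckle variance
    weight = var / (var + noise_var + 1e-12)      # high local variance (edges) -> keep detail
    return mean + weight * (img - mean)

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio between reference and filtered images, in dB."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# Example with multiplicative speckle on a synthetic phantom
# rng = np.random.default_rng(0)
# clean = np.zeros((128, 128)); clean[32:96, 32:96] = 0.8
# speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
# print(psnr(clean, lee_filter(speckled)))
```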
Soft metrology has been defined as a set of measurement techniques and models that allow the objective quantification of properties usually determined by human perception, such as smell, sound, or taste. The development of a soft metrology system requires the measurement of physical parameters and the construction of a model that correlates them with the variables to be quantified. This paper presents a review of indirect measurement with the aim of understanding the state of development in this area, as well as the current challenges and opportunities, and proposes to gather the different designations under the term soft metrology, broadening its definition. For this purpose, the literature on indirect measurement techniques and systems has been reviewed, encompassing recent as well as a few older key documents, to present a timeline of development and to map out application contexts and designations. As machine learning techniques have been extensively used in indirect measurement strategies, this review highlights them and also describes the state of the art regarding the determination of uncertainty. This study does not delve into developments and applications in the human and social sciences, although the proposed definition considers the use this term has had in those areas.
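The indirect-measurement idea at the core of soft metrology can be sketched minimally: measured physical parameters are mapped to a perceptual target by a learned model, and the residual spread gives only a first, crude uncertainty indicator. The features, target, and model below are hypothetical placeholders, not taken from the review.

```python
# Minimal sketch of indirect measurement: physical parameters mapped to a perceptual
# target by a learned model. Features, target, and model are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # e.g., hypothetical acoustic descriptors per sample
y = X @ np.array([0.8, -0.4, 0.2, 0.0, 0.1]) + 0.1 * rng.normal(size=200)  # perceived score

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:150], y[:150])
pred = model.predict(X[150:])

# The spread of the residuals is only a first, crude uncertainty indicator; rigorous
# uncertainty quantification is the open challenge highlighted in the review.
residual_std = float(np.std(pred - y[150:]))
```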
Analysis of electromyography (EMG) signals is a necessary step in the diagnosis of neuromuscular diseases. Automatic classification systems can assist specialists and optimize the diagnostic process by applying time-frequency analysis, fuzzy entropy, and neural networks to EMG signals in order to identify the presence of characteristics of a specific disorder, such as myopathy or amyotrophic lateral sclerosis. The performance of a decision support system depends on three important issues: the correct estimation of features from the EMG signal, the proper criteria for relevance analysis, and the learning process of the classification algorithm. In this paper, the Discrete Wavelet Transform and Fuzzy Entropy are used to extract and select features from EMG signals, whereas Artificial Neural Networks produce the recognition result. The database used in this study is publicly available from EMGLAB, a website for sharing data, software, and information related to EMG decomposition. Results obtained by combining these techniques show an accuracy of around 98% in classifying EMG signals into three classes: healthy, myopathy, and amyotrophic lateral sclerosis.
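A minimal sketch of this pipeline is shown below: discrete wavelet decomposition statistics as features and a small neural network classifier. The wavelet, decomposition level, and sub-band statistics are illustrative assumptions, and the fuzzy-entropy relevance analysis is omitted for brevity.

```python
# Minimal sketch: DWT sub-band statistics as features and a small neural network
# classifier. Wavelet, level, and statistics are illustrative assumptions; the
# fuzzy-entropy relevance analysis is omitted.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def dwt_features(signal, wavelet="db4", level=4):
    """Per-sub-band statistics from a multilevel discrete wavelet decomposition."""
    coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)]  # activity per sub-band
    return np.array(feats)

# Hypothetical usage with segmented EMG frames and labels
# (0 = healthy, 1 = myopathy, 2 = ALS), e.g. taken from the EMGLAB recordings:
# X = np.array([dwt_features(frame) for frame in emg_frames])
# clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000).fit(X, labels)
```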
An effective data representation methodology for high-dimensional feature spaces is presented, which allows a better interpretation of the underlying physiological phenomena (namely, cardiac behavior related to cardiovascular diseases). It is based on search criteria over a feature set that not only increase the detection capability of ischemic pathologies but also connect these features with the physiological representation of the ECG. The proposed dimension reduction scheme consists of three levels: projection, interpretation, and visualization. First, a hybrid algorithm is described that projects the multidimensional data to a lower-dimensional space, grouping the features that contribute similarly to the covariance reconstruction in order to find information of clinical relevance in the initial training space. Next, a variable selection algorithm is provided that further reduces the dimension, taking into account only the variables that offer greater class separability. Finally, the selected feature set is projected to a 2-D space in order to verify the performance of the suggested dimension reduction algorithm in terms of its discrimination capability for ischemia detection. The ECG recordings used in this study are from the European ST-T database and from the Universidad Nacional de Colombia database. In both cases, over 99% feature reduction was obtained, and classification precision was over 99% using a five-nearest-neighbor (5-NN) classifier.
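The three-level scheme can be sketched as follows, with PCA standing in for the paper's hybrid projection algorithm and a simple Fisher-style score for the variable selection step; both are assumptions for illustration only.

```python
# Minimal sketch of the three levels: projection, selection of the most separable
# variables, and a 2-D space scored with a 5-NN classifier. PCA and a Fisher-style
# score stand in for the paper's hybrid algorithms.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fisher_score(X, y):
    """Per-feature ratio of between-class to within-class variance."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = sum(np.sum(y == c) * (X[y == c].mean(axis=0) - overall) ** 2 for c in classes)
    within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0) for c in classes)
    return between / (within + 1e-12)

# Hypothetical usage: X holds the ECG-derived features, y the ischemia labels
# Z = PCA(n_components=10).fit_transform(X)             # level 1: projection
# keep = np.argsort(fisher_score(Z, y))[-2:]            # level 2: variable selection
# acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), Z[:, keep], y).mean()  # level 3: 2-D check
```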