A critical point in any definition of entropy is the selection of the parameters employed to obtain an estimate in practice. We propose a new definition of entropy aiming to reduce the significance of this selection. We call the new definition Bubble Entropy. Bubble Entropy is based on permutation entropy, where the vectors in the embedding space are ranked. We use the bubble sort algorithm for the ordering procedure and count instead the number of swaps performed for each vector. Doing so, we create a more coarse-grained distribution and then compute the entropy of this distribution. Experimental results with both real and synthetic HRV signals showed that bubble entropy presents remarkable stability and exhibits increased descriptive and discriminating power compared to all other definitions, including the most popular ones. The definition proposed is almost free of parameters. The most common ones are the scale factor r and the embedding dimension m. In our definition, the scale factor is totally eliminated and the importance of m is significantly reduced. The proposed method presents increased stability and discriminating power. After the extensive use of some entropy measures in physiological signals, typical values for their parameters have been suggested, or at least, widely used. However, the parameters are still there, application and dataset dependent, influencing the computed value and affecting the descriptive power. Reducing their significance or eliminating them alleviates the problem, decoupling the method from the data and the application, and eliminating subjective factors.
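The swap-counting step described above can be sketched in a few lines of Python. This is a minimal illustration of the core idea only, assuming bubble sort on each embedding vector; the published definition additionally compares consecutive embedding dimensions and normalizes, which is omitted here, and the function names are ours:

```python
from collections import Counter
from math import log

def bubble_swap_count(vec):
    """Count the swaps bubble sort performs to order the vector ascending."""
    v = list(vec)
    swaps = 0
    for i in range(len(v)):
        for j in range(len(v) - 1 - i):
            if v[j] > v[j + 1]:
                v[j], v[j + 1] = v[j + 1], v[j]
                swaps += 1
    return swaps

def swap_entropy(series, m):
    """Shannon entropy of the swap-count distribution over all
    m-dimensional embedding vectors of the series (coarse-grained
    compared to ranking the full permutation)."""
    counts = Counter(
        bubble_swap_count(series[i:i + m])
        for i in range(len(series) - m + 1)
    )
    n = sum(counts.values())
    return -sum((c / n) * log(c / n) for c in counts.values())
```

Because only the number of swaps is kept (at most m(m-1)/2 + 1 possible values) rather than the full ordering (m! possible permutations), the resulting distribution is much coarser, which is what reduces the influence of the embedding dimension m.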
Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. On the other hand, it is a computationally expensive method which may require a large amount of time when used in long series or with a large number of signals. The computationally intensive part is the similarity check between points in m-dimensional space. In this paper, we propose new algorithms or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first one is an extension of the kd-trees algorithm, customized for Sample Entropy. The second one is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to present even faster results. The last one is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, directly resulting from the definition of Sample Entropy, in order to give a clear image of the speedups achieved. All algorithms assume the classical approach to the metric, in which the maximum norm is used. The key idea of the last two suggested algorithms is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary to refer to those comparisons for which we know a priori that they will fail the similarity check. The number of avoided comparisons is proved to be very large, resulting in an analogously large reduction of execution time, making them the fastest algorithms available today for the computation of Sample Entropy.
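The straightforward implementation that the faster algorithms are compared against can be sketched as follows. This is a hedged illustration of the baseline definition with the max-norm similarity check, not one of the three optimized algorithms from the paper; the only optimization shown is aborting a single comparison as soon as one coordinate pair exceeds the tolerance r:

```python
from math import log

def sample_entropy(x, m, r):
    """Baseline SampEn: count template pairs of length m (B) and m+1 (A)
    that match under the maximum norm with tolerance r, then return
    -ln(A/B). The inner loop breaks out of a comparison at the first
    coordinate pair that differs by more than r."""
    n = len(x)
    A = B = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            # max-norm check on the first m components, aborting early
            for k in range(m):
                if abs(x[i + k] - x[j + k]) > r:
                    break
            else:
                B += 1
                # extend the match by one more component
                if abs(x[i + m] - x[j + m]) <= r:
                    A += 1
    return -log(A / B) if A and B else float("inf")
```

The paper's faster algorithms go further: rather than aborting one comparison at a time, they detect a priori that whole groups of comparisons must fail and skip them entirely, while still returning exactly this value.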
This work considers automatic sleep stage classification based on heart rate variability (HRV) analysis, with a focus on distinguishing wakefulness (WAKE) from sleep and rapid eye movement (REM) from non-REM (NREM) sleep. A set of 20 automatically annotated one-night polysomnographic recordings was considered, and artificial neural networks were selected for classification. For each inter-heartbeat (RR) series, besides features previously presented in the literature, we introduced a set of four parameters related to signal regularity. RR series of three different lengths were considered (corresponding to 2, 6, and 10 successive epochs, 30 s each, in the same sleep stage). Two sets of only four features captured 99% of the data variance in each classification problem, and both of them contained one of the new regularity features proposed. The accuracy of classification for REM versus NREM (68.4%, 2 epochs; 83.8%, 10 epochs) was higher than when distinguishing WAKE versus SLEEP (67.6%, 2 epochs; 71.3%, 10 epochs). Also, the reliability parameter (Cohen's Kappa) was higher (0.68 and 0.45, respectively). Sleep staging based on HRV is still less precise than other staging methods, which employ a larger variety of signals collected during polysomnographic studies. However, cheap and unobtrusive HRV-only sleep classification proved sufficiently precise for a wide range of applications.
Falls in the elderly are a major worldwide problem because they can lead to severe injuries and even sudden death. Fall risk prediction would enable rapid intervention and reduce the burden on healthcare systems. Such prediction is currently performed by means of clinical scales. Among them, the Tinetti Scale is one of the best established and most used in clinical practice. In this work, we propose an automatic method to assess Tinetti scores using a wearable accelerometer. The balance and gait characteristics of 13 elderly subjects were scored by an expert clinician while the subjects performed 8 different motor tasks according to the Tinetti Scale protocol. Two statistical analyses were performed. First, a linear regression study was carried out between the Tinetti scores and 8 features (one feature for each task). Second, the generalization quality of the regression model was assessed using a Leave-One-Subject-Out approach. The multiple linear regression provided a high correlation between the Tinetti scores and the proposed features (adjusted R² = 0.948; p = 0.003). Moreover, six of the eight features contributed statistically significantly to the prediction of the scores (p < 0.05). When testing the generalization capability of the model, a moderate linear correlation was obtained (R² = 0.67; p < 0.05). The results suggest that the automatic method might be a promising tool to assess the falling risk of older individuals.
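The Leave-One-Subject-Out validation scheme can be sketched as follows. For brevity this sketch fits a single feature, whereas the study regresses Tinetti scores on eight task features; the function names are illustrative, not the authors' code:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (one feature only)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def loso_r2(xs, ys):
    """Leave-One-Subject-Out: hold each subject out in turn, fit the
    model on the remaining subjects, predict the held-out score, and
    report R^2 over all held-out predictions."""
    preds = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        preds.append(a * xs[i] + b)
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot
```

Because every prediction comes from a model that never saw the held-out subject, the resulting R² (0.67 in the study, versus 0.948 in-sample) is a more honest estimate of how the method would generalize to a new patient.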
Handwriting recognition has drawn profound attention for decades due to its numerous potential applications in real life. Research on unconstrained handwriting recognition in some languages has achieved attractive advancement, but it lags behind for Bengali, even though it is a major language spoken by about 230 million people in the Indian subcontinent and the first and official language of Bangladesh. Recently, convolutional neural networks (CNNs) have been reported to achieve high accuracy in pattern recognition and computer vision problems. The main purpose of this study is to provide a CNN architecture that improves the accuracy of handwritten Bengali numerals recognition (HBNR) and to compare its performance with existing ones. We propose a new CNN architecture, VGG-11M, which improves an existing one (VGG-11). The normalized and rescaled images of each numeral were augmented by different transformation operations to increase the number of training samples and to add diversity to the dataset. The images were then used to train the proposed VGG-11M model. The recognition accuracy of the developed system was tested on both the training and test sets of three publicly available handwritten Bengali numeral databases at different resolutions. Finally, the performance of the model was compared with four other architectures (LeNet-5, ResNet-50, VGG-11, and VGG-16). The highest accuracies, 99.80%, 99.66%, and 99.25%, were obtained using the proposed architecture on the test sets of the ISI, CMATERDB, and NUMTADB datasets, respectively, at a resolution of 32 × 32. The proposed VGG-11M outperformed the existing CNN architectures on HBNR.
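The augmentation-by-transformation step mentioned above can be sketched with the simplest such operation, a pixel translation. This is an illustrative assumption: the study does not list its exact transformations, and the helper names here are hypothetical:

```python
def shift(img, dy, dx, fill=0):
    """Translate a grayscale image (list of rows) by (dy, dx) pixels,
    filling vacated positions with the background value."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = img[y][x]
    return out

def augment(img, shifts=((0, 1), (0, -1), (1, 0), (-1, 0))):
    """Return the original image plus one translated copy per shift,
    multiplying the number of training samples per numeral."""
    return [img] + [shift(img, dy, dx) for dy, dx in shifts]
```

Small label-preserving transformations like these (translations, slight rotations, rescalings) add diversity without changing which numeral an image depicts, which is why they are safe to apply before training.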