Horizontal visibility graph (HVG) motifs have recently been introduced to analyze the dynamical information encoded in biological signals. However, the result of the analysis depends strongly on the selected motif window size. Sizes ranging from 3 to 5 have been used previously, but such small windows cannot cope with the complexity of biological systems and often fail to extract salient features of the encoded information. A larger window size increases the total number of possible motifs, spreading the statistics across too many motifs; each individual motif then carries too little information, making it even harder to reliably detect the system dynamics. To resolve this problem, we group the motifs by their number of edges. Using the grouped motifs, we propose grouped horizontal visibility entropy (GHVE), which quantifies complexity from the probability distribution of observations within these groups. We apply GHVE to simulated white and 1/f noise. The results reveal that 1/f noise time series exhibit higher complexity than white noise time series, indicating that 1/f noise is structurally more complex than white Gaussian noise. We also apply the method to interbeat interval time series. The results show that the proposed GHVE measure distinguishes healthy from pathological subjects more accurately than its non-grouped HVG counterpart. It is, therefore, better suited to detect changes in aging, disease severity, and activity levels (sleep and wake periods).

INDEX TERMS: Complex network, HVG motifs, HRV analysis, time series data.
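The grouping idea above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes the standard HVG construction (two samples are linked when every sample between them is strictly lower than both), takes a motif to be the HVG subgraph induced by a window of `w` consecutive nodes, and defines groups by the edge count within that subgraph. The window size `w = 4` and the use of natural-log Shannon entropy are illustrative choices.

```python
import numpy as np

def hvg_edges(x):
    """Horizontal visibility graph: nodes i < j are linked when every
    intermediate sample is strictly lower than both x[i] and x[j]."""
    n = len(x)
    edges = set()
    for i in range(n - 1):
        edges.add((i, i + 1))          # consecutive samples always see each other
        top = x[i + 1]                 # running max of intermediate samples
        for j in range(i + 2, n):
            if x[i] > top and x[j] > top:
                edges.add((i, j))
            top = max(top, x[j])
            if top >= x[i]:            # an intermediate now blocks all later nodes
                break
    return edges

def ghve(x, w=4):
    """Grouped horizontal visibility entropy (sketch): Shannon entropy of
    the distribution of windows over edge-count groups."""
    edges = hvg_edges(x)
    counts = {}
    for s in range(len(x) - w + 1):
        nodes = range(s, s + w)
        m = sum(1 for i in nodes for j in nodes
                if i < j and (i, j) in edges)   # edges inside the window motif
        counts[m] = counts.get(m, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum())
```

For `w = 4` the induced motif always contains the three consecutive-node edges and, being outerplanar, at most `2w - 3 = 5` edges, so there are at most three groups and the entropy is bounded by `log 3`.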
Classification of interbeat interval time series, which fluctuate in an irregular and complex manner, is very challenging. Typically, entropy methods are employed to quantify the complexity of a time series for classification. Traditional entropy methods focus on the frequency distribution of all observations in a time series; this requires a relatively long series of at least a couple of thousand data points, which limits their use in practical applications. These methods are also sensitive to parameter settings. In this paper, we propose a conceptually new approach called attention entropy, which pays attention only to the key observations. Instead of counting the frequency of all observations, it analyzes the frequency distribution of the intervals between the key observations in a time series. The advantages of attention entropy are that it has no parameters to tune, is robust to the time-series length, and requires only linear time to compute. Experiments show that it outperforms fourteen state-of-the-art entropy methods on real-world datasets, achieving an average classification accuracy of AUC = 0.71, versus AUC = 0.62 for the second-best method, multiscale entropy, when classifying four groups of people with a time-series length of 100.
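The core computation can be sketched as follows. This is a simplified illustration under one assumed definition of "key observation" (local maxima); the abstract itself does not fix the key-point type, so treat the choice as a placeholder. The sketch keeps the properties claimed above: no tunable parameters and a single linear pass.

```python
import numpy as np

def attention_entropy(x):
    """Sketch of attention entropy: Shannon entropy of the distribution
    of intervals between key observations (here, local maxima)."""
    keys = [i for i in range(1, len(x) - 1)
            if x[i] > x[i - 1] and x[i] > x[i + 1]]   # key points: local peaks
    if len(keys) < 2:
        return 0.0                                     # no intervals to analyze
    intervals = np.diff(keys)                          # gaps between key points
    _, counts = np.unique(intervals, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())
```

A strictly periodic signal yields identical intervals and hence zero entropy, while irregular fluctuation spreads the interval distribution and raises the value.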
This paper presents a new algorithm for denoising images corrupted by random-valued impulse noise (RVIN). It employs a switching approach that identifies the noisy pixels in a first stage and then estimates their intensity values to restore them. Local statistics of textons in distinct orientations of the sliding window are exploited to identify the corrupted pixels iteratively, using an adaptive threshold range. Textons are formed using an isometric grid of minimum local distance, which effectively preserves the texture and edge pixels of an image. At the noise-filtering stage, fuzzy rules select noise-free pixels from the proposed tridirectional pixels to estimate the intensity values of the identified corrupted pixels. The performance of the proposed denoising algorithm is evaluated on a variety of standard gray-scale images under various RVIN intensities, in comparison with state-of-the-art denoising methods. The algorithm also shows robust denoising and restoration power on biomedical images such as MRI, X-ray, and CT scans. Extensive simulation results, based on both quantitative measures and visual comparisons, demonstrate the superior performance of the proposed algorithm across noise intensities.
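To make the two-stage switching paradigm concrete, here is a deliberately simplified sketch. It is not the texton/fuzzy method of the paper: detection is replaced by a plain deviation-from-median rule with an assumed fixed threshold, and restoration by the neighborhood median. Only the switching structure (detect first, then restore only flagged pixels) carries over.

```python
import numpy as np

def switching_median(img, thresh=40.0):
    """Simplified two-stage switching filter (illustration only).
    Stage 1: flag a pixel as impulse noise when it deviates from its
    3x3 neighborhood median by more than `thresh` (assumed value).
    Stage 2: replace only the flagged pixels with that median, so
    clean texture and edge pixels pass through unmodified."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):          # borders left untouched in this sketch
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            med = np.median(win)
            if abs(img[y, x] - med) > thresh:   # stage 1: detection
                out[y, x] = med                 # stage 2: restoration
    return out
```

Because restoration is applied only where the detector fires, uncorrupted pixels are copied verbatim, which is the property that distinguishes switching filters from a blanket median filter.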