How the human brain controls hand movements to carry out different tasks is still debated. The concept of synergy has been proposed to indicate functional modules that may simplify the control of hand postures by simultaneously recruiting sets of muscles and joints. However, whether and to what extent synergic hand postures are encoded as such at a cortical level remains unknown. Here, we combined kinematic, electromyographic, and brain activity measures obtained by functional magnetic resonance imaging while subjects performed a variety of movements towards virtual objects. Hand postural information, encoded through kinematic synergies, was represented in cortical areas devoted to hand motor control and successfully discriminated individual grasping movements, significantly outperforming alternative somatotopic or muscle-based models. Importantly, hand postural synergies were predicted by neural activation patterns within primary motor cortex. These findings support a novel cortical organization for hand movement control and open potential applications for brain-computer interfaces and neuroprostheses.
DOI: http://dx.doi.org/10.7554/eLife.13420.001
Bipolar disorders are characterized by unpredictable behavior, resulting in depressive, hypomanic, or manic episodes alternating with euthymic states. A multi-parametric approach can be followed to estimate mood states by integrating information from different physiological signals and from the analysis of voice. In this work we propose an algorithm to estimate speech features from running speech with the aim of characterizing the mood state of bipolar patients. The algorithm is based on automatic segmentation of the speech signal to detect voiced segments, and on a spectral matching approach to estimate pitch and pitch changes. In particular, the average pitch, jitter, and pitch standard deviation within each voiced segment are estimated. The performance of the algorithm is evaluated on a speech database that includes an electroglottographic signal. A preliminary analysis of subjects affected by bipolar disorders is performed and the results are discussed.
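The abstract names the per-segment statistics (average pitch, jitter, pitch standard deviation) but not their exact definitions or the spectral-matching estimator. A minimal numpy sketch, assuming relative jitter is the mean absolute difference between consecutive pitch periods normalized by the mean period, and using a crude autocorrelation F0 estimator as a stand-in for the paper's spectral-matching approach:

```python
import numpy as np

def frame_f0_autocorr(frame, fs, fmin=60.0, fmax=400.0):
    """Crude autocorrelation F0 estimate for one frame (a stand-in
    for the spectral-matching approach, which the abstract does not
    specify). Searches for the autocorrelation peak in the lag range
    corresponding to [fmin, fmax]."""
    frame = np.asarray(frame, dtype=float)
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

def segment_pitch_features(f0_track):
    """Summary statistics for one voiced segment.

    f0_track : per-frame F0 estimates in Hz (voiced frames only).
    Returns mean F0, F0 standard deviation, and relative jitter
    (mean absolute difference of consecutive pitch periods divided
    by the mean period)."""
    f0 = np.asarray(f0_track, dtype=float)
    periods = 1.0 / f0                       # pitch periods in seconds
    if len(periods) < 2:
        jitter = 0.0
    else:
        jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    return f0.mean(), f0.std(), jitter
```

The jitter normalization follows the common "local jitter" convention; other conventions (e.g. absolute jitter in seconds) would differ only by the denominator.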
Bipolar disorders are characterized by mood swings ranging from mania to depression. A system that could monitor and eventually predict these changes would be useful to improve therapy and avoid dangerous events. Speech may convey relevant information about a subject's mood, and there is growing interest in studying its changes in the presence of mood disorders. In this work we present an automatic method to characterize fundamental frequency (F0) dynamics in the voiced parts of syllables. The method segments voiced sounds from running speech samples and estimates two categories of features. The first category is borrowed from Taylor's Tilt intonational model; however, the meaning of the proposed features differs from that of Taylor's, since the former are estimated from all voiced segments without performing any analysis of intonation. The second category of features takes into account the speed of change of F0. The proposed features are first estimated from an emotional speech database. Then, an analysis of speech samples acquired from eleven psychiatric patients experiencing different mood states, and from eighteen healthy control subjects, is introduced. Subjects performed a text reading task and a picture commenting task. The results of the analysis on the emotional speech database indicate that the proposed features can discriminate between high- and low-arousal emotions; this was verified both at the single-subject and the group level. An intra-subject analysis performed on bipolar patients highlighted significant changes of the features with different mood states, although this was not observed for all subjects. The directions of the changes estimated for different patients experiencing the same mood swing were not coherent and were task-dependent.
Interestingly, a single-subject analysis performed on healthy controls and on bipolar patients recorded twice with the same mood label resulted in very few significant differences. In particular, very good specificity was observed for the Taylor-inspired features and for a subset of the second category of features, strengthening the significance of the results obtained with patients. Even if the number of enrolled patients is small, this work suggests that the proposed features might contribute substantially to the demanding research field of speech-based mood classifiers. Moreover, the results presented here indicate that a model of speech changes in bipolar patients might be subject-specific, and that a richer characterization of subject status could be necessary to explain the observed variability.
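The abstract says the first feature category is inspired by Taylor's Tilt model but computed per voiced segment without intonational analysis; the exact feature definitions are not given. A sketch of one plausible formulation, assuming the segment's F0 contour is split at its maximum into a rise and a fall, with tilt indices in [-1, 1] comparing their amplitudes and durations (these choices are illustrative assumptions, not the paper's definitions):

```python
import numpy as np

def tilt_features(f0_contour, frame_dt=0.01):
    """Tilt-inspired features for one voiced segment's F0 contour.

    The contour is split at its F0 maximum into a rise and a fall.
    tilt_amp and tilt_dur lie in [-1, 1]: +1 means a pure rise,
    -1 a pure fall, 0 a symmetric rise-fall shape."""
    f0 = np.asarray(f0_contour, dtype=float)
    k = int(np.argmax(f0))
    a_rise = f0[k] - f0[0]                 # rise amplitude (Hz)
    a_fall = f0[k] - f0[-1]                # fall amplitude (Hz)
    d_rise = k * frame_dt                  # rise duration (s)
    d_fall = (len(f0) - 1 - k) * frame_dt  # fall duration (s)
    eps = 1e-9                             # guards flat contours
    tilt_amp = (abs(a_rise) - abs(a_fall)) / (abs(a_rise) + abs(a_fall) + eps)
    tilt_dur = (d_rise - d_fall) / (d_rise + d_fall + eps)
    return tilt_amp, tilt_dur, 0.5 * (tilt_amp + tilt_dur)
```

In Taylor's original model these quantities describe a detected intonational event; here, as in the abstract, they would simply be computed over every voiced segment.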
Bipolar disorder is one of the most common mood disorders, characterized by large and invalidating mood swings. Several projects focus on the development of decision support systems that monitor and advise patients as well as clinicians. Voice monitoring and speech signal analysis can be exploited to reach this goal. In this study, an Android application was designed for analyzing running speech on a smartphone. The application can record audio samples and estimate the speech fundamental frequency, F0, and its changes. F0-related features are estimated locally on the smartphone, which offers advantages over remote processing approaches in terms of privacy protection and reduced upload costs. The raw features can be sent to a central server for further processing. The quality of the audio recordings, the reliability of the algorithm, and the performance of the overall system were evaluated in terms of voiced segment detection and feature estimation. The results demonstrate that the mean F0 of each voiced segment can be reliably estimated, thus describing prosodic features across the speech sample. In contrast, features related to F0 variability within each voiced segment performed poorly. A case study on a bipolar patient is presented.
This study reports on a novel method to detect and reduce the contribution of movement artifact (MA) in electrocardiogram (ECG) recordings gathered from horses in free movement conditions. We propose a model that integrates cardiovascular and movement information to estimate the MA contribution. Specifically, ECG and physical activity are continuously acquired from seven horses through a wearable system. The system employs fully integrated textile electrodes to monitor the ECG and is also equipped with a triaxial accelerometer for movement monitoring. In the literature, the most widely used technique to remove movement artifacts when the noise bandwidth overlaps the primary source bandwidth is the adaptive filter. In this study we propose a new algorithm, hereinafter called Stationary Wavelet Movement Artifact Reduction (SWMAR), in which the Stationary Wavelet Transform (SWT) decomposition is employed to identify and remove movement artifacts from equine ECG signals. A comparative analysis with the Normalized Least Mean Square Adaptive Filter technique (NLMSAF) is performed as well. Results on seven hours of recordings showed a reduction of the MA percentage greater than 40% between before and after application of the proposed algorithm. Moreover, the comparative analysis with the NLMSAF, applied to the same ECG recordings, showed a greater reduction of the MA percentage in favour of SWMAR, with a statistically significant difference (p < 0.05).
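The abstract names the NLMS adaptive filter as the comparison baseline, with the accelerometer providing the movement reference. A minimal numpy sketch of such a filter, assuming a single reference channel, an illustrative filter order, and an illustrative step size (none of these parameters are given in the abstract):

```python
import numpy as np

def nlms_artifact_filter(ecg, acc_ref, order=8, mu=0.1, eps=1e-6):
    """Normalized LMS adaptive noise canceller (the NLMSAF-style
    baseline): the accelerometer signal is the noise reference, the
    adaptive FIR output estimates the movement artifact, and the
    error signal approximates the artifact-free ECG."""
    ecg = np.asarray(ecg, dtype=float)
    w = np.zeros(order)                   # adaptive FIR weights
    x_buf = np.zeros(order)               # most recent reference samples
    clean = np.zeros_like(ecg)
    for n in range(len(ecg)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = acc_ref[n]
        y = w @ x_buf                     # artifact estimate
        e = ecg[n] - y                    # cleaned ECG sample
        w += (mu / (eps + x_buf @ x_buf)) * e * x_buf
        clean[n] = e
    return clean
```

The normalization by the reference-buffer energy is what distinguishes NLMS from plain LMS and makes the step size robust to the accelerometer signal's scale. The paper's proposed SWT-based method (SWMAR) is a different, wavelet-domain approach not sketched here.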
This study reports on a preliminary estimation of human-horse interaction through the analysis of heart rate variability (HRV) in both human and animal using the dynamic time warping (DTW) algorithm. Here, we present a wearable system for HRV monitoring in horses. Specifically, we first present a validation of a wearable electrocardiographic (ECG) monitoring system for horses in terms of comfort and robustness; we then introduce a preliminary objective estimation of human-horse interaction. The performance of the proposed wearable system was compared with a standard system in terms of movement artifact (MA) percentage. Seven healthy horses were monitored without any movement constraints. The lower MA percentage of the wearable system suggests that it could be profitably used for reliable measurement of physiological parameters related to autonomic nervous system (ANS) activity in horses, such as HRV. Human-horse interaction estimation was achieved through the analysis of the HRV time series: DTW was applied to estimate the dynamic coupling between human and horse in a group of fourteen human subjects and one horse. Moreover, a support vector machine (SVM) classifier was able to recognize the three classes of interaction with an accuracy greater than 78%. These preliminary results showed the discrimination of three distinct real human-animal interaction levels and open the way to measuring and characterizing the empirically well-documented relationship between human and horse.
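DTW is named in the abstract as the coupling measure between the human and horse HRV series. A textbook dynamic-programming implementation (absolute-difference local cost, no warping-window constraint or normalization; those details are assumptions, since the abstract does not specify them):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two
    (possibly unequal-length) time series, e.g. human and horse
    RR-interval series. Smaller values mean tighter coupling."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because DTW aligns samples non-linearly in time, two HRV series with similar shape but different tempo still score a small distance, which is what makes it attractive for comparing heart rhythms across species with very different baseline rates.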
The voice signal has been widely investigated to characterize mood and emotional states. A further interesting dimension concerns personality traits. A relationship between personality traits and specific speech features is known, but this topic requires further investigation: most studies have focused on perceived personality traits, without adopting dedicated personality tests, and the relationship between a speaker's measured personality traits and specific speech features has yet to be clarified. In this study, a correlational analysis between speech-related features and personality traits, as described by the Zuckerman-Kuhlman model and the Toronto Alexithymia Scale, is performed. An experimental protocol consisting of two structured speech tasks was administered to eighteen healthy subjects. Speech features were estimated to describe fundamental frequency (F0) and voice-quality-related features from the whole speech recording, and tilt-related features describing F0 dynamics at the voiced segment level. Significant correlations between personality traits and speech features were observed using both feature sets. Interestingly, the adopted speech task was found to influence the results: no feature showed the same significant correlation in both tasks. The impact of personality traits and speech production studies on the characterization of mental disorders and on the estimation of the emotional/mood state of the speaker is discussed.