Speech is the most natural form of human communication, carrying the emotional state of the speaker, which plays an important role in social interaction. Many instant messaging apps now let users exchange voice audios, so a great amount of voice data is generated every day, presenting a new challenge for speech emotion recognition in real environments. In this study, we investigated emotion recognition from voice messages recorded in the wild using machine-learning algorithms. Unlike most research in this field, which relies on databases of emotions evoked in laboratory environments, simulated by actors, or subjectively selected from radio or TV talks, we created an ecological speech dataset with audios from real WhatsApp conversations of 30 Spanish speakers. Four external evaluators labelled each audio in terms of arousal and valence using the Self-Assessment Manikin (SAM) procedure. Pre-processing techniques were applied to the audios, and different time- and frequency-domain features were extracted. Supervised machine-learning classifiers were trained, using feature reduction and hyper-parameter tuning, to recognize the affective state of each voice message. The best recognition rate was obtained with Support Vector Machines: 71.37% along the arousal dimension and 70.73% along the valence dimension. These results support the use of emotion recognition models in daily communication apps, helping to understand human social behavior and interactions with devices in the real world.
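The classification approach described above can be sketched as follows. This is a minimal illustration assuming scikit-learn, with synthetic stand-in data: the study's actual inputs are time- and frequency-domain acoustic features from voice messages, and its exact feature-reduction method and hyper-parameter grid are not specified in the abstract.

```python
# Hedged sketch: SVM classifier with feature reduction and
# hyper-parameter tuning, as in the abstract's pipeline.
# Data are synthetic stand-ins, not the study's WhatsApp audios.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))                   # 200 audios x 40 acoustic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # binary label, e.g. high/low arousal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),                 # standardize features
    ("select", SelectKBest(f_classif, k=10)),    # feature reduction
    ("svm", SVC()),
])
# Hyper-parameter tuning over kernel and regularization strength
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10],
                           "svm__kernel": ["rbf", "linear"]}, cv=3)
grid.fit(X_tr, y_tr)
acc = grid.score(X_te, y_te)                     # held-out recognition rate
```

In a real setting, the same pipeline would be fitted separately for the arousal and valence labels.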
Many symptoms of autism spectrum disorder (ASD) are evident in early infancy, but ASD is usually diagnosed much later, by procedures lacking objective measurements. Earlier identification of ASD requires more objective procedures and the use of ecological settings. In this context, a consensus is emerging that atypical motor skills are a promising ASD biomarker, regardless of the level of symptom severity. This study aimed to assess differences in whole-body motor skills between 20 children with ASD and 20 children with typical development during the execution of three tasks resembling regular activities presented in virtual reality. The virtual tasks required precise, goal-directed actions performed with different limbs and varying in their degrees of freedom of movement. Parametric and non-parametric statistical methods were applied to analyze differences in the children's motor skills. The findings endorsed the hypothesis that, when particular goal-directed movements are required, the type of action can modulate the presence of motor abnormalities in ASD. In particular, motor abnormalities in ASD emerged in the task requiring goal-directed upper-limb actions with a low degree of freedom. These abnormalities affected (1) the body part mainly involved in the action, and (2) further body parts not directly involved in the movement. The findings were discussed against the background of atypical prospective control of movements and visuomotor discoordination in ASD, and they contribute to advancing the understanding of motor skills in ASD while deepening ecological and objective assessment procedures based on virtual reality.
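The group comparison above can be sketched with a standard pattern: a parametric test when normality holds, a non-parametric alternative otherwise. This is an assumed illustration using scipy with synthetic movement scores; the abstract does not state which specific tests or motor measures the study used.

```python
# Hedged sketch: parametric vs non-parametric comparison of two
# groups of 20 children, as in the abstract. The movement metric
# (here a made-up per-child score) is a synthetic stand-in.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
asd = rng.normal(loc=1.4, scale=0.3, size=20)   # 20 children with ASD
td = rng.normal(loc=1.0, scale=0.3, size=20)    # 20 typically developing children

# Shapiro-Wilk normality checks guide the choice of test
normal = (stats.shapiro(asd).pvalue > 0.05
          and stats.shapiro(td).pvalue > 0.05)
if normal:
    stat, p = stats.ttest_ind(asd, td)          # parametric: independent t-test
else:
    stat, p = stats.mannwhitneyu(asd, td)       # non-parametric: Mann-Whitney U
```

A significant p-value would indicate a group difference on that motor measure for the given task.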
Attachment styles are known to have significant associations with mental and physical health. Specifically, insecure attachment puts individuals at higher risk of suffering from mental disorders and chronic diseases. The aim of this study was to develop an attachment recognition model that can distinguish between secure and insecure attachment styles from voice recordings, exploring the importance of acoustic features while also evaluating gender differences. A total of 199 participants recorded their responses to four open questions intended to trigger their attachment system, using a web-based interrogation system. The recordings were processed to obtain the standard acoustic feature set eGeMAPS, and recursive feature elimination was applied to select the relevant features. Different supervised machine-learning models were trained to recognize attachment styles using both gender-dependent and gender-independent approaches. The gender-independent model achieved a test accuracy of 58.88%, whereas the gender-dependent models obtained 63.88% and 83.63% test accuracy for women and men respectively, indicating a strong influence of gender on attachment style recognition and the need to consider the genders separately in further studies. These results also demonstrate the potential of acoustic properties for remote assessment of attachment style, enabling fast and objective identification of this health risk factor and thus supporting the implementation of large-scale mobile screening systems.
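The feature-selection step above can be sketched as follows, assuming scikit-learn. The 88-dimensional input matches the size of the standard eGeMAPS set, but the feature values here are synthetic stand-ins, and the estimator, number of retained features, and evaluation protocol are illustrative choices not stated in the abstract.

```python
# Hedged sketch: recursive feature elimination (RFE) over an
# 88-dimensional acoustic feature vector (the eGeMAPS size),
# followed by a supervised classifier. Data are synthetic.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(199, 88))             # 199 speakers x 88 eGeMAPS features
y = (X[:, 3] - X[:, 7] > 0).astype(int)    # secure (1) vs insecure (0), synthetic

# RFE repeatedly drops the feature with the smallest linear-SVM weight
selector = RFE(SVC(kernel="linear"), n_features_to_select=20)
X_sel = selector.fit_transform(X, y)

# Cross-validated accuracy on the selected features (note: selecting
# features on the full data before CV is a simplification that can
# leak information; a real pipeline would select inside each fold)
scores = cross_val_score(SVC(kernel="linear"), X_sel, y, cv=5)
```

A gender-dependent variant would simply fit this pipeline separately on the female and male subsets.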
Online Social Media (OSM) dominate the wide range of Internet services. Given their vast audience, it is crucial to evaluate interpersonal trust among OSM users, which can identify reliable sources of information, the meaningfulness of a relationship, or the trustworthiness of other users. SentiTrust is an innovative trust model for Decentralized Online Social Networks based on AI-powered sentiment analysis. It enriches the definition of trust by exploiting features enabled by the use of social media on mobile devices, and it can be easily extended and customized to the scenario of interest. The sentiment analysis component was tested with 30 participants who completed several guided tasks using a social media application while their electrodermal activity and rating responses were measured. The results suggest that low-arousal states are related to receiving happy faces and to sending more messages per minute. Furthermore, positive interactions result in shorter interactions and multimedia exchanges.