A body of research in Affective Computing suggests that it is possible to infer some characteristics of users' affective states by analyzing their electrophysiological activity in real time. However, it is not clear how to use the information extracted from electrophysiological signals to create visual representations of the affective states of Virtual Reality (VR) users. Visualizing users' affective states in VR can enable biofeedback therapies for mental health care. Understanding how to visualize affective states in VR requires an interdisciplinary approach that integrates psychology, electrophysiology, and audio-visual design. Therefore, this review integrates previous studies from these fields to understand how to develop virtual environments that can automatically create visual representations of users' affective states. The manuscript addresses this challenge in four sections: First, theories related to emotion and affect are summarized. Second, evidence that visual and sound cues tend to be associated with affective states is discussed. Third, some of the available methods for assessing affect are described. The fourth and final section presents five practical considerations for the development of virtual reality environments for affect visualization.
Affective states can propagate within a group of people and influence members' ability to judge others' affective states. In the present paper, we present a simple mathematical model to describe this process in a three-dimensional affective space. We obtained data from 67 participants randomly assigned to two experimental groups. Participants watched either an upsetting or an uplifting video previously calibrated for this purpose. Immediately afterward, participants reported their baseline subjective affect in three dimensions: (1) positivity, (2) negativity, and (3) arousal. In a second phase, participants rated the affect they subjectively judged from 10 target angry faces and 10 target happy faces on the same three-dimensional scales. These judgments were used as an index of participants' affective states after observing the faces. Participants' affective responses were subsequently mapped onto a simple three-dimensional model of emotional contagion, in which the shortest distance between the baseline self-reported affect and the target judgment was calculated. The results display a double dissociation: negatively induced participants showed more emotional contagion to angry than to happy faces, while positively induced participants showed more emotional contagion to happy than to angry faces. In sum, the emotional contagion exerted by the videos selectively affected judgments of the affective states of others' faces. We discuss the directionality of emotional contagion to faces, considering whether negative emotions propagate more easily than positive ones. Additionally, we comment on the lack of significant correlations between our model and standardized tests of empathy and emotional contagion.
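The distance computation at the core of the model described above can be sketched in a few lines. The code below is an illustrative reconstruction, not the authors' implementation: the function name, the tuple ordering (positivity, negativity, arousal), and the example 1–9 rating values are all assumptions.

```python
import math

def contagion_index(baseline, judgment):
    """Euclidean distance between the baseline self-reported affect and a
    target-face judgment in (positivity, negativity, arousal) coordinates.
    A shorter distance means the judgment sits closer to the observer's
    own state, i.e. stronger emotional contagion."""
    return math.sqrt(sum((b - j) ** 2 for b, j in zip(baseline, judgment)))

# Hypothetical ratings on 1-9 scales: (positivity, negativity, arousal)
baseline = (7.0, 2.0, 5.0)   # self-report after the uplifting video
judgment = (6.0, 3.0, 5.0)   # affect judged from a happy target face
print(round(contagion_index(baseline, judgment), 3))  # 1.414
```

Under this reading, a smaller index for happy faces than for angry faces in the positively induced group would reflect the double dissociation the abstract reports.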
This paper considers the problem of diagnosing people's driving skills under real driving conditions using GPS data and video records. For this real-environment implementation, a new intelligent driving diagnosis system based on fuzzy logic was developed. The system seeks to capture expert driving criteria for driving assessment. The analysis takes into account GPS signals such as position, velocity, acceleration, and vehicle yaw angle, because of their relation to drivers' maneuvers. This work first presents the proposed scheme for the intelligent driving diagnosis agent in terms of its characteristic properties, which explain important considerations about how an intelligent agent must be conceived. Second, it explains the implementation of the agent's fuzzy logic algorithm, which analyzes real-time telemetry signals against a proposed set of driving diagnosis rules based on a quantitative abstraction of traffic laws and safe-driving techniques. Experimental testing was performed under real driving conditions: all tested drivers performed the driving task on real streets. The results show that the intelligent driving diagnosis system provides quantitative assessments of driving performance with a high degree of reliability.
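As an illustration of the general approach (not the system's actual rule base), the sketch below evaluates a toy Mamdani-style fuzzy rule over two telemetry-derived quantities. The membership breakpoints, the speed limit, and the rule itself are invented for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def diagnose(speed_kmh, accel_ms2, speed_limit=60.0):
    """Toy fuzzy diagnosis: fuse 'speeding' and 'harsh acceleration'
    memberships (Mamdani-style OR via max) into a score in [0, 1],
    where 1.0 means no rule fired at all."""
    speeding = tri(speed_kmh, speed_limit, speed_limit + 20.0, speed_limit + 40.0)
    harsh = tri(abs(accel_ms2), 2.0, 4.0, 6.0)
    risk = max(speeding, harsh)
    return 1.0 - risk

print(diagnose(90.0, 1.0))  # mild speeding -> 0.5
print(diagnose(50.0, 0.5))  # within limits -> 1.0
```

A production system of the kind the paper describes would use many more rules and signals (yaw angle, position relative to lanes), but the pattern of fuzzifying telemetry and aggregating rule activations is the same.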
One of the challenges of the post-COVID pandemic era will be to foster social connections between people. Previous research suggests that people who are able to regulate their emotions tend to have better social connections with others. Additional studies indicate that it is possible to train the ability to regulate emotions voluntarily, using a procedure that involves three steps: (1) asking participants to evoke an autobiographical memory associated with a positive emotion; (2) analyzing participants' brain activity in real time to estimate their emotional state; and (3) providing visual feedback about the emotions evoked by the autobiographical memory. However, there is not enough research on how to provide the visual feedback required for the third step. Therefore, this manuscript introduces five virtual environments that can be used to provide emotional visual feedback. Each virtual environment was designed based on evidence from previous studies suggesting that visual cues, such as colors, shapes, and motion patterns, tend to be associated with emotions. In each virtual environment, the visual cues changed to represent one of five emotional categories. An experiment was conducted to analyze the emotions that participants associated with the virtual environments. The results indicate that each environment is associated with the emotional category it was meant to represent.
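By way of illustration, one simple way to turn an estimated emotional state into a visual cue is to map affect dimensions onto color. The sketch below is a hypothetical mapping, not taken from the manuscript: the [-1, 1] valence/arousal ranges and the particular hue/brightness assignments are assumptions chosen only to show the shape of such a mapping.

```python
import colorsys

def affect_to_rgb(valence, arousal):
    """Map valence and arousal (both assumed in [-1, 1]) to an RGB color:
    hue shifts from blue toward orange as valence rises, and brightness
    grows with arousal. Purely illustrative, not the paper's design."""
    hue = 0.6 - 0.25 * (valence + 1.0)    # 0.6 (blue) down to 0.1 (orange)
    value = 0.5 + 0.25 * (arousal + 1.0)  # 0.5 (dim) up to 1.0 (bright)
    r, g, b = colorsys.hsv_to_rgb(hue, 0.8, value)
    return tuple(round(c, 3) for c in (r, g, b))

print(affect_to_rgb(1.0, 1.0))    # warm, bright color for positive/high arousal
print(affect_to_rgb(-1.0, -1.0))  # cool, dim color for negative/low arousal
```

In a VR environment, the resulting color could drive ambient lighting or particle tints, updated as each new emotion estimate arrives.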
This manuscript explores the development of a technique for detecting the affective states of Virtual Reality (VR) users in real time. The technique was tested with data from an experiment in which 18 participants observed 16 videos with emotional content inside a VR home theater while their electroencephalography (EEG) signals were recorded. Participants evaluated their affective responses to the videos in terms of a three-dimensional model of affect. Two variants of the technique were analyzed, differing only in the method used for feature selection. In the first variant, features extracted from the EEG signals were selected using Linear Mixed-Effects (LME) models. In the second variant, features were selected using Recursive Feature Elimination with Cross-Validation (RFECV). Random forest was used in both variants to build the classification models. Accuracy, precision, recall, and F1 scores were obtained by cross-validation. An ANOVA was conducted to compare the accuracy of the models built with each variant. The results indicate that the feature selection method does not have a significant effect on the accuracy of the classification models; therefore, both variants (LME and RFECV) seem equally reliable for detecting the affective states of VR users. The mean accuracy of the classification models was between 87% and 93%.
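The evaluation metrics named in the abstract can be computed directly from a confusion matrix. The sketch below is a generic reconstruction for the binary case (the paper's classification task is not necessarily binary, and the example counts are made up):

```python
def classification_scores(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

# Made-up fold counts: 8 true positives, 1 false positive,
# 1 false negative, 10 true negatives
acc, prec, rec, f1 = classification_scores(8, 1, 1, 10)
print(acc)  # 0.9
```

In a cross-validation setting like the one described, these scores would be computed per fold and then averaged to produce the reported means.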