With the prevalence of virtual avatars and the recent emergence of metaverse technology, more users are expressing their identity through an avatar. The research community has focused on improving the realistic expressions and non-verbal communication channels of virtual characters to create a more customized experience. However, little is understood about how avatars can embody a user's signature expressions (i.e., the user's habitual facial expressions and facial appearance) to provide an individualized experience. Our study identified elements that may affect the user's social perception (similarity, familiarity, attraction, liking, and involvement) of customized virtual avatars engineered to reflect the user's facial characteristics. We evaluated participants' subjective appraisals of avatars that embodied their habitual facial expressions or facial appearance. Results indicated that participants felt the avatar that embodied their habitual expressions was more similar to them than the avatar that did not, and that the avatar that embodied their appearance was more familiar than the avatar that did not. Designers should be mindful of how people perceive individuated virtual avatars in order to accurately represent the user's identity and help users relate to their avatars.
The success of digital content depends largely on whether viewers empathize with its stories and narratives. Researchers have investigated the elements that may elicit empathy from viewers. Empathic response involves affective and cognitive processes and is expressed through multiple verbal and nonverbal modalities. In particular, eye movements communicate emotions and intentions and may reflect an empathic state. This study explores how features of eye movements change when a viewer empathizes with video content. Seven eye-movement feature variables (change in pupil diameter, peak pupil dilation, very-short, mid-range, and over-long fixation durations, saccadic amplitude, and saccade count) were extracted from 47 participants who viewed eight videos (four empathic and four non-empathic) distributed across a two-dimensional emotion space (arousal and valence). The results showed that viewers' saccadic amplitude and peak pupil dilation increased in the empathic condition. Fixation duration and pupil-size change showed limited significance, and whether the left and right pupils responded asymmetrically remained inconclusive. Our investigation suggests that saccadic amplitude and peak pupil dilation are reliable measures for recognizing whether viewers empathize with content. The findings provide eye-movement-based physiological evidence that both affective and cognitive processes accompany empathy during media consumption.
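As an illustration of how two of the reported features might be computed, the sketch below extracts mean saccadic amplitude and peak pupil dilation from raw gaze samples. The velocity threshold, baseline window, and function names are assumptions made for illustration; the abstract does not specify the authors' extraction pipeline.

    import numpy as np

    def gaze_features(t, x, y, pupil, vel_thresh=30.0):
        """Sketch of two eye-movement features from raw gaze samples.
        t: timestamps (s); x, y: gaze position (deg); pupil: diameter (mm).
        vel_thresh: angular velocity (deg/s) separating saccades from
        fixations -- an assumed value; thresholds vary across studies."""
        dt = np.diff(t)
        speed = np.hypot(np.diff(x), np.diff(y)) / dt    # deg/s per sample
        in_saccade = speed > vel_thresh
        # Count saccadic episodes (rising edges of the saccade mask).
        episodes = int(in_saccade[0]) + np.count_nonzero(
            np.diff(in_saccade.astype(int)) == 1)
        # Mean saccadic amplitude: angular distance covered during
        # saccadic samples, averaged over episodes.
        step = speed * dt                                # deg per sample
        mean_amplitude = step[in_saccade].sum() / max(episodes, 1)
        # Peak pupil dilation relative to a 0.5 s pre-stimulus baseline.
        baseline = pupil[t < t[0] + 0.5].mean()
        peak_dilation = pupil.max() - baseline
        return mean_amplitude, peak_dilation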
Emotional intelligence (EI) is a critical social-intelligence skill that refers to an individual's ability to assess their own emotions and those of others. While EI has been shown to predict an individual's productivity, personal success, and ability to maintain positive relationships, its assessment has relied primarily on subjective reports, which are vulnerable to response distortion and limit the validity of the assessment. To address this limitation, we propose a novel method for assessing EI based on physiological responses, specifically heart rate variability (HRV) and heart rate dynamics. We conducted four experiments to develop this method. First, we designed, analyzed, and selected photos for evaluating the ability to recognize emotions. Second, we produced and selected facial-expression stimuli (i.e., avatars) standardized on a two-dimensional emotion model. Third, we recorded participants' physiological responses (HRV and heart rate dynamics) while they viewed the photos and avatars. Finally, we analyzed the HRV measures to produce an evaluation criterion for assessing EI. Results showed that low- and high-EI participants could be discriminated by the number of HRV indices that differed statistically between the two groups. Specifically, 14 HRV indices, including HF (high-frequency power), lnHF (the natural logarithm of HF), and RSA (respiratory sinus arrhythmia), were significant markers for discerning low- from high-EI groups. Our method can improve the validity of EI assessment by providing objective, quantifiable measures that are less vulnerable to response distortion.
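To make the named indices concrete: HF is the spectral power of the interbeat-interval series in the 0.15-0.4 Hz band, and lnHF is its natural logarithm. Below is a minimal sketch of that computation, assuming a series of RR intervals in milliseconds and a conventional 4 Hz resampling rate; the paper's exact pipeline is not given in the abstract.

    import numpy as np
    from scipy.signal import welch

    def hf_and_lnhf(rr_ms, fs=4.0):
        """HF power (0.15-0.4 Hz, ms^2) and lnHF from RR intervals (ms).
        fs: rate (Hz) for resampling the unevenly spaced beat series."""
        t = np.cumsum(rr_ms) / 1000.0                    # beat times (s)
        grid = np.arange(t[0], t[-1], 1.0 / fs)          # uniform time grid
        rr_even = np.interp(grid, t, rr_ms)              # resampled RR series
        f, psd = welch(rr_even - rr_even.mean(), fs=fs,
                       nperseg=min(256, len(rr_even)))
        band = (f >= 0.15) & (f < 0.40)
        hf = np.trapz(psd[band], f[band])                # band power
        return hf, np.log(hf)

The study's discrimination criterion then counts how many such indices differ significantly between the low- and high-EI groups.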
Tracking consumer empathy is one of the biggest challenges for advertisers. Although numerous studies have shown that consumers' empathy affects purchasing, there are few quantitative and unobtrusive methods for assessing whether a viewer shares congruent emotions with an advertisement. This study proposes a non-contact method for measuring empathy by evaluating the synchronization of micro-movements between consumers and the people who appear in the media. Thirty participants viewed 24 advertisements classified as either empathic or non-empathic. For each viewing, we recorded facial data and subjective empathy scores. Using a camera, with no sensors attached to the participant, we remotely recorded facial micro-movements, which reflect ballistocardiographic (BCG) motion transmitted through the carotid artery. Synchronization of cardiovascular measures (e.g., heart rate) is known to indicate higher levels of empathy. Cross-entropy analysis showed that the more similar the micro-movements between the participant and the person in the advertisement, the higher the participant's empathy scores for the advertisement. The study suggests that non-contact BCG methods can be used where sensor attachment is impractical (e.g., measuring empathy between a viewer and media content) and can complement subjective empathy scales.
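The abstract does not detail the cross-entropy estimator, so the following sketch uses one plausible reading: histogram distributions of the viewer's and the on-screen person's micro-movement signals are compared, with lower cross-entropy meaning more similar (more synchronized) movement. The bin count and smoothing are illustrative choices, not the authors' settings.

    import numpy as np

    def movement_cross_entropy(viewer, actor, bins=32):
        """Cross-entropy H(p, q) between micro-movement distributions.
        viewer, actor: 1-D micro-movement signals; lower values indicate
        more similar movement statistics."""
        lo = min(viewer.min(), actor.min())
        hi = max(viewer.max(), actor.max())
        p, _ = np.histogram(viewer, bins=bins, range=(lo, hi))
        q, _ = np.histogram(actor, bins=bins, range=(lo, hi))
        p = p / p.sum()
        q = (q + 1e-12) / (q.sum() + bins * 1e-12)   # smooth to avoid log(0)
        return -np.sum(p * np.log(q))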
People tend to display fake expressions to conceal their true feelings. Fake expressions can be revealed by facial micromovements that occur within less than a second. Systems designed to recognize facial expressions (e.g., social robots, recognition systems for the blind, monitoring systems for drivers) may better understand a user's intent by identifying the authenticity of the expression. The present study investigated the characteristics of real and fake facial expressions of representative emotions (happiness, contentment, anger, and sadness) in a two-dimensional emotion model. Participants viewed a series of visual stimuli designed to induce real or fake emotions and were signaled to produce a facial expression at a set time. From the expression data, feature variables capturing facial micromovements at the onset of the expression (i.e., the degree and variance of movement and the vibration level) were analyzed. The results indicated significant differences in the feature variables between the real and fake expression conditions, and the differences varied across facial regions as a function of emotion. This study provides appraisal criteria for identifying the authenticity of facial expressions that are applicable to future research and the design of emotion-recognition systems.
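A minimal sketch of the three feature variables, assuming facial landmarks tracked per video frame; the exact operational definitions used in the study are not specified in the abstract, so these are plausible stand-ins.

    import numpy as np

    def micromovement_features(landmarks):
        """Feature variables for one facial region at expression onset.
        landmarks: array of shape (frames, points, 2) with tracked
        landmark coordinates (pixels)."""
        # Per-frame displacement of each landmark point.
        disp = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)
        degree_of_movement = disp.mean()   # average motion per frame
        movement_variance = disp.var()     # spread of the motion
        # "Vibration level" approximated as mean squared frame-to-frame
        # change in displacement (an assumed operationalization).
        vibration_level = np.mean(np.diff(disp, axis=0) ** 2)
        return degree_of_movement, movement_variance, vibration_level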