Faceteq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses in experimental Virtual Reality studies. Developed in the Emteq Ltd. laboratory, Faceteq opens new avenues for virtual reality research through the combination of high-performance patented dry-sensor technology, proprietary algorithms, and real-time data acquisition and streaming. Emteq founded the Faceteq project with the aim of providing an additional human-centered tool for emotion expression, affective human-computer interaction, and social virtual environments. The proposed demonstration will exhibit the hardware and its functionality by allowing attendees to experience three of the showcase applications we developed this year.
In immersive Virtual Reality systems, users tend to move in a Virtual Environment as they would in an analogous physical environment. In this work, we investigated how user behaviour is affected when the Virtual Environment differs from the physical space. We created two sets of four environments each, plus a virtual replica of the physical environment as a baseline. The first set focused on aesthetic discrepancies, such as a water surface in place of solid ground. The second focused on mixing immaterial objects with objects paired to tangible props, for example barring an area with walls or obstacles. We designed a study in which participants had to reach three waypoints laid out so as to prompt a decision on which path to follow, based on the conflict between the mismatching visual stimuli and their awareness of the real layout of the room. We analysed their trajectories to determine whether they deviated significantly from the shortest route. Our results indicate that participants altered their trajectories in the presence of surfaces suggesting higher walking difficulty (for example, water instead of grass). However, when the graphical appearance was ambiguous, there was no significant trajectory alteration. The environments mixing immaterial and physical objects had the greatest impact on trajectories, with a mean deviation from the shortest route of 60 cm, against 37 cm for the environments with aesthetic alterations. Participants reported that the co-existence of paired and unpaired virtual objects supported the belief that every object they saw was backed by a physical prop. From these results and our observations, we derive guidelines on how to alter user movement behaviour in Virtual Environments.
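The abstract does not define the deviation measure precisely; as one plausible reading, the Python sketch below (with hypothetical function names and toy data) computes the mean perpendicular distance of trajectory samples from the piecewise-straight shortest route through the waypoints.

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Distance from point p to the line segment a-b (2D arrays)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def mean_deviation(trajectory, waypoints):
    """Mean distance of trajectory samples to the nearest segment of
    the piecewise-straight shortest route through the waypoints."""
    segments = list(zip(waypoints[:-1], waypoints[1:]))
    return float(np.mean([
        min(point_to_segment_distance(p, a, b) for a, b in segments)
        for p in trajectory
    ]))

# Toy example: a walker weaving around a straight 4 m route.
waypoints = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])
xs = np.linspace(0.0, 4.0, 50)
trajectory = np.stack([xs, 0.4 * np.sin(np.pi * xs / 4.0)], axis=1)
print(f"mean deviation: {mean_deviation(trajectory, waypoints):.2f} m")
```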
We present a system for sensing and reconstructing facial expressions of the virtual reality (VR) head-mounted display (HMD) user. The HMD occludes a large portion of the user's face, which makes most existing facial performance capture techniques intractable. To tackle this problem, we apply a novel hardware solution in which electromyography (EMG) sensors attached to the headset frame track facial muscle movements. For realistic facial expression recovery, we first reconstruct the user's 3D face from a single image and generate personalized blendshapes associated with seven facial action units (AUs) on the most emotionally salient facial parts (ESFPs). We then use preprocessed EMG signals to measure the activations of AU-coded facial expressions and drive the pre-built personalized blendshapes. Since facial expressions are important nonverbal cues to the subject's internal emotional state, we further investigate the relationship between six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) and the detected AUs using a fern classifier. Experiments show that the proposed system can accurately sense and reconstruct high-fidelity common facial expressions while providing useful information about the emotional state of the HMD user.
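The blendshape-driving step can be illustrated with a standard linear blendshape model. The sketch below uses synthetic stand-in data and assumes AU activations in [0, 1] derived from the EMG pipeline; the EMG-to-activation mapping itself is not reproduced.

```python
import numpy as np

# Synthetic stand-ins: N vertices, 7 personalized blendshapes (one per AU),
# each stored as a per-vertex offset from the neutral face.
N_VERTICES, N_AUS = 5000, 7
neutral = np.zeros((N_VERTICES, 3))
blendshape_deltas = np.random.default_rng(0).normal(0, 0.01, (N_AUS, N_VERTICES, 3))

def reconstruct_face(au_activations):
    """Linear blendshape model: neutral face plus AU-weighted offsets.

    au_activations: length-7 vector in [0, 1], e.g. derived from the
    preprocessed EMG signals (that mapping is not shown here).
    """
    w = np.clip(np.asarray(au_activations, dtype=float), 0.0, 1.0)
    return neutral + np.tensordot(w, blendshape_deltas, axes=1)

# Example: strong activation of the first and fourth AU, others at rest.
mesh = reconstruct_face([0.8, 0.0, 0.0, 0.6, 0.0, 0.0, 0.0])
print(mesh.shape)  # (5000, 3)
```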
Breathing rate is considered one of the fundamental vital signs and a highly informative indicator of physiological state. Because monitoring heart activity is less complex than monitoring breathing, a variety of algorithms have been developed to estimate breathing activity from heart activity. However, estimating breathing rate from heart activity outside of laboratory conditions is still a challenge, and the challenge is even greater when new wearable devices with novel sensor placements are used. In this paper, we present a novel algorithm for breathing rate estimation from photoplethysmography (PPG) data acquired from a head-worn virtual reality mask with a PPG sensor placed on the wearer's forehead. The algorithm is based on advanced signal processing and machine learning techniques and includes a novel quality-assessment and motion-artifact removal procedure. The proposed algorithm is evaluated and compared to existing approaches from related work using two separate datasets containing data from a total of 37 subjects. Numerous experiments show that the proposed algorithm outperforms the compared algorithms, achieving a mean absolute error of 1.38 breaths per minute and a Pearson's correlation coefficient of 0.86. These results indicate that reliable estimation of breathing rate is possible from PPG data acquired with a head-worn device.
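The paper's actual pipeline, including its quality assessment and machine-learning components, is not reproduced here. The sketch below shows only the classic baseline idea of respiratory-band filtering followed by spectral peak picking, with band edges chosen as common defaults rather than the paper's settings.

```python
import numpy as np
from scipy.signal import butter, periodogram, sosfiltfilt

def estimate_breathing_rate(ppg, fs):
    """Estimate breathing rate (breaths/min) from a PPG segment.

    Bandpass to a typical respiratory band (0.1-0.5 Hz, i.e. 6-30
    breaths/min) and take the dominant spectral peak.
    """
    sos = butter(3, [0.1, 0.5], btype="band", fs=fs, output="sos")
    resp = sosfiltfilt(sos, ppg)
    freqs, power = periodogram(resp, fs=fs)
    return 60.0 * freqs[np.argmax(power)]

# Synthetic example: 70 bpm pulse with respiratory baseline wander
# at 0.25 Hz (15 breaths/min), 60 s sampled at 50 Hz.
fs = 50
t = np.arange(0, 60, 1 / fs)
ppg = np.sin(2 * np.pi * 70 / 60 * t) + 0.3 * np.sin(2 * np.pi * 0.25 * t)
print(f"{estimate_breathing_rate(ppg, fs):.1f} breaths/min")  # ~15.0
```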
Using a novel wearable surface electromyography (sEMG) device, we investigated induced affective states by measuring the activation of facial muscles traditionally associated with positive expressions (left/right orbicularis and left/right zygomaticus) and negative expressions (the corrugator muscle). In a sample of 38 participants who watched 25 affective videos in a virtual reality environment, sEMG amplitude varied significantly with video content for each of the three variables examined: subjective valence, subjective arousal, and objective valence as defined by the validated video types (positive, neutral, and negative). sEMG amplitude from the "positive" muscles increased when participants were exposed to positively valenced stimuli compared with negatively valenced stimuli. In contrast, activation of the "negative" muscle was elevated following exposure to negatively valenced stimuli compared with positively valenced stimuli. High-arousal videos increased muscle activation compared to low-arousal videos in all the measured muscles except the corrugator. In line with previous research, sEMG amplitude as a function of subjective valence was V-shaped.
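A common way to quantify sEMG activation is the RMS amplitude of a bandpass-filtered segment. The sketch below uses generic band edges and synthetic data to illustrate the measure; the study's exact preprocessing is not specified in the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def semg_amplitude(raw, fs):
    """RMS amplitude of an sEMG segment after bandpass filtering.

    The 20-450 Hz band is a common surface-EMG default, not
    necessarily the band used in the study.
    """
    sos = butter(4, [20, 450], btype="band", fs=fs, output="sos")
    return float(np.sqrt(np.mean(sosfiltfilt(sos, raw) ** 2)))

# Toy comparison: higher-variance noise stands in for the stronger
# zygomaticus activation expected during a positively valenced video.
fs = 1000
rng = np.random.default_rng(0)
zygomaticus_positive = rng.normal(0.0, 2.0, 5 * fs)
zygomaticus_negative = rng.normal(0.0, 0.5, 5 * fs)
print(semg_amplitude(zygomaticus_positive, fs) >
      semg_amplitude(zygomaticus_negative, fs))  # True
```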
This paper presents the emteqPRO™ system, which uses a face-mounted multi-sensor mask to measure the user's facial physiological responses, facial muscle activations, and motion. These responses are then analyzed by machine learning algorithms to recognize and better understand the user's affective state and context, i.e., emotions, arousal, valence, stress response, activities, etc. The system can work on its own, as an open mask, or can be combined with a commercial Virtual Reality head-mounted display. It comprises three sensor modalities: a 7-contact f-EMG sensor, a PPG sensor, and a 3-axis IMU, enabling it to measure the affective state of the user in real time. We will demonstrate how the system is used in practice in a Virtual Reality environment. This newly developed technology has the potential to significantly improve the way we collect data, design experiences, and interact within Virtual, Mixed, and Augmented Realities.
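Purely as an illustration of the three modalities described above (the real emteqPRO API is not shown in the text), one synchronized sample could be modelled as follows; field names, units, and channel layout are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MaskSample:
    """One hypothetical synchronized sample from the three modalities.

    Field names, units, and channel counts are illustrative
    assumptions, not the real emteqPRO data format.
    """
    timestamp_s: float                    # shared clock, seconds
    emg_uv: tuple[float, ...]             # 7-contact f-EMG amplitudes
    ppg: float                            # raw PPG value
    accel_g: tuple[float, float, float]   # 3-axis IMU acceleration

sample = MaskSample(0.0, (0.0,) * 7, 0.51, (0.0, 0.0, 1.0))
print(sample.timestamp_s, len(sample.emg_uv))
```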
Virtual Reality (VR) enables the simulation of ecologically valid scenarios, which are ideal for studying behaviour under controllable conditions. Physiological measures captured in these studies provide deeper insight into how an individual responds to a given scenario. However, combining various biosensing devices presents several challenges, such as efficient time synchronisation between multiple devices, replication across participants and settings, and managing cumbersome setups. Additionally, important salient facial information is typically covered by the VR headset, requiring a different approach to facial muscle measurement. These challenges can restrict the use of such devices in laboratory settings. This paper describes a solution to this problem. More specifically, we introduce the emteqPRO system, which provides an all-in-one solution for the collection of physiological data through a multi-sensor array built into the VR headset. EmteqPRO is a ready-to-use, flexible sensor platform enabling convenient, heterogeneous, and multimodal emotion research in VR. It enables the capture of facial muscle activations, heart rate features, skin impedance, and movement data, all important factors for the study of emotion and behaviour. The platform provides researchers with the ability to monitor data from users in real time, in co-located and remote set-ups, and to detect activations in physiology that are linked to changes in arousal and valence. The SDK (Software Development Kit), developed specifically for the Unity game engine, enables easy integration of emteqPRO features into VR environments. Code available at: https://github.com/emteqlabs/emteqvr-unity/releases
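The time-synchronisation challenge mentioned above can be illustrated generically: the sketch below resamples heterogeneous (timestamp, value) streams onto a shared clock by linear interpolation. This is an assumption-laden illustration of the general technique, not emteqPRO's actual mechanism.

```python
import numpy as np

def align_streams(streams, rate_hz):
    """Resample multiple (timestamps, values) streams onto one clock.

    streams: dict mapping stream name to (timestamps, values) arrays.
    Returns the common time grid and the interpolated streams.
    """
    start = max(ts[0] for ts, _ in streams.values())
    stop = min(ts[-1] for ts, _ in streams.values())
    grid = np.arange(start, stop, 1.0 / rate_hz)
    return grid, {
        name: np.interp(grid, ts, vals)
        for name, (ts, vals) in streams.items()
    }

# Toy example: heart rate sampled at 1 Hz, EMG envelope at 50 Hz.
t_hr = np.arange(0, 10, 1.0)
t_emg = np.arange(0, 10, 0.02)
grid, aligned = align_streams(
    {"hr": (t_hr, 60 + 5 * np.sin(t_hr)),
     "emg": (t_emg, np.abs(np.sin(3 * t_emg)))},
    rate_hz=20,
)
print(grid.shape, aligned["hr"].shape, aligned["emg"].shape)
```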