We present a system for sensing and reconstructing the facial expressions of a virtual reality (VR) head-mounted display (HMD) user. The HMD occludes a large portion of the user's face, which renders most existing facial performance capture techniques intractable. To tackle this problem, we apply a novel hardware solution in which electromyography (EMG) sensors attached to the headset frame track facial muscle movements. For realistic facial expression recovery, we first reconstruct the user's 3D face from a single image and generate personalized blendshapes associated with seven facial action units (AUs) on the most emotionally salient facial parts (ESFPs). We then use preprocessed EMG signals to measure the activations of AU-coded facial expressions, which drive the pre-built personalized blendshapes. Since facial expressions are important nonverbal cues to a subject's internal emotional state, we further investigate the relationship between six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) and the detected AUs using a fern classifier. Experiments show that the proposed system can accurately sense and reconstruct high-fidelity common facial expressions while providing useful information about the emotional state of the HMD user.
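The core idea of driving personalized blendshapes with AU activations can be sketched as an additive delta-blendshape model. This is an illustrative sketch only, not the authors' implementation; the mesh shapes, AU name, and activation values are assumptions.

```python
import numpy as np

def blend(neutral, blendshapes, activations):
    """Additive delta-blendshape model (a common convention, assumed here).

    neutral:     (V, 3) array of neutral-face vertices
    blendshapes: dict mapping an AU name to its (V, 3) target mesh
    activations: dict mapping an AU name to a weight in [0, 1]
                 (in the paper's pipeline these would come from EMG)
    """
    out = neutral.copy()
    for au, w in activations.items():
        # Each AU contributes its displacement from neutral, scaled by activation.
        out += w * (blendshapes[au] - neutral)
    return out

# Tiny synthetic example: 4 vertices, one hypothetical AU target mesh.
neutral = np.zeros((4, 3))
shapes = {"AU12": np.ones((4, 3))}          # "AU12" used purely as an illustrative label
mesh = blend(neutral, shapes, {"AU12": 0.5})
print(mesh[0])  # halfway between neutral and the AU12 target
```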
Previous work has shown that perceptual texture similarity and relative attributes cannot be well described directly by computational features. In this paper, we propose to predict human visual perception of texture images by learning a nonlinear mapping from the computational feature space to the perceptual space. Hand-crafted features and deep features, both of which have been applied successfully in texture classification tasks, were extracted and used to train Random Forest and rankSVM models against perceptual data from psychophysical experiments. Three texture datasets were used to test the proposed method, and the experiments show that the predictions of the learned models correlate highly with human judgments.
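The feature-to-perception mapping described above can be sketched with a Random Forest regressor. This is a minimal sketch under stated assumptions: the synthetic features, the synthetic "perceptual" scores, and the hyperparameters are placeholders, not the paper's data or setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))               # stand-in for hand-crafted/deep texture features
y = np.tanh(X[:, 0] + 0.5 * X[:, 1] ** 2)    # stand-in for perceptual scores from experiments

# Learn a nonlinear mapping from computational feature space to perceptual space.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
pred = model.predict(X)

# Evaluate how well predictions track the (here synthetic) perceptual scores.
corr = np.corrcoef(pred, y)[0, 1]
print(round(corr, 3))
```

With real data one would of course report correlation on held-out textures rather than on the training set; the in-sample correlation here only demonstrates the fitting step.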
Eye center localization is a crucial requirement for various human-computer interaction applications such as eye gaze estimation and eye tracking. Although significant progress has been made in this field in recent years, the task remains very challenging under significant variability caused by differences in illumination, shape, color, and viewing angle. In this paper, we propose a hybrid regression and isophote-curvature method for accurate eye center localization in low-resolution images. The proposed method first applies a regression method, the Supervised Descent Method (SDM), to obtain rough locations of the eye region and eye centers; SDM is robust against appearance variations in the eye region. To refine these locations, the isophote curvature method is then applied to the detected eye region to produce several candidate eye center points. Finally, the estimated eye center locations from the isophote curvature method and SDM are taken as candidates, and an SDM-based means-of-gradients method further refines them. Combining regression with isophote curvature thus achieves both robustness and accuracy. In the experiments, we extensively evaluated the proposed method on two challenging and realistic public databases for eye center localization and compared it with existing state-of-the-art methods. The results confirm that the proposed method outperforms the state of the art with a significant improvement in accuracy and robustness, at lower computational complexity.
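The isophote-curvature voting step can be sketched as follows: each pixel on a curved isophote casts a vote at the estimated center of its osculating circle, and the accumulator peak is taken as the eye center. This is a rough sketch under assumptions, not the paper's implementation; the published isophote method additionally weights votes and filters them by curvature sign (to prefer dark centers), which is omitted here for brevity, and the synthetic test image is a placeholder for a real eye patch.

```python
import numpy as np

def isophote_center_votes(img):
    """Vote for isophote centers of curvature; return the accumulator peak (row, col)."""
    Ly, Lx = np.gradient(img.astype(float))   # axis 0 = y (rows), axis 1 = x (cols)
    Lyy, _ = np.gradient(Ly)
    Lxy, Lxx = np.gradient(Lx)
    num = Lx ** 2 + Ly ** 2
    den = Ly ** 2 * Lxx - 2 * Lx * Lxy * Ly + Lx ** 2 * Lyy
    den[np.abs(den) < 1e-9] = np.nan          # avoid division by zero on flat regions
    # Displacement from each pixel to the center of its isophote's osculating circle.
    dx, dy = -Lx * num / den, -Ly * num / den
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = np.rint(xs + dx), np.rint(ys + dy)
    valid = np.isfinite(cx) & np.isfinite(cy)
    valid &= (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
    acc = np.zeros((h, w))
    np.add.at(acc, (cy[valid].astype(int), cx[valid].astype(int)), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)

# Synthetic stand-in "iris": a smooth dark radial blob centered at (row 20, col 17).
yy, xx = np.mgrid[0:40, 0:40]
img = 1.0 - np.exp(-((yy - 20) ** 2 + (xx - 17) ** 2) / 72.0)
print(isophote_center_votes(img))
```

For a radially symmetric intensity pattern, every displacement vector points at the blob center, so the vote map peaks at (or very near) the true center.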