The characteristic forms of facial expression of human emotional states generalize well across people, because the face is shaped by a common physiological structure and a common layout of facial muscles. This circumstance is one of the main reasons why emotions are manifested on the face in similar ways. From the nature and form of facial expressions it is therefore possible, with high probability, to determine a person's emotional state, allowing for some correction for the cultural characteristics and traditions of particular groups. Building on the existence of these common mimic forms of emotional manifestation, an approach is proposed for creating a model that recognizes emotional manifestations on the human face with relatively low requirements for photo and video capture equipment and with acceptable speed in a video stream. The model is based on hyperplane classification of the mimic manifestations of the major emotional states. One of the main advantages of the proposed approach is its low computational complexity, which makes it possible to recognize changes in a person's emotional state without special equipment, even with low-resolution or long-distance video cameras. In addition, a model developed on the basis of the proposed approach achieves adequate recognition accuracy with low demands on image quality, which considerably extends the scope of practical application. Examples include monitoring drivers while operating vehicles, monitoring operators of complex production processes, and other automated visual surveillance systems.
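The hyperplane classification mentioned above can be illustrated with a minimal sketch: a single linear decision boundary (a hyperplane w·x + b = 0) separating feature vectors of two expression classes, trained with the classic perceptron rule. The feature layout (displacement magnitudes of a few tracked facial points) and the synthetic training data are invented for illustration; the paper's actual features and classifier details are not specified here.

```python
import numpy as np

# Hypothetical feature vectors: per-face displacement magnitudes of a few
# tracked facial points (e.g., mouth corners, brows). Synthetic data,
# invented purely for illustration.
rng = np.random.default_rng(0)
neutral = rng.normal(0.0, 0.05, size=(50, 4))   # little facial movement
smile   = rng.normal(0.5, 0.05, size=(50, 4))   # strong characteristic movement

X = np.vstack([neutral, smile])
y = np.hstack([-np.ones(50), np.ones(50)])      # -1 = neutral, +1 = smile

# Perceptron: learn a hyperplane w.x + b = 0 separating the two classes.
w = np.zeros(4)
b = 0.0
for _ in range(20):                             # a few passes suffice here
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:              # misclassified -> update
            w += yi * xi
            b += yi

pred = np.sign(X @ w + b)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

A linear decision rule like this is what keeps the per-frame cost low: classifying one face reduces to a dot product, which is why such a model can run on a video stream without special hardware.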
The set of detected emotional states is formed in accordance with the tasks at hand; it makes it possible to focus recognition on mimic forms and to group characteristic structural manifestations based on the set of distinguished characteristic features.
Emotional expressions play a crucial role in interpersonal communication and in social life generally. In particular, information-security systems for visual surveillance that aim to recognize human emotional facial states are highly relevant today. This study therefore addresses the problem of identifying the main criteria by which the face expresses emotional manifestations, so that they can be recognized without specialized equipment, for example with low-resolution security surveillance cameras. We propose an information technology for defining the areas of the face that reproduce its emotional appearance. The input to the proposed information technology is a set of videos with detected faces on which the primary emotional states are reproduced. First, the face images are normalized so that they can be compared in a common basis; this is done by centering the face area and normalizing the distance between the eyes. Then, based on the analysis of how point features move across the set of input images, informative points are selected, i.e., those points whose movement during an emotional expression is most significant. At the final stage, the areas of the face (with different bias thresholds) whose changes form the visual perception of emotions are determined, and for each selected region a set of possible states is formed. Finally, the behavior of a person's point features under the manifestation of specific emotions is explored experimentally, and qualitative indicators for these emotions are highlighted. According to the study results, a software product should be built on qualitative criteria for assessing the main areas of the face in order to determine the mimic expression of emotions.
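The two preprocessing steps described above, normalizing faces into one basis via the inter-eye distance and then selecting informative points by displacement threshold, can be sketched as follows. The landmark numbering (indices 0 and 1 as the eye centres) and the coordinate values are assumptions made for this sketch; the paper does not fix a specific landmark scheme.

```python
import numpy as np

def normalize_face(points, target_eye_dist=1.0):
    """Centre the face between the eyes and scale so that the inter-eye
    distance equals target_eye_dist, placing all faces in one basis.
    Assumes points[0] and points[1] are the left/right eye centres."""
    left_eye, right_eye = points[0], points[1]
    centre = (left_eye + right_eye) / 2.0
    scale = target_eye_dist / np.linalg.norm(right_eye - left_eye)
    return (points - centre) * scale

def informative_points(neutral, expressive, threshold=0.05):
    """Indices of points whose displacement between the neutral frame and
    the expressive frame exceeds a bias threshold."""
    disp = np.linalg.norm(expressive - neutral, axis=1)
    return np.flatnonzero(disp > threshold)

# Toy landmarks: two eyes and two mouth corners (pixel coordinates).
neutral = normalize_face(np.array([[80., 100.], [120., 100.],
                                   [90., 160.], [110., 160.]]))
smiling = normalize_face(np.array([[80., 100.], [120., 100.],
                                   [86., 155.], [114., 155.]]))
print(informative_points(neutral, smiling))   # mouth-corner indices
```

Because both frames are expressed in eye-distance units, the same displacement threshold applies regardless of how large the face appears in the camera image, which is what makes the bias thresholds comparable across videos.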
The presented paper proposes a novel computational model for generating facial expressions that mimic human emotional states. The authors aim to create a system that can generate realistic facial expressions for use in human-robot interactions. The proposed model is based on the Facial Action Coding System (FACS), a widely used tool for describing facial expressions; FACS is used in this study to identify the muscles involved in each facial expression and the degree to which each muscle is activated. Several machine-learning techniques were used to learn the relationships between facial muscle activations and emotional states. In particular, hyperplane classification was employed for facial expressions representing the major emotional states. The model's primary advantage lies in its low computational complexity, which enables it to recognize changes in human emotional states through facial expressions without requiring specialized equipment; it remains usable even with low-resolution or long-distance video cameras. The proposed approach is intended for use in control systems for various purposes, including security systems and monitoring drivers while they operate vehicles. The evaluation showed that the proposed model could generate facial expressions similar to those produced by humans, and that human observers recognized these expressions as conveying the intended emotional state. The authors also investigated the effect of different factors on the generation of facial expressions. Overall, the proposed model represents a promising approach for generating realistic facial expressions that mimic human emotional states and could have applications in improving security compliance in sensitive environments. However, potential ethical issues will need to be carefully considered and managed to ensure the responsible use of this technology.