Facial expressions are among the most prominent means of conveying emotion and play a significant role in interpersonal communication. Researchers seek to endow machines with the ability to interpret emotions from facial expressions, as this would make human-computer interaction more efficient. With the objective of effective affect recognition from visual information, we present two dynamic descriptors that can recognise seven principal emotions. The variables of the appearance-based descriptor, FlowCorr, capture intra-class similarity and inter-class difference by quantifying the correlation between the optical flow associated with the image pair and each pre-designed template describing the motion pattern of a different expression. The second, shape-based descriptor, dyn-HOG, computes HOG values on the difference image obtained by subtracting the neutral face from the emotional face, and is demonstrated to be more discriminative than previously used static HOG descriptors for classifying facial expressions. Recognition accuracies obtained with a multi-class support vector machine on the CK+ and KDEF-dyn datasets are competitive with the results of state-of-the-art techniques and with empirical analyses of human emotion cognition.
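The dyn-HOG idea described in this abstract, taking HOG values of the difference between an emotional and a neutral face image, can be sketched as follows. This is a minimal numpy illustration under assumptions: the paper's exact cell size, bin count, and block normalisation are not given here, so the simplified per-cell histogram (without block normalisation) and the function names are hypothetical stand-ins for the real descriptor.

```python
import numpy as np

def hog_descriptor(image, cell=8, bins=9):
    """Simplified HOG: a magnitude-weighted orientation histogram per cell.

    No block normalisation is applied; only the final vector is L2-normalised.
    """
    gy, gx = np.gradient(image.astype(float))        # np.gradient: axis 0 (rows) first
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation in [0, pi)
    h, w = image.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, np.pi), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def dyn_hog(neutral, emotional, cell=8, bins=9):
    """dyn-HOG sketch: HOG of the difference image (emotional minus neutral face)."""
    diff = emotional.astype(float) - neutral.astype(float)
    return hog_descriptor(diff, cell=cell, bins=bins)
```

Because the descriptor is computed on the difference image rather than on the emotional face alone, person-specific appearance largely cancels out and the features emphasise the deformation introduced by the expression, which is the property the abstract credits for its discriminative advantage over static HOG.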
Facial expressions are an integral part of non-verbal, paralinguistic communication, as they provide cues significant for perceiving one's emotional state. Assessing emotions through expressions is an active research domain in computer vision owing to its potential applications in many areas. In this work, an approach is presented in which facial expressions are modelled and analysed with divergence and curl templates derived from dense optical flow, which embody the ideal motion pattern of facial features as an expression unfolds on the face. Two classification schemes, based on a multi-class support vector machine and k-nearest neighbour, are employed for evaluation. Promising results obtained from a comparative analysis of the proposed approach against state-of-the-art techniques on the Extended Cohn-Kanade database, and against human cognition and the pre-trained Microsoft Face application programming interface on the Karolinska Directed Emotional Faces database, validate the effectiveness of the approach.
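The divergence and curl quantities underlying the templates in this abstract can be computed from a dense optical flow field with finite differences. The sketch below is a minimal numpy illustration under assumptions: the paper's actual template construction and matching scheme are not reproduced, and the radial-expansion example (e.g. a mouth-opening-like motion) and function name are illustrative.

```python
import numpy as np

def div_curl(vx, vy):
    """Divergence and (scalar, 2-D) curl of a dense flow field (vx, vy).

    Uses central differences in the interior and one-sided differences at
    the boundaries, as provided by np.gradient.
    """
    dvx_dy, dvx_dx = np.gradient(vx)   # np.gradient returns axis-0 (y) derivative first
    dvy_dy, dvy_dx = np.gradient(vy)
    div = dvx_dx + dvy_dy              # expansion (> 0) or contraction (< 0) of features
    curl = dvy_dx - dvx_dy             # local rotation of the flow
    return div, curl
```

Divergence captures expansion/contraction of facial regions (e.g. the mouth opening in surprise), while curl captures rotational motion; templates built from these two scalar fields summarise the ideal deformation pattern of each expression independently of flow magnitude.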
Facial expressions (FEs) are among the most prominent means of conveying emotion and are pivotal to nonverbal communication. Potential applications in a wide range of areas in computer vision have lent a strong impetus to research on automatic facial expression recognition. This work discusses the effectiveness of two optical flow-based features for modelling the FEs associated with prototypic emotions, based on the pattern of non-rigid, deformable motion of facial components occurring during their portrayal. The discernible motion patterns are categorised into distinct discrete classes, with the descriptive features indicating the global spatial distribution of deformation derived from the dense optical flow field between the emotional and neutral face images. Results obtained from evaluation on images and video clips taken from the Extended Cohn-Kanade, Japanese Female Facial Expression, and Dynamic Karolinska Directed Emotional Faces datasets with multi-class support vector machine and k-nearest neighbour classifiers are competitive with state-of-the-art techniques and concordant with empirical psychological studies in emotion science.
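Matching an observed flow field against per-expression motion templates, as the template-correlation descriptors in these abstracts do, can be sketched with a simple cosine-similarity score. This is a hedged numpy illustration: the papers' exact correlation measure is not specified here, so cosine similarity is a simplified stand-in, and the template names ("brow-raise", "brow-lower") and uniform flow fields are purely hypothetical examples.

```python
import numpy as np

def flow_corr(flow, templates):
    """Score an observed flow field against each expression template.

    flow: (H, W, 2) observed optical flow (vx, vy per pixel).
    templates: dict mapping expression name -> (H, W, 2) ideal motion pattern.
    Returns a dict of cosine-similarity scores in [-1, 1].
    """
    f = flow.ravel()
    nf = np.linalg.norm(f)
    scores = {}
    for name, t in templates.items():
        tv = t.ravel()
        denom = nf * np.linalg.norm(tv)
        scores[name] = float(f @ tv / denom) if denom > 0 else 0.0
    return scores

def classify(flow, templates):
    """Assign the observed flow to the best-matching template class."""
    scores = flow_corr(flow, templates)
    return max(scores, key=scores.get)
```

In practice the per-template scores would be used as a feature vector for a multi-class SVM or k-nearest-neighbour classifier rather than via a bare argmax; the argmax here simply shows that the scores separate opposing motion patterns.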