In an effort to demonstrate that the verbal labeling of emotional experiences obeys lawful principles, we tested the feasibility of using an expert system called the Geneva Emotion Analyst (GEA), which generates predictions based on an appraisal theory of emotion. Several thousand respondents participated in an Internet survey that applied GEA to self-reported emotion experiences. Users recalled appraisals of emotion-eliciting events and labeled the experienced emotion with one or two words, generating a massive data set on realistic, intense emotions in everyday life. For a final sample of 5,969 respondents, we show that GEA achieves a high degree of predictive accuracy by matching a user's appraisal input to one of 13 theoretically predefined emotion prototypes. The first prediction was correct in 51% of cases, and the overall diagnosis was considered at least partially correct or appropriate in more than 90% of all cases. These results support a component process model that encourages focused, hypothesis-guided research on elicitation and differentiation, memory storage and retrieval, and categorization and labeling of emotion episodes. We discuss the implications of these results for the study of emotion terms in natural language semantics.
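The prototype-matching step described above can be sketched as a nearest-prototype classifier: the user's appraisal profile is compared to each predefined emotion prototype and candidates are ranked by distance. The prototype table below is a minimal illustrative assumption (three emotions, four unnamed appraisal criteria), not GEA's actual 13-prototype specification.

```python
import numpy as np

# Hypothetical appraisal prototypes: each emotion is a vector of
# appraisal-criterion values on a 0..1 scale (4 illustrative criteria;
# GEA's actual criteria and prototype values are not reproduced here).
PROTOTYPES = {
    "joy":   np.array([0.9, 0.8, 0.1, 0.7]),
    "fear":  np.array([0.1, 0.2, 0.9, 0.2]),
    "anger": np.array([0.2, 0.1, 0.7, 0.8]),
}

def rank_emotions(appraisal_input):
    """Rank candidate emotions by Euclidean distance to each prototype
    (closest prototype first)."""
    dists = {label: float(np.linalg.norm(appraisal_input - proto))
             for label, proto in PROTOTYPES.items()}
    return sorted(dists, key=dists.get)

print(rank_emotions(np.array([0.85, 0.75, 0.15, 0.65])))  # 'joy' ranked first
```

Returning a full ranking rather than a single label mirrors the abstract's distinction between the "first prediction" (51% correct) and a diagnosis that is "at least partially correct" further down the ranked list.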
Appraisal theory of emotion claims that emotions are not caused by "raw" stimuli as such, but by the subjective evaluation (appraisal) of those stimuli. Studies that analyzed this relation have been dominated by linear models of analysis. These methods are not ideally suited to examine a basic assumption of many appraisal theories, which is that appraisal criteria interact to differentiate emotions, and hence show nonlinear effects. Studies that did model interactions were either limited in scope or exclusively theory-driven simulation attempts. In the present study, we improve on these approaches using data-driven methods from the field of machine learning. We modeled a categorical emotion response as a function of 25 appraisal predictors, using a large data set on recalled emotion experiences (5,901 cases). A systematic comparison of machine learning models on these data supported the interactive nature of the appraisal-emotion relationship, with the best nonlinear model significantly outperforming the best linear model. The interaction structure was found to be moderately hierarchical. Strong main effects of intrinsic valence and goal compatibility appraisal differentiated positive from negative emotions, while more specific emotions (e.g., pride, irritation, despair) were differentiated by interactions involving agency appraisal and norm appraisal.
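The core claim that appraisal criteria interact (and so cannot be captured by additive linear models) can be illustrated with a deliberately simple toy example. Assume, hypothetically, an XOR-like pattern over two binary appraisals (the data, labels, and appraisal names below are illustrative, not the study's): no linear decision function separates the classes, but adding the interaction term does.

```python
import numpy as np

# Toy appraisal data (assumed for illustration): two binary appraisal
# predictors and a class label following an XOR-like pattern that no
# linear boundary can separate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # [appraisal_1, appraisal_2]
y = np.array([0, 1, 1, 0])

def linear_acc(w, b):
    """Accuracy of a linear decision rule sign(X @ w + b)."""
    pred = (X @ w + b > 0).astype(int)
    return (pred == y).mean()

# Exhaustive grid search over linear models: none exceed 75% accuracy.
best = max(linear_acc(np.array([w1, w2]), b)
           for w1 in np.linspace(-2, 2, 9)
           for w2 in np.linspace(-2, 2, 9)
           for b in np.linspace(-2, 2, 9))

# Adding the multiplicative interaction term x1*x2 makes the classes
# perfectly separable with a single linear rule in the expanded space.
Xi = np.column_stack([X, X[:, 0] * X[:, 1]])
pred = (Xi @ np.array([1.0, 1.0, -2.0]) - 0.5 > 0).astype(int)
print(best, (pred == y).mean())  # best linear: 0.75, with interaction: 1.0
```

The study's actual comparison used far richer model families over 25 predictors; this sketch only shows why interaction-capable (nonlinear) models can outperform purely linear ones on such data.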
The common conceptual understanding of emotion is that emotions are multi-componential, including subjective feelings, appraisals, psychophysiological activation, action tendencies, and motor expressions. Emotion perception, however, has traditionally been studied in terms of emotion labels, such as "happy", which do not clearly indicate whether one, some, or all emotion components are perceived. We examine whether emotion percepts are multi-componential and extend previous research by using more ecologically valid, dynamic, and multimodal stimuli and an alternative response measure. The results demonstrate that observers can reliably infer multiple types of information (subjective feelings, appraisals, action tendencies, and social messages) from complex emotion expressions. Furthermore, this finding appears to be robust to changes in response items. The results are discussed in light of their implications for research on emotion perception.
Psychological theories of emotion have often defined an emotion as simultaneous changes in several mental and bodily components. In addition, appraisal theories assume that an appraisal component elicits changes in the other emotion components (e.g., motivational, behavioural, experiential). Neither the componential definition of emotion nor appraisal theory has been systematically translated into paradigms for emotion induction, many of which rely on passive emotion induction without a clear theoretical framework. As a result, the observed emotions are often weak. This study explored the potential of virtual reality (VR) to evoke strong emotions in ecologically valid scenarios that fully engaged the mental and bodily components of the participant. Participants played several VR games and reported on their emotions. Multivariate analyses using hierarchical clustering and multilevel linear modelling showed that participants experienced intense, multi-componential emotions in VR. We identified joy and fear clusters of responses, each involving changes in appraisal, motivation, physiology, feeling, and regulation. Appraisal variables were found to be the most predictive of fear and joy intensities, compared to other emotion components, and were found to explain individual differences across VR scenarios, as predicted by appraisal theory. The results advocate upgraded methodologies for the induction and analysis of emotion processes.
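The clustering step reported above can be sketched as agglomerative (single-linkage) clustering over multi-componential response vectors. The data below are invented for illustration: four episodes described by four component scores, which the procedure groups into the two clusters the study labels joy-like and fear-like.

```python
import numpy as np

# Toy multi-componential response vectors (assumed, not the study's data):
# columns are [appraisal, motivation, physiology, feeling] on a 0..1 scale.
responses = np.array([
    [0.9, 0.8, 0.6, 0.9],   # joy-like episode
    [0.8, 0.9, 0.5, 0.8],   # joy-like episode
    [0.1, 0.2, 0.9, 0.1],   # fear-like episode
    [0.2, 0.1, 0.8, 0.2],   # fear-like episode
])

# Agglomerative single-linkage clustering down to 2 clusters.
clusters = [[i] for i in range(len(responses))]
while len(clusters) > 2:
    # Merge the pair of clusters with the smallest single-linkage
    # (minimum pairwise Euclidean) distance.
    a, b = min(
        ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
        key=lambda ab: min(np.linalg.norm(responses[i] - responses[j])
                           for i in clusters[ab[0]] for j in clusters[ab[1]]),
    )
    clusters[a] += clusters.pop(b)

print(sorted(sorted(c) for c in clusters))  # [[0, 1], [2, 3]]
```

In practice one would use an established implementation (e.g., SciPy's hierarchical clustering) and the study's multilevel models on top; this sketch only shows the grouping logic.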
Most models of automatic emotion recognition use a discrete perspective and a black-box approach, i.e., they output an emotion label chosen from a limited pool of candidate terms on the basis of purely statistical methods. Although these models are successful in emotion classification, a number of practical and theoretical drawbacks limit the range of possible applications. In this paper, the authors suggest the adoption of an appraisal perspective in modeling emotion recognition. The authors propose to use appraisals as an intermediate layer between expressive features (input) and emotion labeling (output). The model would then consist of two parts: first, expressive features would be used to estimate appraisals; second, the resulting appraisals would be used to predict an emotion label. While the second part of the model has already been the object of several studies, the first is unexplored. The authors argue that this model should be built on the basis of both theoretical predictions and empirical results about the link between specific appraisals and expressive features. For this purpose, the authors suggest using the component process model of emotion, which includes detailed predictions of the efferent effects of appraisals on facial expression, voice, and body movements. (DOI: 10.4018/jse.2012010102; International Journal of Synthetic Emotions, 3(1), 18-32, January-June 2012.)
(… Mehu, Pantic, & Scherer, in press). Current results are promising, and we can expect that in the near future these systems will become fully reliable and perform in a satisfactory way. As the detection problem is being solved, attention should now focus on the best model for attributing an emotional meaning.
Indeed, emotion recognition systems can be conceived as consisting of two parts, a detection component and an inference component. The detection component performs the analysis of the facial movements; the inference component attributes an emotional meaning to the movements detected by the first component. While for the detection component there is one recognized standard (FACS), for the inference component we have to turn to emotion psychology, where multiple theoretical models currently co-exist. Most researchers in affective computing choose a pragmatic approach and avoid theoretical controversies, but every system necessarily implies theoretical assumptions (Calvo & D'Mello, 2010). In the next paragraphs we will first present different theoretical models of emotion and discuss their use for automatic emotion recognition. The goal is not to provide an exhaustive review of the available systems, but rather a brief description of the pros and cons of each choice. We will then introduce a specific componential appraisal model of emotion.
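The proposed two-stage architecture (expressive features → appraisal estimates → emotion label) can be sketched as two chained functions. Everything concrete below is an illustrative assumption: the feature names, the feature-to-appraisal mappings, and the appraisal-to-label rules stand in for models that would have to be fitted to the theoretical predictions and empirical data the authors describe.

```python
# Hypothetical two-stage recognition pipeline, following the text's
# architecture. Stage 1: expressive features -> appraisal estimates.
# Stage 2: appraisal pattern -> discrete emotion label.

def estimate_appraisals(features):
    """Stage 1 (illustrative): map expressive-feature intensities (0..1)
    to appraisal estimates. Real systems would learn these mappings."""
    return {
        "novelty": features.get("brow_raise", 0.0),   # brow raise as a novelty cue
        "valence": 1.0 - features.get("frown", 0.0),  # frowning as a negative-valence cue
        "goal_obstruction": features.get("frown", 0.0),
    }

def label_emotion(appraisals):
    """Stage 2 (illustrative): threshold rules over the appraisal pattern."""
    if appraisals["valence"] > 0.5 and appraisals["novelty"] > 0.5:
        return "surprise"
    if appraisals["goal_obstruction"] > 0.5:
        return "anger"
    return "neutral"

def recognize(features):
    """Full pipeline: detection output in, emotion label out."""
    return label_emotion(estimate_appraisals(features))

print(recognize({"brow_raise": 0.9, "frown": 0.1}))  # "surprise"
```

The design point the paper makes is visible in the structure: the appraisal layer is an explicit, inspectable intermediate representation, unlike a black-box classifier that maps features directly to labels.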