While it is possible to observe when another person is having an emotional moment, we also derive information about the affective states of others from what they tell us they are feeling. In an effort to distill the complexity of affective experience, psychologists routinely focus on a simplified subset of subjective rating scales (i.e., dimensions) that capture considerable variability in reported affect: reported valence (i.e., how good or bad?) and reported arousal (i.e., how strong is the emotion you are feeling?). Still, existing theoretical approaches address the basic organization and measurement of these affective dimensions differently. Some approaches organize affect around the dimensions of bipolar valence and arousal (e.g., the circumplex model; Russell, 1980), whereas alternative approaches organize affect around the dimensions of unipolar positivity and unipolar negativity (e.g., the bivariate evaluative model; Cacioppo & Berntson, 1994). In this report, we (1) replicate the data structures observed when ratings are collected according to the two approaches described above, and re-interpret these data to suggest that the relationship between each pair of affective dimensions is conditional on valence ambiguity; then (2) formalize this structure with a mathematical model depicting a valence ambiguity dimension that decreases in range as arousal decreases (a triangle). This model captures variability in affective ratings better than alternative approaches, increasing variance explained from ~60% to over 90% without adding parameters.
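The triangular constraint described above can be sketched numerically. The following is a minimal illustration, not the authors' actual model or parameterization: the function names and the linear relationship between arousal and ambiguity half-width are assumptions made purely to depict the triangle's geometry.

```python
import numpy as np

def ambiguity_halfwidth(arousal, slope=1.0):
    """Hypothetical half-width of the valence-ambiguity dimension at a
    given arousal level: the admissible range collapses toward zero as
    arousal decreases, producing the triangular shape described above."""
    return slope * np.asarray(arousal, dtype=float)

def inside_triangle(ambiguity, arousal, slope=1.0):
    """True when an (ambiguity, arousal) rating pair falls inside the
    triangular region admitted by this sketch of the model."""
    return np.abs(ambiguity) <= ambiguity_halfwidth(arousal, slope)
```

Under this sketch, high-arousal ratings may span the full ambiguity range, while low-arousal ratings are confined near zero ambiguity.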
The aim of this review is to show the fruitfulness of using images of facial expressions as experimental stimuli in order to study how neural systems support biologically relevant learning as it relates to social interactions. Here we consider facial expressions as naturally conditioned stimuli which, when presented in experimental paradigms, evoke activation in amygdala–prefrontal neural circuits that serve to decipher the predictive meaning of the expressions. Facial expressions offer a relatively innocuous strategy with which to investigate these normal variations in affective information processing, as well as the promise of elucidating what role the aberrance of such processing might play in emotional disorders.
Human emotions unfold over time, and affective computing research must prioritize capturing this crucial component of real-world affect. Modeling dynamic emotional stimuli requires solving the twin challenges of time-series modeling and of collecting high-quality time-series datasets. We begin by assessing the state of the art in time-series emotion recognition, and we review contemporary time-series approaches in affective computing, including discriminative and generative models. We then introduce the first version of the Stanford Emotional Narratives Dataset (SENDv1): a set of rich, multimodal videos of self-paced, unscripted emotional narratives, annotated for emotional valence over time. The complex narratives and naturalistic expressions in this dataset provide a challenging test for contemporary time-series emotion recognition models. We demonstrate several baseline and state-of-the-art modeling approaches on the SEND, including a Long Short-Term Memory model and a multimodal Variational Recurrent Neural Network, which perform comparably to the human benchmark. We end by discussing the implications for future research in time-series affective computing.

Specifically, we define time-series modeling as taking in temporally continuous input data and producing temporally continuous output, with an explicit consideration of how information is propagated over time. For instance, in order to engage in such inference, a social robot in conversation with its user would have to take in a continuous stream of sensor data, process it, and reason about its user's emotions over time, perhaps after every second or after every sentence, as well as across many sentences in the conversation and across multiple conversations [11]. Despite the progress that has been made in time-series emotion recognition in the past decade, the field is still far from affective robots that can understand human emotions in daily life.
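As a concrete illustration of this definition of time-series modeling, a minimal recurrent sketch maps a continuous input stream to a continuous output stream while explicitly carrying state forward. This toy function and its weights are invented for illustration; it is not one of the models evaluated on the SEND.

```python
import numpy as np

def toy_recurrent_valence(x_seq, w_in=0.5, w_rec=0.5):
    """Toy recurrent estimator: each output depends on the current input
    and on a hidden state carried forward from all earlier time steps."""
    h = 0.0
    outputs = []
    for x in x_seq:
        # Propagate information over time through the hidden state h.
        h = np.tanh(w_in * x + w_rec * h)
        outputs.append(h)
    return np.array(outputs)
```

Identical inputs at a given time step can yield different outputs depending on the preceding history, which is precisely what distinguishes time-series models from frame-by-frame classifiers.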
What is needed to achieve this ambitious goal? We suggest that the biggest barriers to overcome are (1) the inherent difficulty of building computational time-series models, and (2) the difficulty of collecting high-quality datasets. To address the first gap, we conduct a review covering different machine-learning-based approaches to time-series modeling (Section 2). We begin by discussing the most common time-series techniques in affective computing: deep neural network models, part of a broader class of discriminative models. We also cover generative time-series approaches, which are comparatively less popular within affective computing, but offer interesting modeling capabilities and hold exciting potential for emotion understanding.

We turn next to the second gap: researchers need high-quality time-series datasets on which to train models. These are expensive to construct, in terms of both the production of stimuli and the collection of time-series annotations of emotion and affective labels [12]. There are several existing time-series datasets that have been used by the ...
Anxiety impacts the quality of everyday life and may facilitate the development of affective disorders, possibly through concurrent alterations in neural circuitry. Findings from multimodal neuroimaging studies suggest that trait-anxious individuals may have a reduced capacity for efficient communication between the amygdala and the ventral prefrontal cortex (vPFC). A diffusion-weighted imaging protocol with 61 directions was used to identify lateral and medial amygdala-vPFC white matter pathways. The structural integrity of both pathways was inversely correlated with self-reported levels of trait anxiety. When a mask of these pathways derived from our first dataset was applied to an independent validation dataset, both pathways again showed a consistent inverse relationship with trait anxiety. Importantly, a moderating effect of sex was found, demonstrating that the observed brain-anxiety relationship was stronger in females. These data reveal a potential neuroanatomical mediator of previously documented functional alterations in amygdala-prefrontal connectivity that is associated with trait anxiety, which might prove informative for future studies of psychopathology.
Human amygdala function has been traditionally associated with processing the affective valence (negative vs positive) of an emotionally charged event, especially those that signal fear or threat. However, this account of human amygdala function can be explained by alternative views, which posit that the amygdala might be tuned to either (1) general emotional arousal (activation vs deactivation) or (2) specific emotion categories (fear vs happy). Delineating the pure effects of valence independent of arousal or emotion category is a challenging task, given that these variables naturally covary under many circumstances. To circumvent this issue and test the sensitivity of the human amygdala to valence values specifically, we measured the dimension of valence within the single facial expression category of surprise. Given the inherent valence ambiguity of this category, we show that surprised expression exemplars are attributed valence and arousal values that are uniquely and naturally uncorrelated. We then present fMRI data from both sexes, showing that the amygdala tracks these consensus valence values. Finally, we provide evidence that these valence values are linked to specific visual features of the mouth region, isolating the signal by which the amygdala detects this valence information.

There is an open question as to whether human amygdala function tracks the valence value of cues in the environment, as opposed to either a more general emotional arousal value or a more specific emotion category distinction. Here, we demonstrate the utility of surprised facial expressions because exemplars within this emotion category take on valence values spanning the dimension of bipolar valence (positive to negative) at a consistent level of emotional arousal. Functional neuroimaging data showed that amygdala responses tracked the valence of surprised facial expressions, unconfounded by arousal.
Furthermore, a machine learning classifier identified particular visual features of the mouth region that predicted this valence effect, isolating the specific visual signal that might be driving this neural valence response.
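A classifier of the kind mentioned above can be sketched as follows. This is not the authors' pipeline: the two "mouth features" (opening and corner curvature), the synthetic labels, and the logistic-regression setup are invented purely to illustrate how a linear classifier could map mouth-region features to a positive/negative valence label.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical mouth features per face: [mouth_opening, corner_curvature].
X = rng.normal(size=(200, 2))
# Synthetic labeling rule: upturned corners -> positive valence (label 1).
y = (X[:, 1] > 0).astype(float)

# Logistic regression fit by gradient descent on the cross-entropy loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(positive)
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == y).mean()
```

After fitting, the weight on the corner-curvature feature dominates, mirroring the idea that a specific visual feature of the mouth can carry the valence signal.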
Oversensitivity to uncertain future threat is usefully conceptualized as intolerance of uncertainty (IU). Neuroimaging studies of IU to date have largely focused on its relationship with brain function, but few studies have documented the association between IU and the quantitative properties of brain structure. Here, we examined potential gray and white matter brain structural correlates of IU from 61 healthy participants. Voxel-based morphometric analysis highlighted a robust positive correlation between IU and striatal volume, particularly the putamen. Conversely, tract-based spatial statistical analysis showed no evidence for a relationship between IU and the structural integrity of white matter fiber tracts. Current results converge upon findings from individuals with anxiety disorders such as obsessive-compulsive disorder (OCD) or generalized anxiety disorder (GAD), where abnormally increased IU and striatal volume are consistently reported. They also converge with neurobehavioral data implicating the putamen in predictive coding. Most notably, the relationship between IU and striatal volume is observed at a preclinical level, suggesting that the volumetric properties of the striatum reflect the processing of uncertainty per se as it relates to this dimensional personality characteristic – such a relationship could then potentially contribute to the onset of OCD or GAD, rather than being unique to their pathophysiology.
Valence is a principal dimension by which we understand emotional experiences, but oftentimes events are not easily classified as strictly positive or negative. Inevitably, individuals vary in how they tend to interpret the valence of ambiguous situations. Surprised facial expressions are one example of a well-defined, ambiguous affective event that induces trait-like differences in the propensity to form a positive or negative interpretation. To investigate the nature of this affective bias, we asked participants to organize emotional facial expressions (surprised, happy, sad) into positive/negative categories while recording their hand-movement trajectories en route to each response choice. We found that positivity-negativity bias resulted in differential hand movements for modal versus non-modal response trajectories, such that when an individual categorized a surprised face according to his or her non-modal interpretation (e.g., a negatively biased individual selecting a positive interpretation), the hand showed an enhanced spatial attraction to the alternative, modal response option (e.g., negative) in the opposite corner of the computer screen (Experiment 1). Critically, we also demonstrate that this asymmetry between modal versus non-modal response trajectories is mitigated when the valence interpretations are made under a cognitive load, although the frequency of modal interpretations is unaffected by the load (Experiment 2). These data inform a body of seemingly disparate findings regarding the effect of cognitive effort on affective responses, by showing within a single paradigm that varying cognitive load selectively alters the dynamic motor movements involved in indicating affective interpretations, whereas the interpretations themselves remain consistent across variable cognitive loads.
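The spatial attraction described above is commonly quantified in mouse-tracking work as the maximum deviation of the cursor path from the straight line between its start and end points. A minimal sketch follows; the function name and input format are assumptions, not the authors' analysis code.

```python
import numpy as np

def max_deviation(trajectory):
    """Maximum perpendicular deviation of a 2-D cursor trajectory from
    the straight line joining its start and end points. Larger values
    indicate stronger spatial attraction toward the unselected response
    option in the opposite corner of the screen."""
    traj = np.asarray(trajectory, dtype=float)
    start, end = traj[0], traj[-1]
    direction = end - start
    direction = direction / np.linalg.norm(direction)
    rel = traj - start
    # Perpendicular distance via the 2-D cross product with the unit direction.
    return np.abs(rel[:, 0] * direction[1] - rel[:, 1] * direction[0]).max()
```

Comparing this measure for modal versus non-modal trials, within and across load conditions, would express the asymmetry reported in Experiments 1 and 2.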
The events we experience day to day can be described in terms of their affective quality: some are rewarding, others are upsetting, and still others are inconsequential. These natural distinctions reflect an underlying representational structure used to classify the affective quality of events. In affective psychology, many experiments model this representational structure with two dimensions, using either the dimensions of valence and arousal, or alternatively, the dimensions of positivity and negativity. Using an fMRI dataset, we show that these affective dimensions are not strictly linear combinations of each other, and that it is critical to examine the data using all four dimensions. Our findings include (1) a gradient representation of valence anatomically organized along the fusiform gyrus, and (2) distinct subregions within bilateral amygdala tracking arousal versus negativity. Importantly, these patterns would have remained concealed had either of the prevailing 2-dimensional approaches been adopted a priori.