Human neuroimaging research has transitioned from mapping local effects to developing predictive models of mental events that integrate information distributed across multiple brain systems. Here we review work demonstrating how multivariate predictive models have been used to provide quantitative, falsifiable predictions; establish mappings between brain and mind with larger effects than traditional approaches; and help explain how the brain represents mental constructs and processes. Although there is increasing progress toward the first two of these goals, models are only beginning to address the third objective. By explicitly identifying gaps in knowledge, research programs can move deliberately and programmatically toward the goal of identifying brain representations underlying mental states and processes.
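To make the logic of such multivariate predictive models concrete, a minimal sketch follows: a cross-validated linear classifier maps a distributed voxel pattern to a mental-state label, yielding an out-of-sample prediction that can be falsified. The data, voxel count, and signal strength below are simulated assumptions, not results from any study reviewed here.

```python
# Minimal sketch of a multivariate predictive model: a cross-validated linear
# classifier maps a distributed activation pattern (many voxels) to a binary
# mental-state label. All data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500                         # hypothetical trial and voxel counts
y = rng.integers(0, 2, n_trials)                      # binary mental-state label (e.g., state A vs. B)
signal = np.outer(y - y.mean(), rng.normal(size=n_voxels)) * 0.5
X = signal + rng.normal(size=(n_trials, n_voxels))    # distributed, noisy pattern

clf = LogisticRegression(max_iter=1000)               # regularized linear decoder
acc = cross_val_score(clf, X, y, cv=5).mean()         # out-of-sample (falsifiable) accuracy
print(f"cross-validated decoding accuracy: {acc:.2f}")
```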
Understanding how emotions are represented neurally is a central aim of affective neuroscience. Despite decades of neuroimaging efforts addressing this question, it remains unclear whether emotions are represented as distinct entities, as predicted by categorical theories, or are constructed from a smaller set of underlying factors, as predicted by dimensional accounts. Here, we capitalize on multivariate statistical approaches and computational modeling to directly evaluate these theoretical perspectives. We elicited discrete emotional states using music and films during functional magnetic resonance imaging scanning. Distinct patterns of neural activation predicted the emotion category of stimuli and tracked subjective experience. Bayesian model comparison revealed that combining dimensional and categorical models of emotion best characterized the information content of activation patterns. Surprisingly, categorical and dimensional aspects of emotion experience captured unique and opposing sources of neural information. These results indicate that diverse emotional states are poorly differentiated by simple models of valence and arousal, and that activity within separable neural systems can be mapped to unique emotion categories.
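The Bayesian model comparison described above can be illustrated schematically. The sketch below is not the study's actual pipeline: it fits categorical, dimensional, and combined encoding models to a simulated neural response and compares them with the Bayesian information criterion as a rough stand-in for model evidence; all regressors, category counts, and data are assumptions.

```python
# Illustrative comparison of categorical, dimensional, and combined encoding
# models using BIC (lower is better). Data and regressors are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 300
categories = rng.integers(0, 5, n_trials)             # 5 hypothetical emotion categories
X_cat = np.eye(5)[categories]                         # one-hot categorical design matrix
X_dim = rng.normal(size=(n_trials, 2))                # valence and arousal ratings (assumed)
y = (X_cat @ rng.normal(size=5)                       # simulated neural response with both
     + X_dim @ np.array([0.3, 0.2])                   # categorical and dimensional signal
     + rng.normal(scale=1.0, size=n_trials))

def bic(X, y):
    """Ordinary least-squares fit, then the Bayesian information criterion."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = len(y), X.shape[1]
    return n * np.log(resid.var()) + k * np.log(n)

models = {"categorical": X_cat, "dimensional": X_dim,
          "combined": np.column_stack([X_cat, X_dim])}
for name, X in models.items():
    print(f"{name:12s} BIC = {bic(X, y):8.1f}")
```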
A central, unresolved problem in affective neuroscience is understanding how emotions are represented in nervous system activity. After prior localization approaches largely failed, researchers began applying multivariate statistical tools to reconceptualize how emotion constructs might be embedded in large-scale brain networks. Findings from pattern analyses of neuroimaging data show that affective dimensions and emotion categories are uniquely represented in the activity of distributed neural systems that span cortical and subcortical regions. Results from multiple-category decoding studies are incompatible with theories postulating that specific emotions emerge from the neural coding of valence and arousal. This ‘new look’ into emotion representation promises to improve and reformulate neurobiological models of affect.
The medial frontal cortex (MFC), including anterior midcingulate cortex, has been linked to multiple psychological domains, including cognitive control, pain, and emotion. However, it is unclear whether this region encodes representations of these domains that are generalizable across studies and subdomains. Additionally, if there are generalizable representations, do they reflect a single underlying process shared across domains, or multiple domain-specific processes? We decomposed multivariate patterns of fMRI activity from 270 participants across 18 studies into study-specific, subdomain-specific, and domain-specific components, and identified latent multivariate representations that generalized across subdomains but were specific to each domain. Pain representations were localized to anterior midcingulate cortex, negative emotion representations to ventromedial prefrontal cortex, and cognitive control representations to portions of the dorsal midcingulate. These findings provide evidence for MFC representations that generalize across studies and subdomains, but are specific to distinct psychological domains rather than reducible to a single underlying process.
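The generalization test at the core of this design can be sketched as leave-one-study-out decoding: a domain classifier is trained on patterns from all studies but one and evaluated on the held-out study. The data, study size, and voxel counts below are simulated placeholders, not the published dataset or pipeline.

```python
# Hedged sketch of testing whether a domain representation generalizes across
# studies: leave-one-study-out cross-validation of a domain classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(2)
n_studies, per_study, n_voxels = 18, 15, 300
study = np.repeat(np.arange(n_studies), per_study)    # study membership (grouping factor)
domain = rng.integers(0, 3, n_studies)[study]         # pain / negative emotion / cognitive control
pattern = np.eye(3)[domain] @ rng.normal(size=(3, n_voxels))  # domain-specific signal (assumed)
X = pattern + rng.normal(size=(len(domain), n_voxels))

logo = LeaveOneGroupOut()                             # each fold holds out one entire study
acc = cross_val_score(LogisticRegression(max_iter=1000), X, domain,
                      cv=logo, groups=study).mean()
print(f"leave-one-study-out domain decoding accuracy: {acc:.2f}")
```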
The role of inferior frontal cortex in coping with emotional distracters presented concurrently with a working memory task was investigated using event-related functional magnetic resonance imaging. The study yielded two main findings: (i) processing of emotional distracters was associated with enhanced functional coupling between the amygdala and the inferior frontal cortex and (ii) the inferior frontal cortex showed a left-lateralized activation pattern discriminating successful from unsuccessful trials in the presence of emotional distraction. These findings provide evidence that coping with emotional distraction entails interactions between brain regions responsible for detection and inhibition of emotional distraction, and identify a hemispheric specialization in the inferior frontal cortex in controlling the impact of distracting emotions on cognitive performance (left hemisphere) vs. controlling the subjective feeling of being distracted (right hemisphere).
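As a schematic illustration of the functional-coupling result, the sketch below compares the correlation between amygdala and inferior frontal signals under emotional versus neutral distraction. The signals are simulated, and this is a simplified stand-in for the model-based connectivity analysis reported in the study.

```python
# Toy illustration of condition-dependent functional coupling: correlation of
# two regions' signals under emotional vs. neutral distraction (simulated data).
import numpy as np

rng = np.random.default_rng(3)
n_timepoints = 200
shared = rng.normal(size=n_timepoints)                       # common fluctuation under emotional distraction
amygdala_emotional = shared + 0.5 * rng.normal(size=n_timepoints)
ifc_emotional = shared + 0.5 * rng.normal(size=n_timepoints)
amygdala_neutral = rng.normal(size=n_timepoints)             # no shared component in the neutral condition
ifc_neutral = rng.normal(size=n_timepoints)

r_emotional = np.corrcoef(amygdala_emotional, ifc_emotional)[0, 1]
r_neutral = np.corrcoef(amygdala_neutral, ifc_neutral)[0, 1]
print(f"amygdala-IFC coupling: emotional r={r_emotional:.2f}, neutral r={r_neutral:.2f}")
```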
While much research has elucidated the neurobiology of fear learning, the neural systems supporting the generalization of learned fear are unknown. Using functional magnetic resonance imaging (fMRI), we show that regions involved in the acquisition of fear support the generalization of fear to stimuli that are similar to a learned threat, but vary in fear intensity value. Behaviorally, subjects retrospectively misidentified a learned threat as a more intense stimulus and expressed greater skin conductance responses (SCR) to generalized stimuli of high intensity. Brain activity related to intensity-based fear generalization was observed in the striatum, insula, thalamus/periaqueductal gray, and subgenual cingulate cortex. The psychophysiological expression of generalized fear correlated with amygdala activity, and connectivity between the amygdala and extrastriate visual cortex was correlated with individual differences in trait anxiety. These findings reveal the brain regions and functional networks involved in flexibly responding to stimuli that resemble a learned threat. These regions may comprise an intensity-based fear generalization circuit that underlies retrospective biases in threat value estimation and overgeneralization of fear in anxiety disorders.
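To illustrate how an intensity-based generalization gradient can be quantified, the sketch below averages skin conductance responses over stimulus intensity by fitting a slope and correlates trial-wise SCR with a simulated amygdala response. All values and quantities are synthetic assumptions, not the study's data.

```python
# Hedged sketch of an intensity-based generalization gradient: SCR increases
# with stimulus intensity (assumed), and trial-wise SCR is related to a
# simulated amygdala response.
import numpy as np

rng = np.random.default_rng(4)
n_trials = 150
intensity = rng.integers(1, 6, n_trials)                      # 5 generalization levels around the learned threat
scr = 0.1 * intensity + rng.normal(scale=0.2, size=n_trials)  # skin conductance response per trial
amygdala = 0.5 * scr + rng.normal(scale=0.2, size=n_trials)   # amygdala signal tracking psychophysiology

slope = np.polyfit(intensity, scr, 1)[0]                      # generalization gradient (slope over intensity)
r = np.corrcoef(scr, amygdala)[0, 1]
print(f"SCR gradient: {slope:.3f}; SCR-amygdala correlation: r = {r:.2f}")
```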
Theorists have suggested that emotions are canonical responses to situations ancestrally linked to survival. If so, then emotions may be afforded by features of the sensory environment. However, few computational models describe how combinations of stimulus features evoke different emotions. Here, we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using more than 25,000 images and movies and show that image content is sufficient to predict the category and valence of human emotion ratings. In two functional magnetic resonance imaging studies, we demonstrate that patterns of human visual cortex activity encode emotion category–related model output and can decode multiple categories of emotional experience. These results suggest that rich, category-specific visual features can be reliably mapped to distinct emotions, and that they are coded in distributed representations within the human visual system.
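For orientation, a toy convolutional network mapping an image to scores over 11 emotion categories is sketched below. The architecture, layer sizes, and input shape are assumptions for illustration; the published model was a deep network trained on a large set of labeled images, not this miniature.

```python
# Illustrative (not the published) convolutional network: image in, scores
# over 11 emotion categories out.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, n_categories: int = 11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_categories),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))      # category scores (logits)

model = EmotionCNN()
images = torch.randn(4, 3, 64, 64)                    # a batch of placeholder RGB images
probs = model(images).softmax(dim=1)                  # predicted emotion-category probabilities
print(probs.shape)                                    # torch.Size([4, 11])
```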
Experimental studies of conditioned learning reveal activity changes in the amygdala and unimodal sensory cortex underlying fear acquisition to simple stimuli. However, real-world fears typically involve complex stimuli represented at the category level. A consequence of category-level representations of threat is that aversive experiences with particular category members may lead one to infer that related exemplars likewise pose a threat, despite variations in physical form. Here, we examined the effect of category-level representations of threat on human brain activation using 2 superordinate categories (animals and tools) as conditioned stimuli. Hemodynamic activity in the amygdala and category-selective cortex was modulated by the reinforcement contingency, leading to widespread fear of different exemplars from the reinforced category. Multivariate representational similarity analyses revealed that activity patterns in the amygdala and object-selective cortex were more similar among exemplars from the threat versus safe category. Learning to fear animate objects was additionally characterized by enhanced functional coupling between the amygdala and fusiform gyrus. Finally, hippocampal activity co-varied with object typicality and amygdala activation early during training. These findings provide novel evidence that aversive learning can modulate category-level representations of object concepts, thereby enabling individuals to express fear to a range of related stimuli.
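The representational similarity logic can be illustrated with a short sketch: correlate activity patterns across exemplars and compare mean within-category similarity for the threat-paired versus the safe category. Patterns, exemplar counts, and voxel counts below are simulated placeholders, not the study's data.

```python
# Minimal representational similarity sketch: within-category pattern
# similarity for a threat-paired vs. a safe category (simulated patterns).
import numpy as np

rng = np.random.default_rng(5)
n_exemplars, n_voxels = 20, 100
threat_common = rng.normal(size=n_voxels)             # shared component induced by learning (assumed)
threat = threat_common + rng.normal(size=(n_exemplars, n_voxels))
safe = rng.normal(size=(n_exemplars, n_voxels))       # no shared component for the safe category

def mean_pairwise_similarity(patterns: np.ndarray) -> float:
    """Average off-diagonal correlation between exemplar activity patterns."""
    r = np.corrcoef(patterns)
    return r[np.triu_indices_from(r, k=1)].mean()

print(f"threat-category similarity: {mean_pairwise_similarity(threat):.2f}")
print(f"safe-category similarity:   {mean_pairwise_similarity(safe):.2f}")
```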