2019
DOI: 10.1101/869578
Preprint

Decoding of emotion expression in the face, body and voice reveals sensory modality specific representations

Abstract: A central issue in affective science is whether the brain represents the emotional expressions of faces, bodies and voices as abstract categories in which auditory and visual information converge […] are recognized effortlessly and responded to spontaneously when rapid adaptive actions are required. The specifics of the subjective experience in the natural environment determine which affective signal dominates and triggers the adaptive behavior. Rarely are the face, th…

Cited by 7 publications (5 citation statements)
References 41 publications
“…The anatomical labelling of the resulting clusters was performed according to the atlas of Duvernoy (Duvernoy, 1999) for a more reliable localization. Univariate results are not reported in this paper, but see Vaessen et al. (2019) or Figure SR3 in the Supplementary …”
mentioning
confidence: 85%
“…This improvement in prediction suggests that posterior STS encodes both types of representations: those that are more specific to the visual modality, as well as those that are more abstract and tied to emotion categories. Debate continues over which set of variables is encoded in this region: whether it is specialized for processing facial movement, or whether it takes modality-specific information as input and maps it to supramodal representations of emotion categories 16,47. The results of our study support the idea that neural populations in posterior STS encode both types of representations: those associated with specific modalities and supramodal abstract emotion categorizations.…”
Section: Discussion
mentioning
confidence: 99%
“…Our results additionally shed light on the role of posterior STS in face processing. Debate continues over which set of variables is encoded in this region: whether it is specialized for processing facial movement, or whether it takes modality-specific information as input and maps it to supramodal representations of emotion categories 17,70. Across two studies we found that abstract emotion features derived from both facial expressions and visual context predict unique components of posterior STS activity; that is, the joint encoding model based on late layers from both EmoFAN and EmoNet predicted a greater proportion of posterior STS activity than either model alone.…”
Section: Discussion
mentioning
confidence: 99%
“…Facial muscle movements appear to accompany affective states, which implies that these expressions can be related to discrete emotions (De Gelder, 2006; Ko, 2018; Vaessen et al., 2019). In this sense, we have reliable and consistent access to the emotional component through facial expressions.…”
Section: Facial Expressions and Emotion Detection (FER-AI)
unclassified