2020
DOI: 10.1016/j.neuroimage.2020.117258

Distinct dimensions of emotion in the human brain and their representation on the cortical surface


Cited by 39 publications (54 citation statements)
References: 67 publications
“…The significant positive correlation between the PG score and the CNN sequential layers indicates that the pathway for the vision-to-value transformation revealed by the IO-CNNs is aligned with the PG. Therefore, concrete, unimodal sensory information is likely to be sequentially transformed into abstract, transmodal cognitive representations through hierarchical processing along the PG, confirming the meta-analysis in (13) and studies on intracranial electric stimulation (22) and task-related fMRI scans based on encoding models (23, 24). These results also indicate that the hierarchical vision-to-value transformation is a global process involving areas across the whole brain.…”
Section: Results (supporting)
confidence: 56%
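The statistical relationship described in this statement can be illustrated with a minimal sketch. The code below is not the cited authors' pipeline; it assumes two hypothetical arrays, pg_score (one principal-gradient value per cortical parcel) and best_layer (the index of the CNN layer whose features best predict that parcel's activity), filled here with placeholder data, and simply rank-correlates them.

```python
# Minimal sketch (assumed inputs, not the cited authors' code): test whether
# parcels best predicted by deeper CNN layers also sit higher on the
# principal gradient (PG) by rank-correlating the two quantities.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_parcels = 400                                  # hypothetical number of cortical parcels
pg_score = rng.normal(size=n_parcels)            # placeholder PG score per parcel
best_layer = rng.integers(1, 9, size=n_parcels)  # placeholder best-predicting CNN layer index

rho, p = spearmanr(pg_score, best_layer)
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```

A significant positive rho in such a setup is what the quoted statement describes as alignment between the vision-to-value transformation and the PG.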
“…[Table excerpt: emotion categories, measures, and studies]
Amusement: ratings, audience laughter (Moran et al., 2004; Goldin et al., 2005; Sawahata et al., 2013; Jääskeläinen et al., 2016; Iidaka, 2017; Tu et al., 2019)
[category label not recovered]: Raz et al., 2012, 2014, 2016; Schlochtermeier et al., 2017; Sachs et al., 2020
Suspense: ratings (Naci et al., 2014; Lehne et al., 2015)
Basic emotions: ratings (Lettieri et al., 2019)
Other emotion categories: ratings (Horikawa et al., 2020; Koide-Majima et al., 2020; Chang et al., 2021)
Emotional alignment…”
Section: Emotion Categories (mentioning)
confidence: 99%
“…Human ratings are currently the main method for accessing and modeling observers' emotional experiences. Most naturalistic studies in affective neuroimaging have collected self-report ratings of some experienced aspect of emotion, including affective dimensions (valence and arousal: Wallentin et al., 2011; Nummenmaa et al., 2012, 2014b; Young et al., 2017; Smirnov et al., 2019; Gruskin et al., 2020; multiple affective dimensions: Horikawa et al., 2020; Koide-Majima et al., 2020) or the intensity of categorical emotion experiences (Goldin et al., 2005; Raz et al., 2012; Naci et al., 2014; Jacob et al., 2018; Lettieri et al., 2019; Horikawa et al., 2020; Hudson et al., 2020). Interoceptive observer features, including moments of acute fear onset leading to a startle reflex (Hudson et al., 2020), have also been extracted with ratings.…”
Section: Emotion Categories (mentioning)
confidence: 99%
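As a generic illustration of how such self-report ratings are used to model brain responses, the sketch below fits a simple ridge-regression encoding model that predicts each voxel's time course from time-resolved emotion ratings. The inputs (ratings, bold) are placeholder arrays and the model choice is an assumption for illustration, not the procedure of any specific study cited above.

```python
# Generic ratings-based encoding model (illustrative placeholder data only):
# predict voxel time courses from time-resolved emotion ratings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_tr, n_dims, n_voxels = 600, 4, 1000
ratings = rng.normal(size=(n_tr, n_dims))   # e.g. valence, arousal, plus category intensities
bold = rng.normal(size=(n_tr, n_voxels))    # placeholder fMRI data (time points x voxels)

X_train, X_test, y_train, y_test = train_test_split(
    ratings, bold, test_size=0.2, shuffle=False)
model = Ridge(alpha=10.0).fit(X_train, y_train)
pred = model.predict(X_test)

# Voxel-wise accuracy: correlation between predicted and observed held-out time courses.
r = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean voxel-wise r = {np.mean(r):.3f}")
```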
“…Then, using dimension reduction techniques, they assess the representational similarities of the modelled dimensions, or the representational similarities of the brain activation patterns associated with each dimension. Such semantic brain representations have been mapped for visual objects and actions 15, language 16, and experienced emotions 17. A recent study found that language-based representations of social features converge with the visual representations at the border of the occipital cortex, and that the language-based representations are located anterior to the visual representations 18.…”
Section: Introduction (mentioning)
confidence: 99%
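The representational-similarity logic in this passage can be made concrete with a short example: build one dissimilarity matrix from the modelled dimensions and another from the corresponding brain activation patterns, then correlate them. All arrays below are hypothetical placeholders, not data from the studies cited in the quoted text.

```python
# Minimal representational similarity analysis (RSA) sketch with placeholder data:
# compare the dissimilarity structure of modelled feature dimensions with the
# dissimilarity structure of the activation patterns they are associated with.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_features, n_voxels = 30, 10, 500
features = rng.normal(size=(n_conditions, n_features))  # modelled dimensions per condition
patterns = rng.normal(size=(n_conditions, n_voxels))    # activation pattern per condition

rdm_features = pdist(features, metric="correlation")    # condensed condition-by-condition RDM
rdm_brain = pdist(patterns, metric="correlation")

rho, p = spearmanr(rdm_features, rdm_brain)
print(f"RSA correlation: rho = {rho:.3f}, p = {p:.3g}")
```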