2017
DOI: 10.1002/hbm.23578

Decoding facial expressions based on face‐selective and motion‐sensitive areas

Abstract: Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sens…

Cited by 30 publications (37 citation statements)
Citation Types: 5 supporting, 32 mentioning, 0 contrasting
Years Published: 2018–2024
References 58 publications
“…Furthermore, both damage to the somatosensory cortex (Adolphs et al., 2000) and its inactivation by transcranial magnetic stimulation (TMS; Pourtois et al., 2004) impair the recognition of emotions from facial expressions, suggesting that somatomotor embodiment of seen emotions supports their recognition. In line with these results, emotional facial expressions can be successfully decoded from motor brain regions (Liang et al., 2017). Yet strong support for the embodied-recognition view would require showing that (i) both displaying and seeing different facial expressions trigger expression-specific, discrete neural signatures in the somatomotor system and that (ii) these expression-specific neural signatures correspond between displaying and observing the expressions.…”
Section: Introduction (supporting, confidence: 59%)
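The correspondence test this statement calls for maps naturally onto a train-on-one-condition, test-on-the-other classification scheme. Below is a minimal sketch of that cross-decoding logic in Python with scikit-learn, run on synthetic data; the trial counts, voxel counts, and four-way expression labels are illustrative assumptions, not the cited studies' actual design.

# Cross-condition decoding sketch: train on patterns recorded while
# participants DISPLAY expressions, test on patterns recorded while
# they OBSERVE expressions. All data below are synthetic placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200           # hypothetical trial/voxel counts
labels = rng.integers(0, 4, n_trials)   # four expression categories

# Simulate a shared expression-specific signal plus independent noise,
# so the two conditions carry corresponding neural signatures.
signatures = rng.normal(size=(4, n_voxels))
display_patterns = signatures[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))
observe_patterns = signatures[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

clf = LinearSVC().fit(display_patterns, labels)
print(f"cross-decoding accuracy: {clf.score(observe_patterns, labels):.2f} (chance = 0.25)")

Above-chance accuracy in such a test would indicate that displaying and observing share expression-specific patterns, which is precisely the correspondence the quoted passage asks for.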
“…Said et al., 2010; Peelen et al., 2010; Harry et al., 2013; Wegrzyn et al., 2015), we also confirmed that viewing the facial expressions was associated with distinct expression-specific activation patterns, particularly in the fusiform and inferior occipital cortices, V5, and STS. However, seen expressions could also be successfully decoded from regional activation patterns in the somatosensory (see also Kragel & LaBar, 2016) and motor cortices (see also Liang et al., 2017), and from components of the emotion circuit (amygdala, ACC), suggesting that expression-specific affective and somatomotor codes are also activated during facial expression perception.…”
Section: Discussion (mentioning, confidence: 99%)
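Region-of-interest decoding of the kind reported here is typically run as MVPA within an ROI with leave-one-run-out cross-validation. A minimal scikit-learn sketch on synthetic data follows; the run structure, ROI size, and label set are assumptions for illustration, not the study's actual parameters.

# Within-ROI decoding sketch with leave-one-run-out cross-validation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(1)
n_runs, trials_per_run, n_voxels = 6, 20, 150
n_trials = n_runs * trials_per_run
labels = rng.integers(0, 4, n_trials)                 # four expressions
runs = np.repeat(np.arange(n_runs), trials_per_run)   # fold assignment

# Expression-specific spatial signal plus trial-by-trial noise.
patterns = rng.normal(size=(4, n_voxels))[labels] \
         + rng.normal(scale=2.0, size=(n_trials, n_voxels))

scores = cross_val_score(LinearSVC(), patterns, labels,
                         groups=runs, cv=GroupKFold(n_splits=n_runs))
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.25)")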
“…Functional images were acquired using a 3.0 T Siemens scanner at Yantai Hospital Affiliated to Binzhou Medical University with a twenty-channel head coil. Foam pads and earplugs were used to reduce head motion and scanner noise (Liang et al., 2017). For the functional scans, an echo-planar imaging (EPI) sequence was used (T2*-weighted gradient-echo sequence) with the following parameters: TR (repetition time) = 2000 ms, TE (echo time) = 30 ms, voxel size = 3.1 mm × 3.1 mm × 4.0 mm, matrix size = 64 × 64, 33 axial slices, 0.6 mm slice gap, FA (flip angle) = 90°.…”
Section: Methods (mentioning, confidence: 99%)
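For orientation, the quoted EPI geometry pins down the in-plane field of view and the axial coverage; the short computation below simply restates the quoted parameters, and the derived numbers are ours rather than the paper's.

# Derived geometry from the quoted EPI parameters (arithmetic only).
voxel = (3.1, 3.1, 4.0)    # mm: in-plane x, in-plane y, slice thickness
matrix = (64, 64)          # acquisition matrix
n_slices, gap = 33, 0.6    # axial slices and inter-slice gap in mm

fov_x, fov_y = matrix[0] * voxel[0], matrix[1] * voxel[1]
coverage = n_slices * voxel[2] + (n_slices - 1) * gap
print(f"in-plane FOV: {fov_x:.1f} x {fov_y:.1f} mm")   # 198.4 x 198.4 mm
print(f"axial coverage: {coverage:.1f} mm")            # 151.2 mm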
“…Behavioral studies have shown that intact bodies are perceived visually better than isolated body parts (Soria Bauser and Suchan, 2013). However, the use of static and neutral images in previous studies has limited the interpretation of the data (Liang et al., 2017). Thus, it remains unclear how the combination of faces and bodies is influenced by dynamic emotional information, which may activate just one specific network.…”
Section: Introduction (mentioning, confidence: 99%)
“…However, with the development of fMRI data-analysis approaches, the coactivation patterns of multiple voxels can now be examined. Compared with the traditional measure of mean response magnitude, voxel-by-voxel activation patterns provide richer information about neural representations, and at a finer scale (Haynes and Rees, 2006; Norman et al., 2006; Liang et al., 2017). The two scenarios suggest different predictions for the pattern associations.…”
Section: Introduction (mentioning, confidence: 99%)
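The contrast drawn here can be made concrete with a toy simulation: two conditions constructed to have the same regional mean response but different fine-grained spatial patterns are invisible to the traditional mean-magnitude measure yet separable by a pattern classifier. A minimal sketch on synthetic data, with all numbers illustrative:

# Mean magnitude vs. multivoxel pattern: same mean, different pattern.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 100, 60
labels = np.repeat([0, 1], n_trials // 2)

# A zero-mean spatial pattern flipped in sign between conditions:
# equal average activation, opposite voxel-wise layout.
pattern = rng.normal(size=n_voxels)
pattern -= pattern.mean()
sign = np.where(labels == 0, 1.0, -1.0)[:, None]
data = sign * pattern + rng.normal(size=(n_trials, n_voxels))

print(f"mean response A vs B: {data[labels == 0].mean():.3f} vs {data[labels == 1].mean():.3f}")
acc = cross_val_score(LinearSVC(), data, labels, cv=5).mean()
print(f"pattern-decoding accuracy: {acc:.2f} (chance = 0.50)")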