2016
DOI: 10.1016/j.neuroimage.2016.01.045

Face-selective regions differ in their ability to classify facial expressions

Abstract: Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they …

Cited by 58 publications (57 citation statements) · References 72 publications
“…Our results are in line with previous fMRI MVPA studies demonstrating above‐chance expression decoding in all face‐selective regions (Wegrzyn et al, 2015) and particularly in the FFA, STS, and amygdala, in the absence of univariate effects (Zhang et al, 2016a). Notably, the latter found that the STS could classify neutral and emotional faces above chance, whereas here we show an advantage for angry expressions.…”
Section: Discussion (supporting)
confidence: 93%
“…To move beyond the limitations of sensor‐space spatial inference in our MVPA analysis (including concerns of signal leakage, head motion and inter‐individual variability; Zhang et al, 2016), the data were projected into source space using the linearly constrained minimum variance (LCMV) beamformer (Hillebrand et al, 2005; Van Veen, van Drongelen, Yuchtman, & Suzuki, 1997). This approach combines the forward model and the data covariance matrix to construct an adaptive spatial filter.…”
Section: Methods (mentioning)
confidence: 99%
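The LCMV beamformer mentioned in the statement above combines the forward model and the data covariance matrix into an adaptive spatial filter. For a single source with lead-field column L and sensor covariance C, the standard unit-gain LCMV weights are w = C⁻¹L (LᵀC⁻¹L)⁻¹. A minimal numerical sketch, with synthetic data and assumed dimensions:

```python
# Sketch of LCMV beamformer weights: w = C^{-1} L (L^T C^{-1} L)^{-1}.
# Sensor count, data, and lead field are all synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_sensors = 8
data = rng.normal(size=(n_sensors, 500))     # hypothetical sensor time series
C = np.cov(data)                             # data covariance matrix
L = rng.normal(size=(n_sensors, 1))          # forward-model (lead-field) column

Cinv = np.linalg.inv(C)
w = Cinv @ L @ np.linalg.inv(L.T @ Cinv @ L)  # adaptive spatial filter

# Unit-gain constraint: the filter passes the modeled source with gain 1
# while minimizing output variance from other sources.
print(float(w.T @ L))                         # ≈ 1.0
```

Projecting sensor data through such filters (one per source location) yields the source-space signals on which the cited MVPA was run.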
“…One main feature in these models is the separation of the processing of changeable and invariant aspects. Changeable aspects (or motion: Bernstein & Yovel, 2015), such as expressions, are mainly processed in the dorsal stream, especially in the superior temporal sulcus (STS; Greening, Mitchell, & Smith, 2018; Said, Haxby, & Todorov, 2011; Zhang et al, 2016). In contrast, invariant aspects, such as identity, are processed in the ventral stream from part-based processing in the occipital face area (OFA; Atkinson & Adolphs, 2011; Henriksson, Mur, & Kriegeskorte, 2015; Pitcher, Walsh, & Duchaine, 2011), to the fusiform face area (FFA; Anzellotti, Fairhall, & Caramazza, 2014; Carlin & Kriegeskorte, 2017; Dobs, Schultz, Bülthoff, & Gardner, 2018; Kanwisher & Yovel, 2006), and finally to highest-level, viewpoint-invariant processing in the ventral anterior temporal lobe (vATL; Anzellotti & Caramazza, 2016; Anzellotti et al, 2014; Collins & Olson, 2015; Kriegeskorte, Formisano, Sorger, & Goebel, 2007).…”
Section: Introduction (mentioning)
confidence: 99%
“…Recently, a combination of machine‐learning algorithms and fMRI data was used to decode face‐selective regions. The study showed that only the amygdala and the posterior superior temporal sulcus (STS) accurately discriminated between neutral faces and emotional faces (Zhang, Japee, Nolan, Chu, Liu, & Ungerleider, 2016).…”
Section: Introduction (mentioning)
confidence: 99%