2012
DOI: 10.1002/hbm.22128
Internal representations for face detection: An application of noise‐based image classification to BOLD responses

Abstract: What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construc…

Cited by 16 publications (12 citation statements)
References 89 publications
“…Interestingly, the FFA did not support similar results. However, recent work has shown that the FFA is particularly sensitive to templates driving face detection (36) and can even support the visual reconstruction of such templates (27). Thus, the current results agree with the involvement of the FFA primarily in face detection and, only to a lesser extent, in identification (37,38).…”
Section: Discussion (supporting)
confidence: 81%
“…First, for each dimension, we subtracted each corresponding template from its counterpart, thereby obtaining another template akin to a classification image (CI) (26)(27)(28), that is, a linear estimate of the image-based template that best accounts for identity-related scores along a given dimension (Methods). Then, this template was assessed pixel-by-pixel with respect to a randomly generated distribution of templates (i.e., by permuting the scores associated with facial identities) to reveal pixel values lower or higher than chance (two-tailed permutation test; FDR correction across pixels, q < 0.05).…”
Section: Derivation Of Facial Features Underlying Face Space (mentioning)
confidence: 99%
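The procedure quoted in the statement above (derive a classification image, then test it pixel-by-pixel against a null distribution built by permuting the scores) can be sketched as follows. This is a minimal illustration, not the cited authors' code: the function names, array shapes, and score-weighted-sum form of the CI are assumptions, and the FDR step is only noted in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

def classification_image(images, scores):
    """Linear CI estimate: score-weighted sum of images (scores mean-centered)."""
    scores = scores - scores.mean()
    return np.tensordot(scores, images, axes=1)  # shape: image shape

def permutation_test(images, scores, n_perm=1000):
    """Pixel-wise two-tailed permutation test: compare the observed CI
    against CIs computed from randomly permuted scores."""
    ci = classification_image(images, scores)
    null = np.stack([
        classification_image(images, rng.permutation(scores))
        for _ in range(n_perm)
    ])
    # Two-tailed p-value per pixel; in the cited work these p-values
    # would then be FDR-corrected across pixels (q < 0.05).
    p = (np.abs(null) >= np.abs(ci)).mean(axis=0)
    return ci, p
```

With synthetic data in which the scores track one pixel, the CI peaks at that pixel and its p-value falls well below those of noise pixels.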
“…However, in the strategic attention task, we observed that the pars orbitalis, superior parietal lobule and the cuneus formed part of the temporal core. Another noteworthy finding is the stable identification of the fusiform gyrus as a temporal core region across time scales in the recognition memory task with faces, a region well known for its role in face detection (Kanwisher et al., 1997; Nestor et al., 2013). These results suggest that while some regions are engaged in task-general dynamic roles, other regions are engaged in task-specific dynamic roles as either flexible periphery areas or stable core areas, respectively (Fedorenko and Thompson-Schill, 2014).…”
Section: Results (mentioning)
confidence: 93%
“…It could be that neutral faces are more potent than angry ones because they contain more canonical or diagnostic facial features (Guo & Shaw, 2015; Nestor, Vettel, & Tarr, 2013). Sets of facial features that are seen more frequently are encoded more robustly, and therefore could be more diagnostic for face detection (Nestor et al., 2013). Stronger capture by neutral faces than by angry ones may also suggest avoidance.…”
Section: Discussion (mentioning)
confidence: 99%
“…Oculomotor capture must also be partly driven by low-level visual features though, rather than by affective content (or lack thereof), because neutral inverted faces still captured the eyes significantly more often than either butterflies or angry faces (see also Bindemann & Burton, 2008; Laidlaw et al., 2015). It could be that neutral faces are more potent than angry ones because they contain more canonical or diagnostic facial features (Guo & Shaw, 2015; Nestor, Vettel, & Tarr, 2013). Sets of facial features that are seen more frequently are encoded more robustly, and therefore could be more diagnostic for face detection (Nestor et al., 2013).…”
Section: Discussion (mentioning)
confidence: 99%